Commit Graph

30 Commits

Author SHA1 Message Date
thorpej
c2ce79c0c9 In _pool_put(), panic if we're put'ing with nout == 0. This will help us
detect a little earlier if we've dup-put'd.  Otherwise, underflow occurs,
and subsequent allocations simply hang or fail (it thinks the hardlimit
has been reached).
1999-08-29 00:26:01 +00:00
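
A minimal sketch of the check described above, assuming the pool counts outstanding items in a field such as pr_nout and carries its name in pr_wchan (field names are illustrative):

	/* Sketch only: catch a duplicate pool_put() before pr_nout underflows. */
	simple_lock(&pp->pr_slock);
	if (pp->pr_nout == 0)
		panic("pool_put: %s: duplicate put, nout == 0", pp->pr_wchan);
	pp->pr_nout--;
	/* ... place the item back on the page's free list ... */
	simple_unlock(&pp->pr_slock);
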
sommerfeld
b8e4538f80 Create new pool flag PR_LIMITFAIL, indicating that even PR_WAIT
allocations should fail if the pool is at its hard limit.
Document flag in pool(9).
Use it in mbuf.h for the first allocate call for M_GET, M_GETHDR, and
MCLGET, so that m_reclaim gets called even for blocking allocations.
1999-08-05 04:00:03 +00:00
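
A sketch of the resulting allocation pattern: the first attempt carries PR_LIMITFAIL so it fails immediately at the hard limit, the caller reclaims, and only the retry blocks (the pool and reclaim names are hypothetical stand-ins for the mbuf macros):

	/* Sketch: fail fast at the hard limit, reclaim, then block if allowed. */
	item = pool_get(&somepool, PR_WAITOK | PR_LIMITFAIL);
	if (item == NULL) {
		reclaim_items();			/* e.g. m_reclaim() for mbufs */
		item = pool_get(&somepool, PR_WAITOK);	/* may now sleep */
	}
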
thorpej
cd992b17df In _pool_put(), call simple_lock_freecheck() if we're LOCKDEBUG before
we put the item on the free list.
1999-07-27 21:31:17 +00:00
pk
62cb666f4a Guard our global resource `phpool' against all interrupts. 1999-06-06 22:20:15 +00:00
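
Since phpool is the pool that page headers themselves are allocated from and may be touched from any interrupt level, a sketch of such a guard using spl protection (the exact placement is illustrative):

	/* Sketch: block all interrupts while touching the shared phpool. */
	s = splhigh();
	ph = pool_get(&phpool, flags);
	splx(s);
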
thorpej
6c37e2b392 Make sure page allocations are counted everywhere that they need to be. 1999-05-10 21:15:42 +00:00
thorpej
4b6d8943c2 Improve the pool allocator's diagnostic helpers, adding the ability to
log on a per-pool basis, reentrancy checking, and dumping various pool
information from DDB.
1999-05-10 21:13:05 +00:00
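
A sketch of what one per-pool log record might contain for this kind of diagnostics (the structure and member names are illustrative, not the actual layout):

	/* Sketch: one ring-buffer entry recording a get or put on this pool. */
	struct pool_log {
		const char	*pl_file;	/* caller's __FILE__ */
		long		pl_line;	/* caller's __LINE__ */
		int		pl_action;	/* get or put */
		void		*pl_addr;	/* item involved */
	};
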
scottr
3d5c979e43 Pull in opt_poollog.h for POOL_LOGSIZE. 1999-04-29 17:47:19 +00:00
thorpej
b2741be06e More locking protocol fixes. Protect pool_head with a spin lock (statically
initialized).  This lock also protects the "next drain candidate" pointer.

XXX There is still one locking protocol problem, which should not be
a problem in practice, but is still marked as an issue in the code anyhow.
1999-04-06 23:32:44 +00:00
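
A sketch of the arrangement, in the simple_lock style of the kernel at the time (the initializer macro and link-field name are assumptions):

	/* Sketch: one statically initialized spin lock covers the global list
	 * of pools and the "next drain candidate" pointer. */
	struct simplelock pool_head_slock = SIMPLELOCK_INITIALIZER;
	struct pool *drainpp;				/* next drain candidate */

	/* e.g. when linking a new pool into the list: */
	simple_lock(&pool_head_slock);
	TAILQ_INSERT_TAIL(&pool_head, pp, pr_poollist);
	simple_unlock(&pool_head_slock);
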
chs
c109816333 Undo the part of the last revision about pr_rmpage() referencing
a data structure after it was freed.  This wasn't actually a problem,
and the change caused the wrong pool_item_header to be freed
in the non-PR_PHINPAGE case.
1999-04-04 17:17:31 +00:00
thorpej
278e7ae222 Yet more fixes to the pool allocator:
- Protect userspace from unnecessary header inclusions (as noted on
current-users).

- Some const poisoning.

- GREATLY simplify the locking protocol, and fix potential deadlock
scenarios.  In particular, assume that the back-end page allocator
provides its own locking mechanism (this is currently true for all
such allocators in the NetBSD kernel).  Doing so allows us to simply
use one spin lock for serialized access to all r/w members of the pool
descriptor.  The spin lock is released before calling the back-end
allocator, and re-acquired upon return from it.

- Fix a problem in pr_rmpage() where a data structure was referenced
after it was freed.

- Minor tweak to page management.  Migrate both idle and empty pages
to the end of the page list.  As soon as a page becomes un-empty
(by a pool_put()), place it at the head of the page list, and set
curpage to point to it.  This reduces fragmentation as well as the
time required to find a non-empty page as soon as curpage becomes
empty again.

- Use mono_time throughout, and protect access to it w/ splclock().

- In pool_reclaim(), if freeing an idle page would reduce the number
of allocatable items to below the low water mark, don't.
1999-03-31 23:23:47 +00:00
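
The heart of the simplified protocol can be sketched as follows: one spin lock protects every read/write member of the pool descriptor, and it is released around the call into the back-end page allocator, which is assumed to provide its own locking (the hook member names are illustrative):

	/* Sketch: never hold the pool's spin lock across the back-end call. */
	simple_unlock(&pp->pr_slock);
	v = (*pp->pr_alloc)(pp->pr_pagesz, flags, pp->pr_mtype);
	simple_lock(&pp->pr_slock);
	if (v == NULL) {
		/* allocation failure is handled with the lock re-held */
	}
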
thorpej
d4d4e314e9 Fix several bugs/deficiencies in the pool allocator:
- Add support for hard limits, with optional rate-limited logging of
a warning message when the pool limit is reached.  (This will be used
to fix a bug in mbuf cluster allocation on the MIPS and Alpha ports.)

- Fix some locking protocol errors.  This required splitting pr_flags
into pr_flags (which is protected by the spin lock) and pr_roflags (which
are `read only' flags, set when the pool is initialized, and never changed
again; these do not need to be protected by a mutex).

- Make the low water support actually mean something.  When a low water
mark is set, add free items to the pool until the low water mark is
reached.  When an item allocation causes the number of free items to
drop below the low water mark, make the pool catch up to it.  This can
make the pool allocator more useful for several applications (e.g.
pmap `pv entry' management) and more robust for others (e.g. mbuf
and mbuf cluster allocation, so that the pagedaemon can use NFS to clean
pages on diskless systems without completely running dry on buffers to
receive packets in during extreme memory shortages).

- Add a comment where we sleep waiting for more pages for the back-end
page allocator.  Specifically, instead of sleeping potentially forever,
perhaps we should just wake up once a second to try allocating a page
again.  XXX Revisit this soon.
1999-03-31 01:14:06 +00:00
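
A sketch of the low water behaviour described above: whenever the number of free items drops below the mark, the pool grows itself back up to it (the helper and field names are hypothetical):

	/* Sketch: keep the pool stocked up to its low water mark. */
	static void
	pool_catchup(struct pool *pp)
	{
		while (pp->pr_nitems < pp->pr_minitems) {
			void *page = pool_backend_alloc(pp);	/* hypothetical */
			if (page == NULL)
				break;			/* back-end refused; retry later */
			pool_add_page(pp, page);	/* hypothetical: carve into items */
		}
	}
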
mrg
d2397ac5f7 completely remove Mach VM support. all that is left is the all the
header files as UVM still uses (most of) these.
1999-03-24 05:50:49 +00:00
thorpej
9614a68c70 Fix the order of arguments to roundup(). 1999-03-23 02:49:03 +00:00
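
For context, roundup() rounds its first argument up to a multiple of its second, so swapping the arguments compiles cleanly but rounds the wrong operand:

	/* <sys/param.h>: round x up to the next multiple of y. */
	#define	roundup(x, y)	((((x) + ((y) - 1)) / (y)) * (y))

	/* roundup(size, align) rounds size up to the alignment;
	 * roundup(align, size) instead rounds align up to a multiple of size. */
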
thorpej
e1315a2447 Make this compile with POOL_DIAGNOSTIC, and add a POOL_LOGSIZE option.
Defopt these.
1998-12-27 21:13:43 +00:00
briggs
4a01b776e5 Prototype pool_print() and pool_chk() if DEBUG.
Initialize pool hash table with PR_HASHTABSIZE (i.e., 8) LIST_INIT()s
instead of one memset().
Only check for page != ph->ph_page if PR_PHINPAGE is set (in pool_chk()).
Print pool base pointer when reporting page inconsistency in pool_chk().
1998-12-16 04:28:23 +00:00
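
A sketch of the initialization change, assuming the buckets are LIST_HEADs in an array named pr_hashtab:

	/* Sketch: initialize each page-header hash bucket individually
	 * rather than memset()ing the whole array. */
	for (i = 0; i < PR_HASHTABSIZE; i++)
		LIST_INIT(&pp->pr_hashtab[i]);
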
pk
25e37f3b97 In addition to the spinlock, use the lockmgr() to serialize access to
the back-end page allocator. This allows the back-end to sleep since we
now relinquish the spin lock after acquiring the long-term lock.
1998-09-29 18:09:29 +00:00
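
A sketch of that two-lock arrangement: the spin lock is dropped, a sleepable lockmgr() lock serializes the call into the back-end allocator, and the spin lock is retaken afterwards (the long-term lock member name is an assumption):

	/* Sketch: the long-term lock lets the back-end allocator sleep safely. */
	simple_unlock(&pp->pr_slock);
	lockmgr(&pp->pr_resourcelock, LK_EXCLUSIVE, NULL);
	v = (*pp->pr_alloc)(pp->pr_pagesz, flags, pp->pr_mtype);
	lockmgr(&pp->pr_resourcelock, LK_RELEASE, NULL);
	simple_lock(&pp->pr_slock);
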
thorpej
c0dd0b8353 Make sure the size is large enough to hold a pool_item. 1998-09-22 03:01:29 +00:00
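
The corresponding check is small: a free item is overlaid with a struct pool_item while it sits on the free list, so (sketch):

	/* Sketch: every item must be able to hold the free-list linkage. */
	if (size < sizeof(struct pool_item))
		size = sizeof(struct pool_item);
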
christos
34c5a58bb4 Make copyrights consistent; fix weird/trailing spaces add missing (c) etc. 1998-09-12 17:20:02 +00:00
thorpej
f1f6ec6afe Add an alternate pool page allocator that can be used if the pool is
never accessed in interrupt context.  In the UVM case, this uses the
kernel_map, to reduce usage of the precious kmem_map resource.
1998-08-28 21:18:37 +00:00
thorpej
77d0a69569 Add a waitok boolean argument to the VM system's pool page allocator backend. 1998-08-28 20:05:48 +00:00
eeh
a2dd74ed79 Merge paddr_t changes into the main branch. 1998-08-13 02:10:37 +00:00
perry
275d1554aa Abolition of bcopy, ovbcopy, bcmp, and bzero, phase one.
  bcopy(x, y, z)  ->  memcpy(y, x, z)
ovbcopy(x, y, z)  ->  memmove(y, x, z)
   bcmp(x, y, z)  ->  memcmp(x, y, z)
  bzero(x, y)     ->  memset(x, 0, y)
1998-08-04 04:03:10 +00:00
thorpej
85b7cfc8c3 Make sure we initialize pr_nidle. 1998-08-02 19:07:47 +00:00
thorpej
6f739e1a66 Fix a braino in the idle page instrumentation. 1998-08-02 04:34:46 +00:00
thorpej
7c61b8cdd8 Instrument "idle pages" (i.e. pages which have no items allocated from
them, and could thus be freed back to the system).
1998-08-01 23:44:20 +00:00
thorpej
fe7696eacb Un-static pool_head; vmstat wants to find it. 1998-07-31 21:55:09 +00:00
thorpej
fc4828b0b4 A few small changes to how pool pages are allocated/freed:
- If either an alloc or release function is provided, make sure both are
  provided, otherwise panic, as this is a fatal error.
- If using the default allocator, default the pool pagesz to PAGE_SIZE,
  since that is the granularity of the default allocator's mechanism.
- In the default allocator, use new functions:
	uvm_km_alloc_poolpage()/uvm_km_free_poolpage(), or
	kmem_alloc_poolpage()/kmem_free_poolpage()
  rather than doing it here.  These functions may use pmap hooks to
  provide alternate methods of mapping pool pages.
1998-07-24 20:19:23 +00:00
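
A sketch of those rules at pool_init() time (the helper names follow the message above; the surrounding checks are illustrative):

	/* Sketch: allocator hooks come in pairs, and default to page-sized
	 * pages obtained via the new pool-page helpers. */
	if ((alloc == NULL) != (release == NULL))
		panic("pool_init: must supply both alloc and release, or neither");

	if (alloc == NULL) {
		alloc = pool_page_alloc;	/* uvm_km_alloc_poolpage() */
		release = pool_page_free;	/* uvm_km_free_poolpage() */
		pagesz = PAGE_SIZE;
	}
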
pk
e32923a128 Re-vamped pool manager.
	* support for customized memory supplier
	* automatic page reclaim by VM system
	* time-based hysteresis
	* cache coloring (after Bonwick's "slabs")
1998-07-23 20:34:00 +00:00
pk
201f7cf6b4 Add option to use "static" storage provided by the caller.
From Matthias Drochner.
1998-02-19 23:51:48 +00:00
pk
327c0046f9 Memory pool resource utility. 1997-12-15 11:14:57 +00:00