m_reclaim() to match the drain hook signature. This allows us to
delete m_retry() and m_retryhdr(), as the pool allocator will now
perform the reclamation step for us.
From art@openbsd.org.
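A minimal sketch of the resulting arrangement, assuming the pool_set_drain_hook(9)
interface with its void (*)(void *, int) hook signature, and that mbpool/mclpool
are visible via <sys/mbuf.h>; the setup helper below is hypothetical:

    #include <sys/pool.h>
    #include <sys/mbuf.h>

    /* m_reclaim() now takes the pool drain hook's argument list. */
    void	m_reclaim(void *arg, int flags);

    void
    mbuf_drain_setup(void)		/* hypothetical helper */
    {
    	/*
    	 * The pool allocator invokes the hook itself when it runs
    	 * short, so the m_retry()/m_retryhdr() retry path is gone.
    	 */
    	pool_set_drain_hook(&mbpool, m_reclaim, NULL);
    	pool_set_drain_hook(&mclpool, m_reclaim, NULL);
    }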
deal with shortages of the VM maps where the backing pages are mapped
(usually kmem_map). Try to deal with this:
* Group all information about the backend allocator for a pool in a
separate structure. The pool references this structure, rather than
the individual fields.
* Change the pool_init() API accordingly, and adjust all callers.
* Link all pools using the same backend allocator on a list.
* The backend allocator is responsible for waiting for physical memory
to become available, but will still fail if it cannot allocate KVA
space for the pages. If this happens, carefully drain all pools using
the same backend allocator, so that some KVA space can be freed (see
the sketch after this entry).
* Change pool_reclaim() to indicate if it actually succeeded in freeing
some pages, and use that information to make draining easier and more
efficient.
* Get rid of PR_URGENT. There was only one use of it, and it could be
dealt with by the caller.
From art@openbsd.org.
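A sketch of the arrangement described above; the field and function names
here are illustrative assumptions, not necessarily the committed ones:

    #include <sys/pool.h>
    #include <sys/queue.h>

    /* Per-backend state shared by all pools that use it. */
    struct pool_allocator {
    	void	*(*pa_alloc)(struct pool *, int);
    	void	 (*pa_free)(struct pool *, void *);
    	TAILQ_HEAD(, pool) pa_list;	/* pools using this backend */
    };

    /*
     * On KVA-space failure, drain sibling pools until a page can be
     * allocated; pool_reclaim() now reports whether it freed anything.
     */
    static void *
    pool_allocator_alloc(struct pool *pp, int flags)
    {
    	struct pool_allocator *pa = pp->pr_alloc;
    	struct pool *pp2;
    	void *v;

    	while ((v = (*pa->pa_alloc)(pp, flags)) == NULL) {
    		TAILQ_FOREACH(pp2, &pa->pa_list, pr_alloc_list)
    			if (pool_reclaim(pp2))
    				break;
    		if (pp2 == NULL)
    			break;		/* nothing reclaimed; give up */
    	}
    	return (v);
    }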
- remove special treatment of pager_map mappings in pmaps. this is
required now, since I've removed the globals that expose the address range.
pager_map now uses pmap_kenter_pa() instead of pmap_enter() (see the
sketch after this entry), so there's no longer any need to special-case it.
- eliminate struct uvm_vnode by moving its fields into struct vnode.
- rewrite the pageout path. the pager is now responsible for handling the
high-level requests instead of only getting control after a bunch of work
has already been done on its behalf. this will allow us to UBCify LFS,
which needs tighter control over its pages than other filesystems do.
writing a page to disk no longer requires making it read-only, which
allows us to write wired pages without causing all kinds of havoc.
- use a new PG_PAGEOUT flag to indicate that a page should be freed
on behalf of the pagedaemon when it's unlocked. this flag is very similar
to PG_RELEASED, but unlike PG_RELEASED, PG_PAGEOUT can be cleared if the
pageout fails due to, e.g., an indirect-block buffer being locked.
this allows us to remove the "version" field from struct vm_page,
and together with shrinking "loan_count" from 32 bits to 16,
struct vm_page is now 4 bytes smaller.
- no longer use PG_RELEASED for swap-backed pages. if the page is busy
because it's being paged out, we can't release the swap slot to be
reallocated until that write is complete, but unlike with vnodes we
don't keep a count of in-progress writes so there's no good way to
know when the write is done. instead, when we need to free a busy
swap-backed page, just sleep until we can get it busy ourselves.
- implement a fast-path for extending writes which allows us to avoid
zeroing new pages. this substantially reduces cpu usage.
- encapsulate the data used by the genfs code in a struct genfs_node,
which must be the first element of the filesystem-specific vnode data
for filesystems which use genfs_{get,put}pages().
- eliminate many of the UVM pagerops, since they aren't needed anymore
now that the pager "put" operation is a higher-level operation.
- enhance the genfs code to allow NFS to use genfs_{get,put}pages()
instead of a modified copy.
- clean up struct vnode by removing all the fields that used to be used by
the vfs_cluster.c code (which we don't use anymore with UBC).
- remove kmem_object and mb_object since they were useless.
instead of allocating pages to these objects, we now just allocate
pages with no object. such pages are mapped in the kernel until they
are freed, so we can use the mapping to find the page to free it.
this allows us to remove splvm() protection in several places.
The sum of all these changes improves write throughput on my
decstation 5000/200 to within 1% of the rate of NetBSD 1.5
and reduces the elapsed time for "make release" of a NetBSD 1.5
source tree on my 128MB pc to 10% less than a 1.5 kernel took.
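As an example of the pager_map change above, a sketch of the mapping-in
pattern; pmap_kenter_pa()/pmap_kremove() are the standard unmanaged-mapping
primitives, while the KVA-allocation helper named here is hypothetical:

    #include <uvm/uvm.h>

    /*
     * Map pages into a pager window with pmap_kenter_pa() rather than
     * pmap_enter(), so pmaps need no pager_map special case.
     */
    static vaddr_t
    pager_win_map(struct vm_page **pgs, int npages)
    {
    	vaddr_t kva;
    	int i;

    	kva = pager_win_alloc(npages);	/* hypothetical KVA grab */
    	for (i = 0; i < npages; i++)
    		pmap_kenter_pa(kva + ((vaddr_t)i << PAGE_SHIFT),
    		    VM_PAGE_TO_PHYS(pgs[i]),
    		    VM_PROT_READ | VM_PROT_WRITE);
    	return (kva);
    }

    /*
     * ... and tear the window down with
     * pmap_kremove(kva, npages << PAGE_SHIFT) when the I/O completes.
     */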
ctor/dtor feature, it's still faster to allocate from the cache groups
than it is from the pool (cache groups are analogous to "magazines"
in the Solaris SLAB allocator).
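Usage follows the pool API closely; a sketch against the pool_cache
interface as described, where the ctor/dtor signatures shown are assumptions:

    #include <sys/pool.h>

    static struct pool foo_pool;		/* pool_init()ed elsewhere */
    static struct pool_cache foo_cache;

    /*
     * The ctor runs only when an object is taken from the pool proper;
     * objects recycled through a cache group keep their constructed
     * state, which is where the speedup comes from.
     */
    static int  foo_ctor(void *arg, void *obj, int flags);
    static void foo_dtor(void *arg, void *obj);

    void
    foo_init(void)
    {
    	pool_cache_init(&foo_cache, &foo_pool, foo_ctor, foo_dtor, NULL);
    }

    void *
    foo_get(void)
    {
    	return pool_cache_get(&foo_cache, PR_WAITOK);
    }

    void
    foo_put(void *obj)
    {
    	pool_cache_put(&foo_cache, obj);	/* back into a group */
    }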
there are direct uses of MFREE() in sys/kern.
(we have experienced no memory leaks so far, but if we use m_aux for other
purposes, we will need this change)
<vm/pglist.h> -> <uvm/uvm_pglist.h>
<vm/vm_inherit.h> -> <uvm/uvm_inherit.h>
<vm/vm_kern.h> -> into <uvm/uvm_extern.h>
<vm/vm_object.h> -> nothing
<vm/vm_pager.h> -> into <uvm/uvm_pager.h>
also includes a bunch of <vm/vm_page.h> include removals (due to redundancy
with <vm/vm.h>), and a scattering of other similar headers.
between protocol handlers.
ipsec socket pointers, ipsec decryption/auth information, and tunnel
decapsulation information come to mind; there can be several other uses.
at this moment, we use this for ipsec socket pointer passing. this will
avoid reuse of m->m_pkthdr.rcvif in ipsec code.
due to the change, MHLEN will be decreased by sizeof(void *) - for example,
for i386, MHLEN was 100 bytes, but is now 96 bytes.
we may want to increase MSIZE from 128 to 256 for some of our architectures.
take caution if you use it for keeping some data item for a long period
of time; use extra caution with M_PREPEND() or m_adj(), as they may result
in loss of the m->m_pkthdr.aux pointer (and an mbuf leak).
this will bump kernel version.
(as discussed in tech-net, tested in kame tree)
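A sketch of the defensive pattern that caution implies; this is
illustrative only, since whether a given M_PREPEND() preserves aux
depends on its header-copy path:

    #include <sys/mbuf.h>
    #include <sys/errno.h>

    /*
     * Keep m->m_pkthdr.aux alive across M_PREPEND(), which may
     * replace the header mbuf.
     */
    static int
    prepend_keep_aux(struct mbuf **mp, int hlen)
    {
    	struct mbuf *m = *mp;
    	struct mbuf *aux = m->m_pkthdr.aux;

    	M_PREPEND(m, hlen, M_DONTWAIT);
    	if (m == NULL) {
    		m_freem(aux);	/* avoid the leak described above */
    		*mp = NULL;
    		return (ENOBUFS);
    	}
    	m->m_pkthdr.aux = aux;	/* reattach to the new header */
    	*mp = m;
    	return (0);
    }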
- Initialize mbpool and mclpool with msize and mclbytes, respectively,
so that those values may be patched and have an actual effect on the
next system reboot.
- Set low water marks on mbpool (default: 16) and mclpool (default: 8).
This should be of great help for diskless systems, which need to allocate
mbufs in order to clean dirty pages; the low water marks increase the
chances of this being possible to do in memory starvation situations.
- Add support for getting/setting some mbuf-related parameters via sysctl.
* msize and mclbytes (read-only)
* nmbclusters (read-only unless the platform has direct-mapped pool pages,
in which case the value can be increased).
* mblowat and mcllowat (read/write)
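From userland, these are reachable with sysctlbyname(3); a sketch,
assuming the variables live under a kern.mbuf node:

    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
    	int mblowat, newlowat;
    	size_t len = sizeof(mblowat);

    	/* Read the current mbuf pool low water mark. */
    	if (sysctlbyname("kern.mbuf.mblowat", &mblowat, &len,
    	    NULL, 0) == -1)
    		return (1);
    	printf("mblowat = %d\n", mblowat);

    	/* mblowat is read/write, so it can be raised at run time. */
    	newlowat = mblowat * 2;
    	if (sysctlbyname("kern.mbuf.mblowat", NULL, NULL,
    	    &newlowat, sizeof(newlowat)) == -1)
    		return (1);
    	return (0);
    }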
NMBCLUSTERS for the mbuf cluster pool. On platforms which use direct-mapped
segments for pool pages (MIPS and Alpha), this makes NMBCLUSTERS actually
meaningful (such ports don't even allocate mb_map, as it is not used to
map mbuf cluster pages).
Improve the message logged at a maximum rate of once per second. The
new message: "WARNING: mclpool limit reached; increase NMBCLUSTERS".
In the back-end pool page allocator, remove the message about mb_map
being full. The message was not necessarily correct as the allocator
may have been starved for pages, rather than for space in the map. Also,
the hard limit on the mbuf cluster pool will be reached before the map
fills (the last cluster will always fit into the map), so the message
is redundant.
Add a comment in mbinit() about considering setting low water marks on
the mbuf and mbuf cluster pools.
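A sketch of how this might look in mbinit(); pool_sethardlimit(9) takes a
rate-limited warning message, though the exact rate argument here is an
assumption:

    #include <sys/pool.h>
    #include <sys/mbuf.h>

    void
    mbinit(void)		/* excerpt, illustrative */
    {
    	/*
    	 * Hard-limit the cluster pool so it never outgrows the space
    	 * reserved for it; warn at most once per second.
    	 */
    	pool_sethardlimit(&mclpool, nmbclusters,
    	    "WARNING: mclpool limit reached; increase NMBCLUSTERS", 1);

    	/*
    	 * XXX Consider setting low water marks on mbpool and mclpool
    	 * here (a later change does exactly that).
    	 */
    }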
date: 1998/08/01 01:47:24; author: thorpej; state: Exp; lines: +18 -8
Don't call the protocol drain routines if how == M_NOWAIT, which typically
means we're in interrupt context. Since we can be called from a network
hardware interrupt, we could corrupt the protocol queues if we tried to
drain them at that time.
The problem has been addressed by letting the drain'able protocols use
a locking scheme to prevent queue corruption.
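One plausible shape of the original guard; the loop is the classic 4.4BSD
drain walk, and the parameter plumbing is an assumption:

    #include <sys/mbuf.h>
    #include <sys/domain.h>
    #include <sys/protosw.h>

    void
    m_reclaim(int how)
    {
    	struct domain *dp;
    	struct protosw *pr;

    	/* Never touch protocol queues from interrupt context. */
    	if (how == M_NOWAIT)
    		return;

    	for (dp = domains; dp != NULL; dp = dp->dom_next)
    		for (pr = dp->dom_protosw;
    		    pr < dp->dom_protoswNPROTOSW; pr++)
    			if (pr->pr_drain)
    				(*pr->pr_drain)();
    }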
UVM was written by chuck cranor <chuck@maria.wustl.edu>, with some
minor portions derived from the old Mach code. i provided some help
getting swap and paging working, and other bug fixes/ideas. chuck
silvers <chuq@chuq.com> also provided some other fixes.
this is the rest of the MI portion of the changes.
this will be KNF'd shortly. :-)
external storage. Highlights:
- additional "void *" argument to (*ext_free)(), an opaque
cookie for use by the free function.
- MCLALLOC() and MCLFREE() calls are gone. They are replaced
by MEXTADD() (add external storage to mbuf), MEXTMALLOC()
(malloc() external storage and attach to mbuf), and
MEXTREMOVE() (remove external storage from mbuf).
- completely new external storage reference counting
mechanism; mclrefcnt[] is gone.
These changes will eventually be used to pass driver DMA buffers up
the network stack, and reduce/eliminate copies in certain code paths
(e.g. NFS writes).
From Matt Thomas <matt@3am-software.com> and myself <thorpej@nas.nasa.gov>,
with some input from Chris Demetriou <cgd@cs.cmu.edu> and review by
Charles Hannum <mycroft@mit.edu>.
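For example, a driver attaching its own buffer might do something like the
following sketch; the (*ext_free)() argument list shown (buffer, size,
cookie) follows the description above but is an assumption at this revision,
and the driver names are hypothetical:

    #include <sys/mbuf.h>

    struct foo_softc;				/* hypothetical driver */
    void foo_buf_recycle(struct foo_softc *, caddr_t);

    /*
     * Free routine: the void * cookie registered with MEXTADD() comes
     * back verbatim, here used to find the owning softc.
     */
    static void
    foo_ext_free(caddr_t buf, u_int size, void *arg)
    {
    	foo_buf_recycle(arg, buf);
    }

    static struct mbuf *
    foo_rx_mbuf(struct foo_softc *sc, caddr_t buf, u_int len)
    {
    	struct mbuf *m;

    	MGETHDR(m, M_DONTWAIT, MT_DATA);
    	if (m == NULL)
    		return (NULL);
    	/* Attach the driver-owned DMA buffer as external storage. */
    	MEXTADD(m, buf, len, M_DEVBUF, foo_ext_free, sc);
    	return (m);
    }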
and free some space by calling m_reclaim(). Also, log the "mb_map full"
error message (at most) every 60 seconds. The old code would log it
once over the lifetime of the system, but that's not a useful diagnostic.
(More useful is the new behaviour, which roughly indicates how often
periods of heavy load occur, without spamming the console and system
logs with messages.)
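A sketch of the rate limit, using the kernel's global time variable of
that era; the timestamp variable name below is illustrative:

    #include <sys/kernel.h>		/* for the global "time" */
    #include <sys/syslog.h>

    static struct timeval mb_map_full_last;

    static void
    mb_map_full_warn(void)
    {
    	/* Log at most once per 60 seconds. */
    	if (time.tv_sec - mb_map_full_last.tv_sec >= 60) {
    		mb_map_full_last = time;
    		log(LOG_ERR, "mb_map full\n");
    	}
    }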