Commit Graph

63 Commits

Author SHA1 Message Date
thorpej b193480908 Add extensible malloc types, adapted from FreeBSD. This turns
malloc types into a structure, a pointer to which is passed around,
instead of an int constant.  Allow the limit to be adjusted when the
malloc type is defined, or with a function call, as suggested by
Jonathan Stone.
2003-02-01 06:23:35 +00:00
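
A minimal sketch of the resulting usage, assuming the FreeBSD-style
MALLOC_DEFINE() macro this describes; M_FROBOZZ and struct frobozz are
hypothetical:

    #include <sys/malloc.h>

    /* Hypothetical data and its malloc type (a struct, not an int). */
    struct frobozz { int f_dummy; };
    MALLOC_DEFINE(M_FROBOZZ, "frobozz", "frobozz state structures");

    void
    frobozz_create(void)
    {
        struct frobozz *f;

        /* The type argument is now a pointer to the type structure. */
        f = malloc(sizeof(*f), M_FROBOZZ, M_WAITOK);
        free(f, M_FROBOZZ);
    }
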
thorpej e5e7fae215 ANSI'ify. 2003-01-31 04:55:52 +00:00
thorpej 71404bb533 Don't include <sys/map.h>. 2002-09-25 22:21:01 +00:00
thorpej 10c252ba47 Changes to allow the IPv4 and IPv6 layers to align headers themselves,
as necessary:
* Implement a new mbuf utility routine, m_copyup(), which is like
  m_pullup(), except that it always prepends and copies, rather
  than only doing so if the desired length is larger than m->m_len.
  m_copyup() also allows an offset into the destination mbuf, which
  allows space for packet headers, in the forwarding case.
* Add *_HDR_ALIGNED_P() macros for IP, IPv6, ICMP, and IGMP.  These
  macros expand to 1 if __NO_STRICT_ALIGNMENT is defined, so that
  architectures which do not have strict alignment constraints don't
  pay for the test or visit the new align-if-needed path.
* Use the new macros to check if a header needs to be aligned, or to
  assert that it already is, as appropriate.

Note: This code is still somewhat experimental.  However, the new
code path won't be visited if individual device drivers continue
to guarantee that packets are delivered to layer 3 already properly
aligned (rules that are already in effect).
2002-06-30 22:40:32 +00:00
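
A sketch of how a layer-3 input path might use the new routine and macro
(max_linkhdr is the usual kernel global; header locations assumed):

    #include <sys/mbuf.h>
    #include <netinet/in.h>
    #include <netinet/ip.h>

    /* Make sure the IP header is aligned before layer 3 dereferences it. */
    static struct mbuf *
    ip_align(struct mbuf *m)
    {
        /* Expands to 1 where __NO_STRICT_ALIGNMENT is defined. */
        if (IP_HDR_ALIGNED_P(mtod(m, void *)))
            return m;
        /* Prepend-and-copy, leaving room for link-layer headers. */
        return m_copyup(m, sizeof(struct ip), (max_linkhdr + 3) & ~3);
    }
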
thorpej e21319b482 Make mbpool and mclpool use the new drain hook facility. Adjust
m_reclaim() to match the drain hook signature.  This allows us to
delete m_retry() and m_retryhdr(), as the pool allocator will now
perform the reclamation step for us.

From art@openbsd.org.
2002-03-09 01:46:32 +00:00
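
The shape of the new arrangement, as a sketch (signatures assumed from the
description above):

    #include <sys/pool.h>
    #include <sys/mbuf.h>

    extern struct pool mbpool, mclpool;

    /* m_reclaim() now matches the drain hook signature. */
    void m_reclaim(void *arg, int flags);

    void
    mb_drain_setup(void)
    {
        /* The pool invokes the hook itself when allocation fails,
         * so m_retry()/m_retryhdr() are no longer needed. */
        pool_set_drain_hook(&mbpool, m_reclaim, NULL);
        pool_set_drain_hook(&mclpool, m_reclaim, NULL);
    }
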
thorpej a180cee23b Pool deals fairly well with physical memory shortage, but it doesn't
deal with shortages of the VM maps where the backing pages are mapped
(usually kmem_map).  Try to deal with this:

* Group all information about the backend allocator for a pool in a
  separate structure.  The pool references this structure, rather than
  the individual fields.
* Change the pool_init() API accordingly, and adjust all callers.
* Link all pools using the same backend allocator on a list.
* The backend allocator is responsible for waiting for physical memory
  to become available, but will still fail if it cannot allocate KVA
  space for the pages.  If this happens, carefully drain all pools using
  the same backend allocator, so that some KVA space can be freed.
* Change pool_reclaim() to indicate if it actually succeeded in freeing
  some pages, and use that information to make draining easier and more
  efficient.
* Get rid of PR_URGENT.  There was only one use of it, and it could be
  dealt with by the caller.

From art@openbsd.org.
2002-03-08 20:48:27 +00:00
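
A sketch of the grouped back-end allocator this describes (field and
function names assumed; "myitem" is hypothetical):

    #include <sys/pool.h>

    struct myitem { int i_dummy; };

    /* The back-end page allocator, grouped into one structure that
     * several pools can share. */
    static void *mypl_alloc(struct pool *, int);
    static void  mypl_free(struct pool *, void *);

    static struct pool_allocator mypl_allocator = {
        mypl_alloc,     /* pa_alloc */
        mypl_free,      /* pa_free */
        0,              /* pa_pagesz: 0 = default */
    };

    static struct pool mypool;

    void
    mypool_init(void)
    {
        /* pool_init() now takes the allocator structure instead of
         * separate alloc/free functions. */
        pool_init(&mypool, sizeof(struct myitem), 0, 0, 0,
            "mypl", &mypl_allocator);
    }
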
thorpej daaeb3910f const char *mclpool_warnmsg -> const char mclpool_warnmsg[]
Noted by Matt Thomas.
2002-02-12 00:52:33 +00:00
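
For reference, the difference: the array form drops a level of indirection
and leaves nothing writable:

    const char *warnmsg_p = "...";  /* const chars, but the pointer itself
                                       is a writable global */
    const char warnmsg_a[] = "..."; /* the string is the object; it can live
                                       entirely in read-only memory */
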
lukem adc783d537 add RCSIDs 2001-11-12 15:25:01 +00:00
simonb 5f717f7c33 Don't need to include <uvm/uvm_extern.h> just to include <sys/sysctl.h>
anymore.
2001-10-29 07:02:30 +00:00
chs 64c6d1d2dc a whole bunch of changes to improve performance and robustness under load:
- remove special treatment of pager_map mappings in pmaps.  this is
   required now, since I've removed the globals that expose the address range.
   pager_map now uses pmap_kenter_pa() instead of pmap_enter(), so there's
   no longer any need to special-case it.
 - eliminate struct uvm_vnode by moving its fields into struct vnode.
 - rewrite the pageout path.  the pager is now responsible for handling the
   high-level requests instead of only getting control after a bunch of work
   has already been done on its behalf.  this will allow us to UBCify LFS,
   which needs tighter control over its pages than other filesystems do.
   writing a page to disk no longer requires making it read-only, which
   allows us to write wired pages without causing all kinds of havoc.
 - use a new PG_PAGEOUT flag to indicate that a page should be freed
   on behalf of the pagedaemon when it's unlocked.  this flag is very similar
   to PG_RELEASED, but unlike PG_RELEASED, PG_PAGEOUT can be cleared if the
   pageout fails due to, e.g., an indirect-block buffer being locked.
   this allows us to remove the "version" field from struct vm_page,
   and together with shrinking "loan_count" from 32 bits to 16,
   struct vm_page is now 4 bytes smaller.
 - no longer use PG_RELEASED for swap-backed pages.  if the page is busy
   because it's being paged out, we can't release the swap slot to be
   reallocated until that write is complete, but unlike with vnodes we
   don't keep a count of in-progress writes so there's no good way to
   know when the write is done.  instead, when we need to free a busy
   swap-backed page, just sleep until we can get it busy ourselves.
 - implement a fast-path for extending writes which allows us to avoid
   zeroing new pages.  this substantially reduces cpu usage.
 - encapsulate the data used by the genfs code in a struct genfs_node,
   which must be the first element of the filesystem-specific vnode data
   for filesystems which use genfs_{get,put}pages().
 - eliminate many of the UVM pagerops, since they aren't needed anymore
   now that the pager "put" operation is a higher-level operation.
 - enhance the genfs code to allow NFS to use the genfs_{get,put}pages
   routines instead of a modified copy.
 - clean up struct vnode by removing all the fields that used to be used by
   the vfs_cluster.c code (which we don't use anymore with UBC).
 - remove kmem_object and mb_object since they were useless.
   instead of allocating pages to these objects, we now just allocate
   pages with no object.  such pages are mapped in the kernel until they
   are freed, so we can use the mapping to find the page to free it.
   this allows us to remove splvm() protection in several places.

The sum of all these changes improves write throughput on my
decstation 5000/200 to within 1% of the rate of NetBSD 1.5
and reduces the elapsed time for "make release" of a NetBSD 1.5
source tree on my 128MB pc to 10% less than a 1.5 kernel took.
2001-09-15 20:36:31 +00:00
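
The genfs_node embedding rule mentioned above, sketched for a hypothetical
filesystem (header location assumed):

    #include <miscfs/genfs/genfs_node.h>

    /* Per-vnode data for a filesystem using genfs_{get,put}pages():
     * the genfs_node must be the first member, so the genfs code can
     * treat v_data as a struct genfs_node *. */
    struct myfs_node {
        struct genfs_node mn_gnode;     /* must come first */
        /* ... filesystem-specific fields follow ... */
    };
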
thorpej fcc2e4e5f6 Use pool_cache_*() for mbufs and clusters. While we don't use the
ctor/dtor feature, it's still faster to allocate from the cache groups
than it is from the pool (cache groups are analogous to "magazines"
in the Solaris SLAB allocator).
2001-07-26 19:05:04 +00:00
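
A sketch of the pool_cache API of this era, layered over the existing mbuf
pool (NULL is passed for the unused ctor/dtor):

    #include <sys/pool.h>
    #include <sys/mbuf.h>

    extern struct pool mbpool;
    static struct pool_cache mbpool_cache;

    void
    mb_cache_setup(void)
    {
        /* Layer a cache on the pool; no constructor or destructor. */
        pool_cache_init(&mbpool_cache, &mbpool, NULL, NULL, NULL);
    }

    struct mbuf *
    mb_get(void)
    {
        /* Satisfied from cache groups ("magazines") when possible,
         * falling back to the pool proper. */
        return pool_cache_get(&mbpool_cache, PR_NOWAIT);
    }
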
thorpej b5104c1ca5 Change some low-hanging splimp() calls to splvm(). 2001-01-14 02:06:21 +00:00
itojun 68f0fe3840 make sure every m_aux will be freed.
there are direct uses of MFREE() in sys/kern.
(we have experienced no memory leak so far, but if we use m_aux for other
purposes, we will need this change)
2000-11-14 20:05:28 +00:00
itojun f5fa53578a repair m_dup(). specifically, it is now safe against non-MCLBYTES cluster
mbufs.  no one seems to be using this function at the moment.
2000-08-18 16:19:22 +00:00
itojun 243eebc256 disable m_dup(), as it makes a false assumption about cluster mbufs
and is unsafe (does not do the right thing).
2000-08-18 14:23:48 +00:00
itojun 1905ac079e add a comment about a false assumption made by m_dup() 2000-08-18 14:12:47 +00:00
mrg 32aa199ccf remove include of <vm/vm.h> 2000-06-27 17:41:07 +00:00
mrg 2f159a1bac remove/move more mach vm header files:
<vm/pglist.h> -> <uvm/uvm_pglist.h>
	<vm/vm_inherit.h> -> <uvm/uvm_inherit.h>
	<vm/vm_kern.h> -> into <uvm/uvm_extern.h>
	<vm/vm_object.h> -> nothing
	<vm/vm_pager.h> -> into <uvm/uvm_pager.h>

also includes a bunch of <vm/vm_page.h> include removals (due to redundancy
with <vm/vm.h>), and a scattering of other similar headers.
2000-06-26 14:20:25 +00:00
itojun 04ac848d6f introduce m->m_pkthdr.aux to hold arbitrary data which needs to be
passed between protocol handlers.

ipsec socket pointers, ipsec decryption/auth information, and tunnel
decapsulation information are what I have in mind - there can be several
other usages.  at this moment, we use this for ipsec socket pointer
passing.  this will avoid reuse of m->m_pkthdr.rcvif in ipsec code.

due to the change, MHLEN will be decreased by sizeof(void *) - for example,
for i386, MHLEN was 100 bytes, but is now 96 bytes.
we may want to increase MSIZE from 128 to 256 for some of our architectures.

take caution if you use it for keeping some data item for a long period
of time - use extra caution with M_PREPEND() or m_adj(), as they may result
in loss of the m->m_pkthdr.aux pointer (and an mbuf leak).

this will bump kernel version.

(as discussed in tech-net, tested in kame tree)
2000-03-01 12:49:27 +00:00
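
The MHLEN arithmetic behind the size change, using the i386 numbers above:

    /* MHLEN is what remains of an mbuf after the headers: */
    #define MHLEN  (MSIZE - sizeof(struct m_hdr) - sizeof(struct pkthdr))

    /*
     * With MSIZE = 128 on i386, MHLEN was 100 bytes.  Growing struct
     * pkthdr by the aux pointer (sizeof(void *) = 4 on i386) shrinks
     * MHLEN to 96 bytes -- hence the kernel version bump.
     */
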
itojun b7f47adef9 add mbuf deep-copy function, m_dup().
NOTE: if you use m_dup(), your additional kernel code can become
incompatible with 4.xBSD or other *BSD.
1999-10-27 14:23:26 +00:00
thorpej 428443a130 Add some more diagnostic information to the 3 different `panic("m_copym")'
calls.
1999-08-05 02:24:29 +00:00
thorpej 3d23eb3ce3 More improvements to mbuf and mbuf cluster allocation:
- Initialize mbpool and mclpool with msize and mclbytes, respectively,
so that those values may be patched and have an actual effect on the
next system reboot.

- Set low water marks on mbpool (default: 16) and mclpool (default: 8).
This should be of great help for diskless systems, which need to allocate
mbufs in order to clean dirty pages; the low water marks increase the
chances of this being possible to do in memory starvation situations.

- Add support for getting/setting some mbuf-related parameters via sysctl.
* msize and mclbytes (read-only)
* nmbclusters (read-only unless the platform has direct-mapped pool pages,
in which case the value can be increased).
* mblowat and mcllowat (read/write)
1999-04-26 22:04:28 +00:00
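
A userland sketch of reading one of the new values; the MIB names are
assumed to live under a kern.mbuf node:

    #include <sys/param.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
        int mib[3] = { CTL_KERN, KERN_MBUF, MBUF_NMBCLUSTERS };
        int nmb;
        size_t len = sizeof(nmb);

        /* Read-only unless pool pages are direct-mapped. */
        if (sysctl(mib, 3, &nmb, &len, NULL, 0) == -1)
            return 1;
        printf("nmbclusters = %d\n", nmb);
        return 0;
    }
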
simonb a560bdeeec Use the nmbclusters variable and not the NMBCLUSTERS constant when setting
the mclpool hardlimit.
1999-04-25 03:03:03 +00:00
thorpej 4fd2edfbe8 mbinit() can now allocate memory. Update a comment accordingly. 1999-04-01 00:23:25 +00:00
thorpej 98d006d2c6 Set a hard limit (rather than an advisory high water mark for pages) of
NMBCLUSTERS for the mbuf cluster pool.  On platforms which use direct-mapped
segments for pool pages (MIPS and Alpha), this makes NMBCLUSTERS actually
meaningful (such ports don't even allocate mb_map, as it is not used to
map mbuf cluster pages).

Improve the message, which is logged at a maximum rate of once per second.  The
new message: "WARNING: mclpool limit reached; increase NMBCLUSTERS".

In the back-end pool page allocator, remove the message about mb_map
being full.  The message was not necessarily correct as the allocator
may have been starved for pages, rather than for space in the map.  Also,
the hard limit on the mbuf cluster pool will be reached before the map
fills (the last cluster will always fit into the map), so the message
is redundant.

Add a comment in mbinit() about considering setting low water marks on
the mbuf and mbuf cluster pools.
1999-03-31 01:26:40 +00:00
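
The hard-limit setup this describes, sketched (pool_sethardlimit() argument
order assumed; the final argument rate-limits the warning):

    #include <sys/pool.h>

    extern struct pool mclpool;
    extern int nmbclusters;

    void
    mb_set_limits(void)
    {
        /* A hard limit rather than an advisory high-water mark; the
         * warning is logged at most once per second, per the above. */
        pool_sethardlimit(&mclpool, nmbclusters,
            "WARNING: mclpool limit reached; increase NMBCLUSTERS", 1);
    }
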
mrg d2397ac5f7 completely remove Mach VM support. all that is left is the
header files, as UVM still uses (most of) these.
1999-03-24 05:50:49 +00:00
thorpej 845b609f97 Set the high water mark on the mbuf cluster pool to NMBCLUSTERS. 1999-03-23 02:51:27 +00:00
thorpej f2a91c9b91 Put back the code to log `mb_map full' that was lost when mbuf clusters
were converted to use the pool allocator.
1999-03-22 22:06:58 +00:00
thorpej e598335d1c Garbage-collect `mbutl'. 1999-01-09 22:10:12 +00:00
thorpej 99d93cb85e Garbage-collect `union mcluster' and `mclfree'. 1999-01-09 21:54:07 +00:00
thorpej 489d6d0e46 Reverse the stopgap change made in revision 1.29:
date: 1998/08/01 01:47:24;  author: thorpej;  state: Exp;  lines: +18 -8
Don't call the protocol drain routines if how == M_NOWAIT, which typically
means we're in interrupt context.  Since we can be called from a network
hardware interrupt, we could corrupt the protocol queues if we try to drain
them at that time.

The problem has been addressed by letting the drain'able protocols use
a locking scheme to prevent queue corruption.
1998-12-18 21:40:14 +00:00
thorpej 77d0a69569 Add a waitok boolean argument to the VM system's pool page allocator backend. 1998-08-28 20:05:48 +00:00
thorpej 09efdbb42d Oops, this got missed in the vm_offset_t -> vaddr_t change. 1998-08-13 19:15:33 +00:00
perry 275d1554aa Abolition of bcopy, ovbcopy, bcmp, and bzero, phase one.
bcopy(x, y, z) ->  memcpy(y, x, z)
ovbcopy(x, y, z) -> memmove(y, x, z)
   bcmp(x, y, z) ->  memcmp(x, y, z)
  bzero(x, y)    ->  memset(x, 0, y)
1998-08-04 04:03:10 +00:00
thorpej 3c658f1f41 Don't call the protocol drain routines if how == M_NOWAIT, which typically
means we're in interrupt context.  Since we can be called from a network
hardware interrupt, we could corrupt the protocol queues if we try to drain
them at that time.
1998-08-01 01:47:24 +00:00
thorpej e7521693c1 Use the pool allocator for mbufs and mbuf clusters (two pools, one for
each).  Partially from pk@netbsd.org.
1998-08-01 01:35:20 +00:00
matt b2c24dbcbe Add an if_drain to the ifnet structure (called when the system is low
on mbufs).  Add code to m_reclaim to call if_drain in each ifnet
that has one set.  Remove register from declarations.
1998-05-22 17:47:21 +00:00
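
A sketch of the reclaim loop this adds, assuming the TAILQ-based ifnet list
of the era:

    #include <net/if.h>

    void
    m_reclaim_drain_ifnets(void)
    {
        struct ifnet *ifp;

        /* Ask every interface with a drain routine to give back
         * any mbufs it is holding. */
        for (ifp = ifnet.tqh_first; ifp != NULL;
             ifp = ifp->if_list.tqe_next) {
            if (ifp->if_drain != NULL)
                (*ifp->if_drain)(ifp);
        }
    }
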
fvdl e5bc90f40c Merge with Lite2 + local changes 1998-03-01 02:20:01 +00:00
kleink 0dc9b5452d Fix variable declarations: register -> register int. 1998-02-12 20:39:41 +00:00
mrg d90485202c - add defopt's for UVM, UVMHIST and PMAP_NEW.
- remove unnecessary UVMHIST_DECL's.
1998-02-10 14:08:44 +00:00
mrg 1a8c7604f4 initial import of the new virtual memory system, UVM, into -current.
UVM was written by chuck cranor <chuck@maria.wustl.edu>, with some
minor portions derived from the old Mach code.  i provided some help
getting swap and paging working, and other bug fixes/ideas.  chuck
silvers <chuq@chuq.com> also provided some other fixes.

this is the rest of the MI portion changes.

this will be KNF'd shortly.  :-)
1998-02-05 07:59:28 +00:00
thorpej e78682a0e2 In m_split(), restore m_pkthdr.len if an error occurs. From Koji Imada,
PR #3986.
1997-11-20 04:28:18 +00:00
pk f910dab2bd Get `canwait' argument to kmem_malloc() right. 1997-06-06 10:51:49 +00:00
mycroft 103c7d360d Oops; forgot to GC the last mbuf allocated when out of clusters. 1997-04-28 17:03:58 +00:00
mycroft 9da4efe896 If we fail to allocate a cluster to hold a large packet, simply
drop it rather than using a chain of tiny mbufs.
1997-04-24 08:14:04 +00:00
thorpej 2a4b742e6a Update and enhance the mbuf code to support use of non-cluster
external storage.  Highlights:

	- additional "void *" argument to (*ext_free)(), an opaque
	  cookie for use by the free function.
	- MCLALLOC() and MCLFREE() calls are gone.  They are replaced
	  by MEXTADD() (add external storage to mbuf), MEXTMALLOC()
	  (malloc() external storage and attach to mbuf), and
	  MEXTREMOVE() (remove external storage from mbuf).
	- completely new external storage reference counting
	  mechanism; mclrefcnt[] is gone.

These changes will eventually be used to pass driver DMA buffers up
the network stack, and reduce/eliminate copies in certain code paths
(e.g. NFS writes).

From Matt Thomas <matt@3am-software.com> and myself <thorpej@nas.nasa.gov>,
with some input from Chris Demetriou <cgd@cs.cmu.edu> and review by
Charles Hannum <mycroft@mit.edu>.
1997-03-27 20:33:07 +00:00
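
A sketch of attaching a driver DMA buffer with the new macros (the exact
MEXTADD() argument list and (*ext_free)() signature are assumed from the
description; the driver names are hypothetical):

    #include <sys/mbuf.h>

    struct mydrv_softc;                 /* hypothetical driver softc */

    /* The free routine now receives the opaque cookie as an extra arg. */
    static void
    mydrv_ext_free(caddr_t buf, u_int size, void *arg)
    {
        /* struct mydrv_softc *sc = arg;
         * ... return buf to the driver's DMA buffer list ... */
    }

    struct mbuf *
    mydrv_buf_to_mbuf(struct mydrv_softc *sc, caddr_t buf, u_int len)
    {
        struct mbuf *m;

        MGETHDR(m, M_DONTWAIT, MT_DATA);
        if (m == NULL)
            return NULL;
        /* Attach the buffer as external storage: no data copy. */
        MEXTADD(m, buf, len, M_DEVBUF, mydrv_ext_free, sc);
        return m;
    }
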
gwr 6dba055937 Move `static' to the beginning of the storage class specifiers. 1996-12-18 20:24:50 +00:00
cgd 1abc77f86d if kmem_malloc() fails while trying to allocate an mbuf cluster, try
and free some space by calling m_reclaim().  Also, log the "mb_map full"
error message (at most) every 60 seconds.  The old code would log it
once over the lifetime of the system, but that's not a useful diagnostic.
(More useful is the new behaviour, which roughly indicates how often
periods of heavy load occur, without spamming the console and system
logs with messages.)
1996-06-13 17:02:23 +00:00
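
The once-per-60-seconds pattern described above, as a sketch using the
global kernel clock of the era:

    #include <sys/param.h>
    #include <sys/time.h>
    #include <sys/systm.h>
    #include <sys/syslog.h>

    extern struct timeval time;         /* kernel wall clock (of the era) */

    void
    mb_map_full_warn(void)
    {
        static long last_log;           /* tv_sec of the last message */

        if (time.tv_sec - last_log >= 60) {
            last_log = time.tv_sec;
            log(LOG_ERR, "mb_map full\n");
        }
    }
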
christos 09afd77655 More proto fixes 1996-02-09 18:59:18 +00:00
christos e630447d8c First pass at prototyping 1996-02-04 02:17:43 +00:00