Commit Graph

75 Commits

Author SHA1 Message Date
christos 4d595fd7b1 - sprinkle __unused on function decls.
- fix a couple of unused-variable bugs
- no more -Wno-unused for i386
2006-10-12 01:30:41 +00:00
yamt d447115889 make amap use kmem_alloc, rather than malloc.
(i.e. make it use kernel_map, rather than kmem_map.)
kmem_map is more restricted than kernel_map,
and there's no point for amap to use it.
2006-06-25 08:03:46 +00:00
yamt 93127a7b4c amap_splitref: assert that origref->ar_amap is initialized
by caller beforehand.
2006-04-21 14:04:45 +00:00
yamt 9040ed946b - amap_copy: take a "flags" argument instead of booleans.
- add AMAP_COPY_NOMERGE flag, and use it for uvm_map_extract.
  PR/32806 from Julio M. Merino Vidal.
2006-02-15 14:06:45 +00:00
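For context on the change above: replacing a set of boolean parameters with a single flags argument is the pattern being applied to amap_copy(). The minimal standalone sketch below shows that pattern; AMAP_COPY_NOMERGE is taken from the log, while AMAP_COPY_NOWAIT, its value, and the function body are illustrative assumptions, not the real uvm_amap code.

    /*
     * Illustrative sketch only -- not the NetBSD uvm_amap.c code.  It shows
     * the "booleans -> flags bitmask" refactoring the commit describes.
     */
    #include <stdio.h>

    #define AMAP_COPY_NOWAIT   0x01    /* assumed flag: don't sleep for memory */
    #define AMAP_COPY_NOMERGE  0x02    /* per the log: suppress entry merging */

    static void
    amap_copy_sketch(int flags)
    {
        if (flags & AMAP_COPY_NOMERGE)
            printf("caller (e.g. uvm_map_extract) asked us not to merge\n");
        else
            printf("merging allowed\n");
    }

    int
    main(void)
    {
        amap_copy_sketch(AMAP_COPY_NOMERGE);
        amap_copy_sketch(0);
        return 0;
    }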
yamt a3af4c1530 remove the following options. no objections on tech-kern@.
	UVM_PAGER_INLINE
	UVM_AMAP_INLINE
	UVM_PAGE_INLINE
	UVM_MAP_INLINE
2006-02-11 12:45:07 +00:00
yamt 651bed2a01 - uvm_fault: move common code of cases 1B and 2B to a new function.
don't attempt to allocate anons with kernel_map locked.  PR/32543.
- amap_copy: add an assertion.
2006-01-21 13:13:07 +00:00
chs 5570661cd8 in amap_alloc(), only put the amap on the list of amaps if we succeeded
in allocating it.
2006-01-18 17:03:36 +00:00
perry 0f0296d88a Remove leading __ from __(const|inline|signed|volatile) -- it is obsolete. 2005-12-24 20:45:08 +00:00
christos 95e1ffb156 merge ktrace-lwp. 2005-12-11 12:16:03 +00:00
chs 5332333501 in amap_cow_now(), handle the case where we have to sleep and some of the
already-copied pages are paged out.  anons that have already been copied
will have refcount == 1, whereas anons that still need to be copied will
have refcount > 1.  fixes PR 25392, PR 30257, PR 31924.
2005-11-06 15:57:32 +00:00
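The refcount test described in the commit above is what makes the copy loop safely restartable after sleeping: anything already copied drops to a reference count of 1 and is skipped on the next pass. The standalone sketch below illustrates only that idea; the struct and helpers are hypothetical stand-ins, not the real anon code.

    #include <stdbool.h>
    #include <stdio.h>

    struct anon_sketch {
        int refcount;           /* > 1 means still shared, still needs a copy */
        bool resident;          /* false if the page was paged out while we slept */
    };

    static bool
    copy_pass(struct anon_sketch *anons, int n)
    {
        for (int i = 0; i < n; i++) {
            if (anons[i].refcount <= 1)
                continue;               /* already copied on an earlier pass */
            if (!anons[i].resident)
                return false;           /* would sleep for page-in, caller restarts */
            anons[i].refcount = 1;      /* "copy" it */
        }
        return true;
    }

    int
    main(void)
    {
        struct anon_sketch a[3] = {{1, true}, {2, false}, {2, true}};

        while (!copy_pass(a, 3))
            a[1].resident = true;       /* pretend the page-in completed */
        printf("all anons copied\n");
        return 0;
    }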
yamt 6fbf5bf6f1 wrap swap related code by #ifdef VMSWAP. always #define VMSWAP for now. 2005-09-13 22:00:05 +00:00
yamt 38ca5312d2 revert "defflag VMSWAP" changes for now.
there seem to be far more people who don't want to edit
their kernel config files than I thought.
2005-07-31 04:04:30 +00:00
yamt 1d0891101c defflag VMSWAP. 2005-07-30 06:33:33 +00:00
thorpej e569facced Use ANSI function decls. 2005-06-27 02:19:48 +00:00
yamt 50a25acc50 (try to) merge map entries in fault handler. 2005-05-17 13:55:33 +00:00
yamt 662ada8f7a allocate anons on demand, rather than reserving a static amount of
them at boot/swapon time.
2005-05-11 13:02:25 +00:00
yamt ae24d5d705 - amap_extend: don't extend the amap beyond UVM_AMAP_LARGE.
- uvm_map_enter: if we fail to extend the amap, just give up on merging instead of
  bailing out immediately.
2005-05-05 01:58:51 +00:00
yamt a7cd6f7a0e amap_wipeout: remove a comment which is no longer true.
Despite what the comment said, I left the preempt() call in
because I can't think of any bad effects.
2005-04-06 13:58:40 +00:00
chs 6390d0aeca hack around a UVM problem that causes hangs when large processes fork.
see PR 26908 for details.
2005-01-30 17:23:05 +00:00
yamt 1207308b90 for in-kernel maps,
- allocate kva for vm_map_entry from the map itself and
  remove the static limit, MAX_KMAPENT.
- keep merged entries for later splitting to fix the allocate-to-free problem.
  PR/24039.
2005-01-01 21:00:06 +00:00
yamt 5469c2b7c1 add assertions. 2004-05-12 20:09:50 +00:00
simonb b5d0e6bf06 Initialise (most) pools from a link set instead of explicit calls
to pool_init.  Untouched pools are ones that are either in arch-specific
code or aren't initialised during initial system startup.

 Convert struct session, ucred and lockf to pools.
2004-04-25 16:42:40 +00:00
junyoung 1e2b269ded - Nuke __P().
- Drop trailing spaces.
2004-03-24 07:50:48 +00:00
thorpej b193480908 Add extensible malloc types, adapted from FreeBSD. This turns
malloc types into a structure, a pointer to which is passed around,
instead of an int constant.  Allow the limit to be adjusted when the
malloc type is defined, or with a function call, as suggested by
Jonathan Stone.
2003-02-01 06:23:35 +00:00
pk ac1bea60c1 amap_copy: remove stray amap_unlock(). 2003-01-27 22:14:48 +00:00
thorpej b78f59b443 Merge the nathanw_sa branch. 2003-01-18 08:51:40 +00:00
atatat 84a6247a30 Properly set the page reference counts at the start of the newly
allocated ppref data to zero in the case of an amap that has empty
space at the front.

Don't set anything in the ppref array if "len" is zero.

Many thanks to Sami Kantoluoto for providing gdb access to a machine
that would reliably crash with problems related to the above, and to
Stephan Thesing for corroborating that the patch properly addressed
the problem.

Note that the ar_pageoff (and related variables) types must be changed
soon.  The use of "int" here is not theoretically sufficient.
2002-12-20 18:21:13 +00:00
bouyer d986226518 Change uvm_km_kmemalloc() to accept flag UVM_KMF_NOWAIT and pass it to
uvm_map(). Change uvm_map() to honor UVM_KMF_NOWAIT. For this, change
amap_extend() to take a flags parameter instead of just a boolean for
direction, and introduce AMAP_EXTEND_FORWARDS and AMAP_EXTEND_NOWAIT flags
(AMAP_EXTEND_BACKWARDS is still defined as 0x0, to keep the code easier to
read).
Add a flag parameter to uvm_mapent_alloc().
This solves a problem where a pool_get(PR_NOWAIT) could trigger a pool_get(PR_WAITOK)
in uvm_mapent_alloc().
Thanks to Chuck Silvers, enami tsugutomo, Andrew Brown and Jason R Thorpe
for feedback.
2002-11-30 18:28:04 +00:00
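A rough standalone sketch of the flag layout this commit describes. AMAP_EXTEND_BACKWARDS being 0x0 comes from the log itself; the other values and the function body are illustrative assumptions, not the kernel's definitions.

    #include <stdio.h>

    #define AMAP_EXTEND_BACKWARDS  0x00   /* stays 0x0, per the log */
    #define AMAP_EXTEND_FORWARDS   0x01   /* assumed value */
    #define AMAP_EXTEND_NOWAIT     0x02   /* assumed value: fail rather than sleep */

    static void
    amap_extend_sketch(int flags)
    {
        int forwards = (flags & AMAP_EXTEND_FORWARDS) != 0;
        int canwait  = (flags & AMAP_EXTEND_NOWAIT) == 0;

        printf("extend %s, %s\n", forwards ? "forwards" : "backwards",
            canwait ? "may sleep" : "must not sleep");
    }

    int
    main(void)
    {
        /* a UVM_KMF_NOWAIT-style caller would pass the NOWAIT bit through */
        amap_extend_sketch(AMAP_EXTEND_FORWARDS | AMAP_EXTEND_NOWAIT);
        amap_extend_sketch(AMAP_EXTEND_BACKWARDS);
        return 0;
    }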
atatat 966f9caaed Properly free "newppref", instead of "amap->am_ppref" (oops), and
delay freeing the old am_ppref so that if we bail early due to
malloc() failures, valid ppref data hasn't been freed for no reason.

Based on comments from enami.
2002-11-15 17:30:35 +00:00
atatat 42c2fe641b Implement backwards extension of amaps. There are three cases to deal
with:

Case #1 -- adjust offset: The slot offset in the aref can be
decremented to cover the required size addition.

Case #2 -- move pages and adjust offset: The slot offset is not large
enough, but the amap contains enough inactive space *after* the mapped
pages to make up the difference, so active slots are slid to the "end"
of the amap, and the slot offset is, again, adjusted to cover the
required size addition.  This optimizes for hitting case #1 again on
the next small extension.

Case #3 -- reallocate, move pages, and adjust offset: There is not
enough inactive space in the amap, so the arrays are reallocated, and
the active pages are copied again to the "end" of the amap, and the
slot offset is adjusted to cover the required size.  This also
optimizes for hitting case #1 on the next backwards extension.

This provides the missing piece in the "forward extension of
vm_map_entries" logic, so the merge failure counters have been
removed.

Not many applications will make any use of this at this time (except
for JVMs and perhaps gcc3), but a "top-down" memory allocator will use
it extensively.
2002-11-14 17:58:48 +00:00
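The three-way decision above can be summarized in a few lines. The following standalone sketch uses simplified stand-in fields (slotoff, slotmapped, slotalloc) rather than the real amap/aref members, and compresses the case #2 test; it is meant only to show the shape of the logic.

    #include <stdio.h>

    struct amap_sketch {
        int slotoff;     /* slot offset of the mapped pages within the arrays */
        int slotmapped;  /* slots currently in use */
        int slotalloc;   /* total slots allocated */
    };

    static const char *
    extend_backwards(struct amap_sketch *a, int slotneed)
    {
        if (a->slotoff >= slotneed)
            return "case 1: just decrement the slot offset";
        if (a->slotalloc - a->slotmapped >= slotneed)
            return "case 2: slide pages toward the end, then adjust the offset";
        return "case 3: reallocate the arrays, copy pages to the end, adjust";
    }

    int
    main(void)
    {
        struct amap_sketch a = { .slotoff = 2, .slotmapped = 8, .slotalloc = 16 };

        printf("%s\n", extend_backwards(&a, 4));   /* offset too small -> case 2 */
        printf("%s\n", extend_backwards(&a, 2));   /* fits in the offset -> case 1 */
        printf("%s\n", extend_backwards(&a, 32));  /* too big -> case 3 */
        return 0;
    }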
chs 9672ac098f add a new km flag UVM_KMF_CANFAIL, which causes uvm_km_kmemalloc() to
return failure if swap is full and there are no free physical pages.
have malloc() use this flag if M_CANFAIL is passed to it.
use M_CANFAIL to allow amap_extend() to fail when memory is scarce.
this should prevent most of the remaining hangs in low-memory situations.
2002-09-15 16:54:26 +00:00
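The caller-side pattern this implies is simply that an allocation marked "can fail" may return NULL under memory pressure, and amap_extend() then reports an error instead of sleeping forever. A trivial user-space mock of that pattern follows; kmalloc_canfail and amap_extend_sketch are hypothetical names, not kernel functions.

    #include <errno.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void *
    kmalloc_canfail(size_t sz)
    {
        /* stands in for a kernel allocation made with a "can fail" flag */
        return malloc(sz);
    }

    static int
    amap_extend_sketch(size_t newsz)
    {
        void *newsl = kmalloc_canfail(newsz);

        if (newsl == NULL)
            return ENOMEM;      /* caller gives up on merging/extending */
        free(newsl);
        return 0;
    }

    int
    main(void)
    {
        printf("amap_extend_sketch -> %d\n", amap_extend_sketch(4096));
        return 0;
    }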
chs cfefc92864 rearrange a few lines to appease an assertion. 2002-06-29 18:27:30 +00:00
nathanw a1be32226e In amap_pp_adjref(), avoid incorrectly merging the first two chunks in
a ppref array when the range being adjusted includes the beginning of
the array.
2002-03-28 06:06:29 +00:00
thorpej a180cee23b Pool deals fairly well with physical memory shortage, but it doesn't
deal with shortages of the VM maps where the backing pages are mapped
(usually kmem_map).  Try to deal with this:

* Group all information about the backend allocator for a pool in a
  separate structure.  The pool references this structure, rather than
  the individual fields.
* Change the pool_init() API accordingly, and adjust all callers.
* Link all pools using the same backend allocator on a list.
* The backend allocator is responsible for waiting for physical memory
  to become available, but will still fail if it cannot allocate KVA
  space for the pages.  If this happens, carefully drain all pools using
  the same backend allocator, so that some KVA space can be freed.
* Change pool_reclaim() to indicate if it actually succeeded in freeing
  some pages, and use that information to make draining easier and more
  efficient.
* Get rid of PR_URGENT.  There was only one use of it, and it could be
  dealt with by the caller.

From art@openbsd.org.
2002-03-08 20:48:27 +00:00
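The restructuring described above boils down to pools referencing a shared backend-allocator descriptor instead of carrying the backend details themselves, so pools that share an allocator can be found and drained together when KVA runs short. The sketch below shows that shape with illustrative names; it is not the exact kernel pool_init() API.

    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct backend_allocator {
        void *(*ba_alloc)(size_t);
        void  (*ba_free)(void *);
        /* in the real code, all pools sharing this allocator are linked
         * together so they can be drained when KVA runs out */
    };

    struct pool_sketch {
        const char *pr_name;
        size_t pr_size;
        struct backend_allocator *pr_alloc;   /* shared descriptor, not per-pool fields */
    };

    static struct backend_allocator default_alloc = { malloc, free };

    static void
    pool_init_sketch(struct pool_sketch *pp, const char *name, size_t size,
        struct backend_allocator *ba)
    {
        pp->pr_name = name;
        pp->pr_size = size;
        pp->pr_alloc = (ba != NULL) ? ba : &default_alloc;
    }

    int
    main(void)
    {
        struct pool_sketch p;

        pool_init_sketch(&p, "demo", 128, NULL);
        printf("pool %s (obj size %zu) uses a shared backend allocator\n",
            p.pr_name, p.pr_size);
        return 0;
    }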
chs 811c8fad2b in amap_pp_adjref(), avoid unnecessary fragmentation of the am_ppref array
by merging the first changed chunk with the last unchanged chunk if possible.
2002-02-25 00:39:16 +00:00
enami 76858f7620 When initially allocating or extending arrays in struct uvm_amap,
adjust allocation size using malloc_roundup().  This eliminates many
unnecessary malloc/memcpy calls.
2001-12-05 01:33:09 +00:00
enami fbfa7f8e61 No need to zero-clear after amap->am_bckptr[amap->am_nslot], since we're
clearing the corresponding elements in the amap->am_anon[] array.
2001-12-05 00:34:05 +00:00
chuck 00168f4ce0 fix bug in amap_wiperange() detected by enami tsugutomo.
loop control was wrong in one case.
2001-12-01 22:11:13 +00:00
lukem b616d1ca1d add RCSIDs, and in some cases, slightly cleanup #include order 2001-11-10 07:36:59 +00:00
simonb 82649768b7 Change some unsigned int variables and parameters to plain ints so
that all usages of those agree on unsigned vs. signed.
2001-11-06 06:31:06 +00:00
chs 20a658f0ab work around swap-space/extent performance problem which causes
long pauses when processes with lots of swapped-out pages exit.
2001-09-19 03:41:46 +00:00
chs 64c6d1d2dc a whole bunch of changes to improve performance and robustness under load:
- remove special treatment of pager_map mappings in pmaps.  this is
   required now, since I've removed the globals that expose the address range.
   pager_map now uses pmap_kenter_pa() instead of pmap_enter(), so there's
   no longer any need to special-case it.
 - eliminate struct uvm_vnode by moving its fields into struct vnode.
 - rewrite the pageout path.  the pager is now responsible for handling the
   high-level requests instead of only getting control after a bunch of work
   has already been done on its behalf.  this will allow us to UBCify LFS,
   which needs tighter control over its pages than other filesystems do.
   writing a page to disk no longer requires making it read-only, which
   allows us to write wired pages without causing all kinds of havoc.
 - use a new PG_PAGEOUT flag to indicate that a page should be freed
   on behalf of the pagedaemon when it's unlocked.  this flag is very similar
   to PG_RELEASED, but unlike PG_RELEASED, PG_PAGEOUT can be cleared if the
   pageout fails due to e.g. an indirect-block buffer being locked.
   this allows us to remove the "version" field from struct vm_page,
   and together with shrinking "loan_count" from 32 bits to 16,
   struct vm_page is now 4 bytes smaller.
 - no longer use PG_RELEASED for swap-backed pages.  if the page is busy
   because it's being paged out, we can't release the swap slot to be
   reallocated until that write is complete, but unlike with vnodes we
   don't keep a count of in-progress writes so there's no good way to
   know when the write is done.  instead, when we need to free a busy
   swap-backed page, just sleep until we can get it busy ourselves.
 - implement a fast-path for extending writes which allows us to avoid
   zeroing new pages.  this substantially reduces cpu usage.
 - encapsulate the data used by the genfs code in a struct genfs_node,
   which must be the first element of the filesystem-specific vnode data
   for filesystems which use genfs_{get,put}pages().
 - eliminate many of the UVM pagerops, since they aren't needed anymore
   now that the pager "put" operation is a higher-level operation.
 - enhance the genfs code to allow NFS to use the genfs_{get,put}pages
   instead of a modified copy.
 - clean up struct vnode by removing all the fields that used to be used by
   the vfs_cluster.c code (which we don't use anymore with UBC).
 - remove kmem_object and mb_object since they were useless.
   instead of allocating pages to these objects, we now just allocate
   pages with no object.  such pages are mapped in the kernel until they
   are freed, so we can use the mapping to find the page to free it.
   this allows us to remove splvm() protection in several places.

The sum of all these changes improves write throughput on my
decstation 5000/200 to within 1% of the rate of NetBSD 1.5
and reduces the elapsed time for "make release" of a NetBSD 1.5
source tree on my 128MB pc to 10% less than a 1.5 kernel took.
2001-09-15 20:36:31 +00:00
wiz a9356936b4 seperate -> separate 2001-07-22 13:33:58 +00:00
chs 821ec03ed9 replace vm_map{,_entry}_t with struct vm_map{,_entry} *. 2001-06-02 18:09:08 +00:00
chs 3845302904 remove trailing whitespace. 2001-05-25 04:06:11 +00:00
chs 19b7b64642 clean up DIAGNOSTIC checks, use KASSERT(). 2001-02-18 21:19:08 +00:00
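For the cleanup above, the before/after shapes look roughly like this; the KASSERT definition here is a minimal stand-in so the example compiles on its own, whereas the kernel's comes from <sys/systm.h>, and the function and its check are illustrative.

    #include <stdio.h>
    #include <stdlib.h>

    #ifdef DIAGNOSTIC
    #define KASSERT(e)  ((e) ? (void)0 : abort())
    #else
    #define KASSERT(e)  /* compiled away entirely */
    #endif

    static void
    amap_op_sketch(void *amap)
    {
    #ifdef DIAGNOSTIC
        /* old style: hand-rolled check under #ifdef DIAGNOSTIC */
        if (amap == NULL) {
            printf("amap_op_sketch: NULL amap\n");
            abort();
        }
    #endif
        /* new style: one line, disappears without DIAGNOSTIC */
        KASSERT(amap != NULL);
    }

    int
    main(void)
    {
        int dummy;

        amap_op_sketch(&dummy);
        printf("ok\n");
        return 0;
    }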
thorpej ad7259d7c6 Change uvm_analloc() to return a locked anon, update all callers,
and fix an anon locking protocol error in uvm_loanzero().
2001-01-23 02:27:39 +00:00
thorpej 13759f5310 Sprinkle some assertions:
amap_free(): Assert that the amap is locked.
amap_share_protect(): Assert that the amap is locked.
amap_wipeout(): Assert that the amap is locked.
uvm_anfree(): Assert that the anon has a reference count of 0 and is
              not locked.
uvm_anon_lockloanpg(): Assert that the anon is locked.
anon_pagein(): Assert that the anon is locked.
uvmfault_anonget(): Assert that the anon is locked.
uvm_pagealloc_strat(): Assert that the uobj or the anon is locked.

And fix the problems these have uncovered:
amap_cow_now(): Lock the new anon after allocating it, and unref and
                unlock it (rather than lock!) before freeing it in case
                of an error condition.  This should fix a problem reported
		by Dan Carosone using cdrecord on an i386 MP kernel.
uvm_fault(): Case1B -- Lock the new anon after allocating it, and unlock
             it later when we unlock the old anon.
	     Case2 -- Lock the new anon after allocating it, and unlock
	     it later by passing it to uvmfault_unlockall() (we set anon
	     to NULL if we're not doing a promote fault).
2001-01-23 01:56:16 +00:00
chs 2ed28d2c7a lots of cleanup:
use queue.h macros and KASSERT().
address amap offsets in pages instead of bytes.
make amap_ref() and amap_unref() take an amap, offset and length
  instead of a vm_map_entry_t.
improve whitespace and comments.
2000-11-25 06:27:59 +00:00
thorpej d3a2c5d0f9 MALLOC()/FREE() are not to be used for variable size allocations. 2000-08-03 00:47:02 +00:00