Commit Graph

101 Commits

yamt
6fbf5bf6f1 wrap swap related code by #ifdef VMSWAP. always #define VMSWAP for now. 2005-09-13 22:00:05 +00:00
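
The mechanics are plain conditional compilation; a minimal sketch of the
shape (function name hypothetical, not the actual diff):

    /* a kernel config option later; unconditionally defined for now */
    #define VMSWAP

    #ifdef VMSWAP
    static void
    uvmfault_swap_bits(struct vm_anon *anon)
    {
        /* swap-related handling lives here */
    }
    #else
    #define uvmfault_swap_bits(anon)    /* nothing */
    #endif /* VMSWAP */
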
yamt
38ca5312d2 revert "defflag VMSWAP" changes for now.
there seem to be far more people who don't want to edit
their kernel config files than i thought.
2005-07-31 04:04:30 +00:00
yamt
1d0891101c defflag VMSWAP. 2005-07-30 06:33:33 +00:00
yamt
b7bfe82866 update file timestamps for nfsd loaned-read and mmap.
PR/25279.  discussed on tech-kern@.
2005-07-23 12:18:41 +00:00
yamt
62cab9eab7 uvm_fault: check a correct object in the case of layered filesystems.
fix PR/30811 from Jukka Salmi.
2005-07-22 14:57:39 +00:00
yamt
8af42d8d3c ensure that vnodes with dirty pages are always on syncer's queue.
- genfs_putpages: wait for i/o completion of PG_RELEASED/PG_PAGEOUT pages by
  setting "wasclean" false when encountering them.
  suggested by Stephan Uphoff in PR/24596 (1).

- genfs_putpages: write protect pages when cleaning out, if
  we're going to take the vnode off the syncer's queue.
  uvm_fault: don't write-map pages unless their vnode is already on
  the syncer's queue.

  fix PR/24596 (3) but in a different way from the suggested fix.
  (to keep our current behaviour, ie. not to require explicit msync.
  discussed on tech-kern@.)

- genfs_putpages: don't mistakenly take a vnode off the queue
  by introducing a generation number in genfs_node.
  genfs_getpages: increment the generation number.
  suggested by Stephan Uphoff in PR/24596 (2).

- add some assertions.
2005-07-17 12:27:47 +00:00
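
The generation number guards against a sleep/wakeup race; a rough sketch
of the idea (the g_dirtygen field name is an assumption, not the verbatim
diff):

    /* genfs_getpages(): pages may be dirtied from now on */
    gp->g_dirtygen++;

    /* genfs_putpages(): sample the generation before flushing */
    gen = gp->g_dirtygen;
    /* ... clean pages out; this can sleep ... */
    if (gen == gp->g_dirtygen) {
        /* nothing was dirtied behind our back, so it is safe
           to take the vnode off the syncer's queue */
    }
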
thorpej
e569facced Use ANSI function decls. 2005-06-27 02:19:48 +00:00
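
For reference, the K&R-to-ANSI conversion this refers to, using a real
uvm_fault.c function as the example:

    /* before: K&R style */
    int
    uvmfault_anonget(ufi, amap, anon)
        struct uvm_faultinfo *ufi;
        struct vm_amap *amap;
        struct vm_anon *anon;
    {

    /* after: ANSI style */
    int
    uvmfault_anonget(struct uvm_faultinfo *ufi, struct vm_amap *amap,
        struct vm_anon *anon)
    {
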
yamt
662ada8f7a allocate anons on-demand, rather than reserving static amount of
them on boot/swapon.
2005-05-11 13:02:25 +00:00
yamt
7f37519f2c uvmfault_anonget: check uvm_reclaimable() where appropriate. 2005-04-27 15:19:17 +00:00
yamt
01c07ef7bd fix unreasonably frequent "killed: out of swap" on systems which have
little or no swap.
- even on a severe swap shortage, if we have some amount of file-backed pages,
  don't bother to kill processes.
- if all pages in the queue are likely to be reactivated, just give up
  page type balancing rather than spinning unnecessarily.
2005-04-12 13:11:45 +00:00
chs
64d6a08278 use TRUE and FALSE instead of 1 and 0 for boolean_t. 2005-02-28 15:33:04 +00:00
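
That is, spelling boolean intent explicitly:

    boolean_t shadowed;

    shadowed = FALSE;       /* was: shadowed = 0; */
    if (amap != NULL)
        shadowed = TRUE;    /* was: shadowed = 1; */
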
yamt
9fe116ce3a uvm_fault: fix integer overflow so that MADV_SEQUENTIAL
can work on large files.
2005-02-07 11:57:38 +00:00
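
The overflow class here is doing object-offset arithmetic in a 32-bit type
before it reaches the 64-bit voff_t; a hypothetical illustration, not the
verbatim fix:

    /* bug: the shift happens in 32-bit int, wrapping for large files */
    uoff = pageno << PAGE_SHIFT;

    /* fix: force 64-bit arithmetic first */
    uoff = (voff_t)pageno << PAGE_SHIFT;
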
yamt
c4d95d51f4 uvm_fault: pass NULL pap to pmap_extract where we don't need paddr. 2005-01-01 09:14:49 +00:00
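
pmap_extract() reports whether a mapping exists and optionally fills in
the physical address, so callers that only want the yes/no answer can skip
the dummy variable:

    paddr_t pa;

    /* before: pa is computed and then ignored */
    if (pmap_extract(ufi->orig_map->pmap, va, &pa)) ...

    /* after */
    if (pmap_extract(ufi->orig_map->pmap, va, NULL)) ...
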
yamt
8368dac6a2 fix an amap_wirerange deadlock problem by re-introducing
PG_RELEASED for anon pages.  PR/23171 from Christian Limpach.
for details, see discussion filed in the PR database.

uvm_anon_release: a new function to free anon-owned PG_RELEASED page.
uvm_anfree: we can't wait for the page here because the caller might hold
	amap lock.  instead, just mark the page as PG_RELEASED.
	whoever unbusies the page should check PG_RELEASED.
uvm_aio_aiodone: uvm_anon_release() instead of uvm_page_unbusy()
	if appropriate.
uvmfault_anonget: check PG_RELEASED.
2004-05-05 11:54:32 +00:00
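
The unbusy protocol described above, in the usual UVM idiom (a sketch):

    if (pg->flags & PG_RELEASED) {
        uvm_anon_release(anon);     /* frees the page and the anon */
    } else {
        if (pg->flags & PG_WANTED)
            wakeup(pg);
        pg->flags &= ~(PG_WANTED|PG_BUSY);
        UVM_PAGE_OWN(pg, NULL);
    }
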
junyoung
325f5482a8 Nuke __P(). 2004-03-24 07:55:01 +00:00
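
__P() was the wrapper that let prototypes degrade to K&R declarations on
pre-ANSI compilers; nuking it looks like:

    /* before */
    int uvmfault_anonget __P((struct uvm_faultinfo *, struct vm_amap *,
            struct vm_anon *));

    /* after */
    int uvmfault_anonget(struct uvm_faultinfo *, struct vm_amap *,
            struct vm_anon *);
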
yamt
9359a18b6a uvm_fault: check the loan_count of neighboring object pages properly.
PR/24595 from Stephan Uphoff.
2004-03-02 11:43:44 +00:00
dbj
c32c758d2d s/fauling/faulting/ 2004-02-10 00:40:06 +00:00
pk
3bef941831 Make sure to call uvm_swap_free() and uvm_swap_markbad() with valid (i.e.
positive) slot numbers.
2003-08-11 16:44:35 +00:00
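
In UVM, slot 0 means "no swap slot assigned" and SWSLOT_BAD (-1) marks a
bad slot, so callers must screen those out; a sketch:

    if (anon->an_swslot > 0)
        uvm_swap_free(anon->an_swslot, 1);
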
pk
5869d91cb9 Introduce uvm_swapisfull(), which computes the available swap space by
taking into account swap devices that are in the process of being removed.
2003-08-11 16:33:30 +00:00
yamt
719bd826c5 use uvm_loanbreak in uvm_fault. 2003-05-03 17:57:50 +00:00
pk
c7cbbfeead uvm_fault: case 1B: lock page queue before calling uvm_pageactivate(). 2003-02-09 22:32:21 +00:00
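
That is, the standard page-queue locking idiom:

    uvm_lock_pageq();
    uvm_pageactivate(pg);
    uvm_unlock_pageq();
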
thorpej
b78f59b443 Merge the nathanw_sa branch. 2003-01-18 08:51:40 +00:00
yamt
3a7bfaf54e change "uoff" to voff_t from vaddr_t as it's an offset within a uvm object.
fix PR/18855.
2002-10-30 05:24:33 +00:00
thorpej
ae8d1b60df When breaking a loan due to a page fault, check to see if the other
kind of reference-holder (anon or object) is referencing the page.  If
not, then the page must be removed from the pageq's.

Reviewed by Chuck Silvers.
2002-09-02 21:09:50 +00:00
chs
d510857ed4 be sure that the page we allocate to break a loan is put on a paging queue.
fixes PR 18037.
2002-08-29 05:03:30 +00:00
chs
76cacb8710 when processing PG_RDONLY, mask off VM_PROT_WRITE instead of hard-wiring
VM_PROT_READ (since we might have VM_PROT_EXEC too).  this fixes problems
running binaries out of NFS on macppc.  yet another fix courtesy of enami.
2002-03-25 01:56:48 +00:00
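
The fix in essence (enter_prot and uobjpage are uvm_fault locals; sketch):

    if (uobjpage->flags & PG_RDONLY) {
        /* before: enter_prot = VM_PROT_READ;  -- drops VM_PROT_EXEC */
        enter_prot &= ~VM_PROT_WRITE;
    }
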
chs
87185156fd a vm_prot_t is a bit-mask, fix an assertion which was treating one
more like an enumerated type.
2002-03-09 04:29:03 +00:00
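
Illustrative before/after of such an assertion:

    /* wrong: treats the bit-mask like an enum */
    KASSERT(access_type == VM_PROT_WRITE);

    /* right: tests the bit, so READ|WRITE passes too */
    KASSERT((access_type & VM_PROT_WRITE) != 0);
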
chs
e9a82c88ce in uvm_fault_unwire_locked(), if we find that a pmap entry is missing,
just skip that page.  this situation can arise legitimately when a file
with a wired mapping is truncated so that a wired page is no longer
part of the file.
2002-01-02 01:10:36 +00:00
chs
a7ec5b4144 redo part of the last commit. 2002-01-01 22:18:39 +00:00
chs
43973be0c5 introduce a new UVM fault type, VM_FAULT_WIREMAX. this is different
from VM_FAULT_WIRE in that when the pages being wired are faulted in,
the simulated fault is at the maximum protection allowed for the mapping
instead of the current protection.  use this in uvm_map_pageable{,_all}()
to fix the problem where writing via ptrace() to shared libraries that
are also mapped with wired mappings in another process causes a
diagnostic panic when the wired mapping is removed.

this is a really obscure problem so it deserves some more explanation.
ptrace() writing to another process ends up down in uvm_map_extract(),
which for MAP_PRIVATE mappings (such as shared libraries) will cause
the amap to be copied or created.  then the amap is made shared
(ie. the AMAP_SHARED flag is set) between the kernel and the ptrace()d
process so that the kernel can modify pages in the amap and have the
ptrace()d process see the changes.  then when the page being modified
is actually faulted on, the object page (from the shared library vnode)
is copied to a new anon page and inserted into the shared amap.
to make all the processes sharing the amap actually see the new anon
page instead of the vnode page that was there before, we need to
invalidate all the pmap-level mappings of the vnode page in the pmaps
of the processes sharing the amap, but we don't have a good way of
doing this.  the amap doesn't keep track of the vm_maps which map it.
so all we can do at this point is to remove all the mappings of the
page with pmap_page_protect(), but this has the unfortunate side-effect
of removing wired mappings as well.  removing wired mappings with
pmap_page_protect() is a legitimate operation, it can happen when a file
with a wired mapping is truncated.  so the pmap has no way of knowing
whether a request to remove a wired mapping is normal or when it's due to
this weird situation.  so the pmap has to remove the weird mapping.
the process being ptrace()d goes away and life continues.  then,
much later when we go to unwire or remove the wired vm_map mapping,
we discover that the pmap mapping has been removed when it should
still be there, and we panic.

so where did we go wrong?  the problem is that we don't have any way
to update just the pmap mappings that need to be updated in this
scenario.  we could invent a mechanism to do this, but that is much
more complicated than this change and it doesn't seem like the right
way to go in the long run either.

the real underlying problem here is that wired pmap mappings just
aren't a good concept.  one of the original properties of the pmap
design was supposed to be that all the information in the pmap could
be thrown away at any time and the VM system could regenerate it all
through fault processing, but wired pmap mappings don't allow that.
a better design for UVM would not require wired pmap mappings,
and Chuck C. and I are talking about this, but it won't be done
anytime soon, so this change will do for now.

this change has the effect of causing MAP_PRIVATE mappings to be
copied to anonymous memory when they are mlock()d, so that uvm_fault()
doesn't need to copy these pages later when called from ptrace(), thus
avoiding the call to pmap_page_protect() and the panic that results
from this when the mlock()d region is unlocked or freed.  note that
this change doesn't help the case where the wired mapping is MAP_SHARED.

discussed at great length with Chuck Cranor.
fixes PRs 10363, 12554, 12604, 13041, 13487, 14580 and 14853.
2001-12-31 22:34:39 +00:00
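
Schematically, the effect on the wiring path (a sketch, not the verbatim
diff):

    /* before: wire at the entry's current protection */
    error = uvm_fault(map, va, VM_FAULT_WIRE, entry->protection);

    /* after: wire at the maximum protection, so a later ptrace()
       write can't be the first to fault in a writable copy */
    error = uvm_fault(map, va, VM_FAULT_WIREMAX, entry->max_protection);
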
lukem
b616d1ca1d add RCSIDs, and in some cases, slightly cleanup #include order 2001-11-10 07:36:59 +00:00
chs
3aea6d69ad skip the MADV_SEQUENTIAL processing if we refault. fixes PR 14060. 2001-10-03 05:17:58 +00:00
chs
64c6d1d2dc a whole bunch of changes to improve performance and robustness under load:
- remove special treatment of pager_map mappings in pmaps.  this is
   required now, since I've removed the globals that expose the address range.
   pager_map now uses pmap_kenter_pa() instead of pmap_enter(), so there's
   no longer any need to special-case it.
 - eliminate struct uvm_vnode by moving its fields into struct vnode.
 - rewrite the pageout path.  the pager is now responsible for handling the
   high-level requests instead of only getting control after a bunch of work
   has already been done on its behalf.  this will allow us to UBCify LFS,
   which needs tighter control over its pages than other filesystems do.
   writing a page to disk no longer requires making it read-only, which
   allows us to write wired pages without causing all kinds of havoc.
 - use a new PG_PAGEOUT flag to indicate that a page should be freed
   on behalf of the pagedaemon when it's unlocked.  this flag is very similar
   to PG_RELEASED, but unlike PG_RELEASED, PG_PAGEOUT can be cleared if the
   pageout fails due to eg. an indirect-block buffer being locked.
   this allows us to remove the "version" field from struct vm_page,
   and together with shrinking "loan_count" from 32 bits to 16,
   struct vm_page is now 4 bytes smaller.
 - no longer use PG_RELEASED for swap-backed pages.  if the page is busy
   because it's being paged out, we can't release the swap slot to be
   reallocated until that write is complete, but unlike with vnodes we
   don't keep a count of in-progress writes so there's no good way to
   know when the write is done.  instead, when we need to free a busy
   swap-backed page, just sleep until we can get it busy ourselves.
 - implement a fast-path for extending writes which allows us to avoid
   zeroing new pages.  this substantially reduces cpu usage.
 - encapsulate the data used by the genfs code in a struct genfs_node,
   which must be the first element of the filesystem-specific vnode data
   for filesystems which use genfs_{get,put}pages().
 - eliminate many of the UVM pagerops, since they aren't needed anymore
   now that the pager "put" operation is a higher-level operation.
 - enhance the genfs code to allow NFS to use the genfs_{get,put}pages
   instead of a modified copy.
 - clean up struct vnode by removing all the fields that used to be used by
   the vfs_cluster.c code (which we don't use anymore with UBC).
 - remove kmem_object and mb_object since they were useless.
   instead of allocating pages to these objects, we now just allocate
   pages with no object.  such pages are mapped in the kernel until they
   are freed, so we can use the mapping to find the page to free it.
   this allows us to remove splvm() protection in several places.

The sum of all these changes improves write throughput on my
decstation 5000/200 to within 1% of the rate of NetBSD 1.5
and reduces the elapsed time for "make release" of a NetBSD 1.5
source tree on my 128MB pc to 10% less than a 1.5 kernel took.
2001-09-15 20:36:31 +00:00
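
The genfs_node embedding relies on the C first-member layout rule so genfs
can recover its data from any participating vnode; roughly (NFS as the
example, accessor shown as an assumption):

    struct nfsnode {
        struct genfs_node n_gnode;  /* must be the first member */
        /* ... NFS-specific fields ... */
    };

    #define VTOG(vp) ((struct genfs_node *)(vp)->v_data)
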
chris
0e7661f023 Update pmap_update to now take the updated pmap as an argument.
This will allow improvements to the pmaps so that they can more easily defer expensive operations (e.g. TLB/cache flushes) until the last possible moment.

Currently this is a no-op on most platforms, so they should see no difference.

Reviewed by Jason.
2001-09-10 21:19:08 +00:00
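
The interface change in a nutshell:

    /* before */
    void pmap_update(void);

    /* after: identifies which address space was modified */
    void pmap_update(struct pmap *pmap);
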
thorpej
a279b0973b Reduce some complexity in the fault path -- Rather than maintaining
an spl-protected "interrupt safe map" list, simply require that callers
of uvm_fault() never call us in interrupt context (MD code must make
the assertion), and check for interrupt-safe maps in uvmfault_lookup()
before we lock the map.
2001-06-26 17:55:14 +00:00
thorpej
78ae4127bb Note that uvm_fault() must NEVER EVER EVER be called in interrupt
context.
2001-06-26 17:27:31 +00:00
chs
7e00a527ea work around an overflow problem in uvm_fault_wire().
from Eduardo Horvath and Simon Burge.
2001-06-14 05:12:56 +00:00
chs
821ec03ed9 replace vm_map{,_entry}_t with struct vm_map{,_entry} *. 2001-06-02 18:09:08 +00:00
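
That is, dropping the typedef aliases in favor of explicit struct pointers:

    /* before */
    int uvm_fault(vm_map_t, vaddr_t, vm_fault_t, vm_prot_t);

    /* after */
    int uvm_fault(struct vm_map *, vaddr_t, vm_fault_t, vm_prot_t);
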
chs
3845302904 remove trailing whitespace. 2001-05-25 04:06:11 +00:00
thorpej
773ed79e5b Add a comment describing a problem. 2001-04-25 14:59:44 +00:00
thorpej
1c3a62e066 Sprinkle pmap_update() calls after calls to:
- pmap_enter()
- pmap_remove()
- pmap_protect()
- pmap_kenter_pa()
- pmap_kremove()
as described in pmap(9).

These calls are relatively conservative.  It may be possible to
optimize these a little more.
2001-04-24 04:30:50 +00:00
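
Per pmap(9), pmap_update() is where deferred MMU work must become visible,
so the sprinkling follows this pattern (pmap_update() did not yet take a
pmap argument at this point; that change appears further up the log):

    error = pmap_enter(ufi->orig_map->pmap, ufi->orig_rvaddr,
        VM_PAGE_TO_PHYS(pg), enter_prot,
        access_type | PMAP_CANFAIL | (wired ? PMAP_WIRED : 0));
    pmap_update();
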
chs
088989a557 undo the part of a previous commit which turned a check for faulting
on an "intrsafe" map into a KASSERT.  this situation can be caused by
an application accessing /dev/kmem.
2001-04-01 16:45:53 +00:00
chs
edb041f0d1 return the real error from pgo_fault(). 2001-03-17 04:01:24 +00:00
chs
ac3bc537bd eliminate the KERN_* error codes in favor of the traditional E* codes.
the mapping is:

KERN_SUCCESS			0
KERN_INVALID_ADDRESS		EFAULT
KERN_PROTECTION_FAILURE		EACCES
KERN_NO_SPACE			ENOMEM
KERN_INVALID_ARGUMENT		EINVAL
KERN_FAILURE			various, mostly turn into KASSERTs
KERN_RESOURCE_SHORTAGE		ENOMEM
KERN_NOT_RECEIVER		<unused>
KERN_NO_ACCESS			<unused>
KERN_PAGES_LOCKED		<unused>
2001-03-15 06:10:32 +00:00
chs
dd82ad8e2c eliminate the VM_PAGER_* error codes in favor of the traditional E* codes.
the mapping is:

VM_PAGER_OK		        0
VM_PAGER_BAD		        <unused>
VM_PAGER_FAIL		        <unused>
VM_PAGER_PEND		        0 (see below)
VM_PAGER_ERROR		        EIO
VM_PAGER_AGAIN		        EAGAIN
VM_PAGER_UNLOCK		        EBUSY
VM_PAGER_REFAULT	        ERESTART

for async i/o requests, it used to be possible for the request to
be converted to sync, and the pager would return VM_PAGER_OK or VM_PAGER_PEND
to indicate whether the caller should perform post-i/o cleanup.
this is no longer allowed; pagers must now return 0 to indicate that
the async i/o was successfully started, and the caller never needs to
worry about doing the post-i/o cleanup.
2001-03-10 22:46:45 +00:00
chs
19b7b64642 clean up DIAGNOSTIC checks, use KASSERT(). 2001-02-18 21:19:08 +00:00
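
I.e., collapsing hand-rolled DIAGNOSTIC checks into assertions (condition
illustrative):

    /* before */
    #ifdef DIAGNOSTIC
        if (amap == NULL)
            panic("uvm_fault: no amap");
    #endif

    /* after */
        KASSERT(amap != NULL);
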
thorpej
1779f8f71b Page scanner improvements, behavior is actually a bit more like
Mach VM's now.  Specific changes:
- Pages now need not have all of their mappings removed before being
  put on the inactive list.  They only need to have the "referenced"
  attribute cleared.  This makes putting pages onto the inactive list
  much more efficient.  In order to eliminate redundant clearings of
  "refrenced", callers of uvm_pagedeactivate() must now do this
  themselves.
- When checking the "modified" attribute for a page (for clearing
  PG_CLEAN), make sure to only do it if PG_CLEAN is currently set on
  the page (saves a potentially expensive pmap operation).
- When scanning the inactive list, if a page is referenced, reactivate
  it (this part was actually added in uvm_pdaemon.c,v 1.27).  This
  now works properly now that pages on the inactive list are allowed to
  have mappings.
- When scanning the inactive list and considering a page for freeing,
  remove all mappings, and then check the "modified" attribute if the
  page is marked PG_CLEAN.
- When scanning the active list, if the page was referenced since its
  last sweep by the scanner, don't deactivate it.  (This part was
  actually added in uvm_pdaemon.c,v 1.28.)

These changes greatly improve interactive performance during
moderate to high memory and I/O load.
2001-01-28 23:30:42 +00:00
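
The new deactivation protocol, in code (callers now clear the reference
bit themselves):

    pmap_clear_reference(pg);
    uvm_pagedeactivate(pg);
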
thorpej
ad7259d7c6 Change uvm_analloc() to return a locked anon, update all callers,
and fix an anon locking protocol error in uvm_loanzero().
2001-01-23 02:27:39 +00:00
thorpej
13759f5310 Sprinkle some assertions:
amap_free(): Assert that the amap is locked.
amap_share_protect(): Assert that the amap is locked.
amap_wipeout(): Assert that the amap is locked.
uvm_anfree(): Assert that the anon has a reference count of 0 and is
              not locked.
uvm_anon_lockloanpg(): Assert that the anon is locked.
anon_pagein(): Assert that the anon is locked.
uvmfault_anonget(): Assert that the anon is locked.
uvm_pagealloc_strat(): Assert that the uobj or the anon is locked.

And fix the problems these have uncovered:
amap_cow_now(): Lock the new anon after allocating it, and unref and
                unlock it (rather than lock!) before freeing it in case
                of an error condition.  This should fix a problem reported
		by Dan Carosone using cdrecord on an i386 MP kernel.
uvm_fault(): Case1B -- Lock the new anon after allocating it, and unlock
             it later when we unlock the old anon.
	     Case2 -- Lock the new anon after allocating it, and unlock
	     it later by passing it to uvmfault_unlockall() (we set anon
	     to NULL if we're not doing a promote fault).
2001-01-23 01:56:16 +00:00
chs
aeda8d3b77 Initial integration of the Unified Buffer Cache project. 2000-11-27 08:39:39 +00:00