Commit Graph

59 Commits

Author SHA1 Message Date
chs
edb041f0d1 return the real error from pgo_fault(). 2001-03-17 04:01:24 +00:00
chs
ac3bc537bd eliminate the KERN_* error codes in favor of the traditional E* codes.
the mapping is:

KERN_SUCCESS			0
KERN_INVALID_ADDRESS		EFAULT
KERN_PROTECTION_FAILURE		EACCES
KERN_NO_SPACE			ENOMEM
KERN_INVALID_ARGUMENT		EINVAL
KERN_FAILURE			various, mostly turn into KASSERTs
KERN_RESOURCE_SHORTAGE		ENOMEM
KERN_NOT_RECEIVER		<unused>
KERN_NO_ACCESS			<unused>
KERN_PAGES_LOCKED		<unused>
2001-03-15 06:10:32 +00:00
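
A hypothetical translation helper, shown only to make the table above concrete; the commit itself replaces the KERN_* values at each call site rather than converting at run time, and the include path for the old constants is an assumption:

    #include <sys/errno.h>
    #include <vm/vm_param.h>    /* old KERN_* constants (pre-change headers) */

    static int
    kern_to_errno(int kern_code)
    {
        switch (kern_code) {
        case KERN_SUCCESS:              return 0;
        case KERN_INVALID_ADDRESS:      return EFAULT;
        case KERN_PROTECTION_FAILURE:   return EACCES;
        case KERN_NO_SPACE:             return ENOMEM;
        case KERN_INVALID_ARGUMENT:     return EINVAL;
        case KERN_RESOURCE_SHORTAGE:    return ENOMEM;
        default:                        return EINVAL;  /* KERN_FAILURE et al.: the real change mostly turns these into KASSERTs */
        }
    }
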
chs
dd82ad8e2c eliminate the VM_PAGER_* error codes in favor of the traditional E* codes.
the mapping is:

VM_PAGER_OK		        0
VM_PAGER_BAD		        <unused>
VM_PAGER_FAIL		        <unused>
VM_PAGER_PEND		        0 (see below)
VM_PAGER_ERROR		        EIO
VM_PAGER_AGAIN		        EAGAIN
VM_PAGER_UNLOCK		        EBUSY
VM_PAGER_REFAULT	        ERESTART

for async i/o requests, it used to be possible for the request to
be converted to sync, and the pager would return VM_PAGER_OK or VM_PAGER_PEND
to indicate whether the caller should perform post-i/o cleanup.
this is no longer allowed; pagers must now return 0 to indicate that
the async i/o was successfully started, and the caller never needs to
worry about doing the post-i/o cleanup.
2001-03-10 22:46:45 +00:00
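
A rough sketch of the convention the last paragraph describes; every name here (example_pgo_put, example_start_io, example_wait_io) is hypothetical, and only the return-value discipline is the point:

    #include <sys/errno.h>

    /* Stub i/o primitives so the sketch stands alone; both hypothetical. */
    static int example_start_io(int npages) { return npages > 0 ? 0 : EIO; }
    static int example_wait_io(void)        { return 0; }

    /*
     * Hypothetical pager "put" routine illustrating the new convention:
     * errors are plain E* values, and for an async request a return of 0
     * means only "the i/o was started" -- the caller no longer performs
     * post-i/o cleanup for async requests.
     */
    static int
    example_pgo_put(int npages, int syncio)
    {
        int error = example_start_io(npages);

        if (error)
            return error;           /* e.g. EIO (was VM_PAGER_ERROR) */
        if (!syncio)
            return 0;               /* async: successfully started */
        return example_wait_io();   /* sync: wait, then 0 or EIO */
    }
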
chs
19b7b64642 clean up DIAGNOSTIC checks, use KASSERT(). 2001-02-18 21:19:08 +00:00
thorpej
1779f8f71b Page scanner improvements; behavior is actually a bit more like
Mach VM's now.  Specific changes:
- Pages now need not have all of their mappings removed before being
  put on the inactive list.  They only need to have the "referenced"
  attribute cleared.  This makes putting pages onto the inactive list
  much more efficient.  In order to eliminate redundant clearings of
  "refrenced", callers of uvm_pagedeactivate() must now do this
  themselves.
- When checking the "modified" attribute for a page (for clearing
  PG_CLEAN), make sure to only do it if PG_CLEAN is currently set on
  the page (saves a potentially expensive pmap operation).
- When scanning the inactive list, if a page is referenced, reactivate
  it (this part was actually added in uvm_pdaemon.c,v 1.27).  This
  works properly now that pages on the inactive list are allowed to
  have mappings.
- When scanning the inactive list and considering a page for freeing,
  remove all mappings, and then check the "modified" attribute if the
  page is marked PG_CLEAN.
- When scanning the active list, if the page was referenced since its
  last sweep by the scanner, don't deactivate it.  (This part was
  actually added in uvm_pdaemon.c,v 1.28.)

These changes greatly improve interactive performance during
moderate to high memory and I/O load.
2001-01-28 23:30:42 +00:00
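
A simplified sketch of the resulting inactive-scan policy, using the pmap attribute interface of this period (the function name is hypothetical, include paths are approximate, and this is not the literal uvmpd_scan_inactive() code):

    #include <uvm/uvm.h>

    static void
    scan_inactive_page_sketch(struct vm_page *pg)
    {
        if (pmap_is_referenced(pg)) {
            /* referenced while inactive: give it another chance */
            uvm_pageactivate(pg);
            return;
        }

        /* candidate for freeing: only now remove all of its mappings */
        pmap_page_protect(pg, VM_PROT_NONE);

        /* check "modified" only while PG_CLEAN is still set (cheaper) */
        if ((pg->flags & PG_CLEAN) != 0 && pmap_is_modified(pg))
            pg->flags &= ~PG_CLEAN;

        if (pg->flags & PG_CLEAN) {
            /* page can be freed */
        } else {
            /* page is dirty: start pageout instead */
        }
    }
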
thorpej
ad7259d7c6 Change uvm_analloc() to return a locked anon, update all callers,
and fix an anon locking protocol error in uvm_loanzero().
2001-01-23 02:27:39 +00:00
thorpej
13759f5310 Sprinkle some assertions:
amap_free(): Assert that the amap is locked.
amap_share_protect(): Assert that the amap is locked.
amap_wipeout(): Assert that the amap is locked.
uvm_anfree(): Assert that the anon has a reference count of 0 and is
              not locked.
uvm_anon_lockloanpg(): Assert that the anon is locked.
anon_pagein(): Assert that the anon is locked.
uvmfault_anonget(): Assert that the anon is locked.
uvm_pagealloc_strat(): Assert that the uobj or the anon is locked

And fix the problems these have uncovered:
amap_cow_now(): Lock the new anon after allocating it, and unref and
                unlock it (rather than lock!) before freeing it in case
                of an error condition.  This should fix a problem reported
		by Dan Carosone using cdrecord on an i386 MP kernel.
uvm_fault(): Case1B -- Lock the new anon after allocating it, and unlock
             it later when we unlock the old anon.
	     Case2 -- Lock the new anon after allocating it, and unlock
	     it later by passing it to uvmfault_unlockall() (we set anon
	     to NULL if we're not doing a promote fault).
2001-01-23 01:56:16 +00:00
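
The added checks have roughly the shape below (a simplified sketch; the an_ref field name is assumed from this period's uvm_anon.h, and the function body is only a placeholder comment):

    #include <sys/systm.h>      /* KASSERT */
    #include <uvm/uvm.h>

    /* Simplified sketch of the style of assertion added. */
    void
    uvm_anfree_sketch(struct vm_anon *anon)
    {
        KASSERT(anon->an_ref == 0);     /* caller already dropped the last ref */
        /* ... free the anon's resident page and swap slot here ... */
    }
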
chs
aeda8d3b77 Initial integration of the Unified Buffer Cache project. 2000-11-27 08:39:39 +00:00
thorpej
6eb78dcb4e Update a comment in uvmfault_anonget() to reflect reality, and
make uvm_fault() handle uvmfault_anonget() failure properly (i.e.
don't unlock a lock that's already unlocked).
2000-08-06 00:22:53 +00:00
mrg
dea44a9ec4 remove include of <vm/vm.h> 2000-06-27 17:29:17 +00:00
mrg
2f159a1bac remove/move more mach vm header files:
<vm/pglist.h> -> <uvm/uvm_pglist.h>
	<vm/vm_inherit.h> -> <uvm/uvm_inherit.h>
	<vm/vm_kern.h> -> into <uvm/uvm_extern.h>
	<vm/vm_object.h> -> nothing
	<vm/vm_pager.h> -> into <uvm/uvm_pager.h>

also includes a bunch of <vm/vm_page.h> include removals (due to redundancy
with <vm/vm.h>), and a scattering of other similar headers.
2000-06-26 14:20:25 +00:00
thorpej
eeb3a38cfc Use UVM_PGA_ZERO in the promote-zero-fault case of uvm_fault(). 2000-04-10 01:17:41 +00:00
chs
16f0ca3612 add support for ``swapctl -d'' (removing swap space).
improve handling of i/o errors in swap space.

reviewed by:  Chuck Cranor
2000-01-11 06:57:49 +00:00
thorpej
1da427a80a Change the pmap_enter() API slightly; pmap_enter() now returns an error
value (KERN_SUCCESS or KERN_RESOURCE_SHORTAGE) indicating if it succeeded
or failed.  Change the `wired' and `access_type' arguments to a single
`flags' argument, which includes the access type, and flags:

	PMAP_WIRED	the old `wired' boolean
	PMAP_CANFAIL	pmap_enter() is allowed to fail

If PMAP_CANFAIL is not specified, the pmap should behave as it always
has in the face of a drastic resource shortage: fall over dead.

Change the fault handler to deal with failure (which indicates resource
shortage) by unlocking everything, waiting for the pagedaemon to free
more memory, then retrying the fault.
1999-11-13 00:24:38 +00:00
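
A sketch of a call site under the new interface; the helper name and the wait-channel string are hypothetical, the access type is assumed equal to the protection, and at this date the return values are still KERN_SUCCESS / KERN_RESOURCE_SHORTAGE, per the commit:

    #include <uvm/uvm.h>

    static int
    enter_mapping_sketch(struct vm_map *map, vaddr_t va, struct vm_page *pg,
        vm_prot_t prot, boolean_t wired)
    {
        int rv;

        /* access type and PMAP_* flags now travel in a single argument */
        rv = pmap_enter(map->pmap, va, VM_PAGE_TO_PHYS(pg), prot,
            prot | PMAP_CANFAIL | (wired ? PMAP_WIRED : 0));
        if (rv != KERN_SUCCESS) {
            /*
             * Resource shortage: wait for the pagedaemon to free memory,
             * then let the caller retry the fault, as the commit describes
             * uvm_fault() now doing.
             */
            uvm_wait("entersketch");
        }
        return rv;
    }
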
chs
f3a668ed84 eliminate the PMAP_NEW option by making it required for all ports.
ports which previously had no support for PMAP_NEW now implement
the pmap_k* interfaces as wrappers around the non-k versions.
1999-09-12 01:16:55 +00:00
thorpej
3f176180d5 Garbage collect thread_sleep()/thread_wakeup() left over from the old
Mach VM code.  Also nuke iprintf(), which was no longer used anywhere.

Add proclist locking where appropriate.
1999-07-22 22:58:38 +00:00
cgd
4eb46531af make sure 'wide' fault handling is actually done only once per fault.
('narrow' was mistakenly set to FALSE instead of TRUE.)  Committed after
discussion with chuq.
1999-07-19 19:02:22 +00:00
thorpej
ff05773b4a Back out the change I made yesterday. It seems to cause some trouble
for some folks.
1999-07-11 17:47:12 +00:00
thorpej
a0555db3e0 Simplify uvm_fault_unwire_locked() a little. 1999-07-10 21:46:56 +00:00
thorpej
3ebbe095e0 Change the pmap_extract() interface to:
boolean_t pmap_extract(pmap_t, vaddr_t, paddr_t *);
This makes it possible for the pmap to map physical address 0.
1999-07-08 18:05:21 +00:00
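
A minimal caller-side sketch of the new interface (the helper name is hypothetical; include paths are approximate):

    #include <sys/systm.h>      /* panic */
    #include <uvm/uvm.h>

    /*
     * New convention: the physical address comes back through a pointer and
     * the boolean says whether a mapping exists at all, so a physical
     * address of 0 is no longer ambiguous.
     */
    static paddr_t
    kva_to_phys_sketch(vaddr_t va)
    {
        paddr_t pa;

        if (pmap_extract(pmap_kernel(), va, &pa) == FALSE)
            panic("kva_to_phys_sketch: va 0x%lx not mapped", (u_long)va);
        return pa;
    }
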
thorpej
0288ffb53a pmap_change_wiring() -> pmap_unwire(). 1999-06-17 19:23:20 +00:00
thorpej
f5a527bb4e Remove pmap_pageable(); no pmap implements it, and it is not really useful,
because pmap_enter()/pmap_change_wiring() (soon to be pmap_unwire())
communicate the information in greater detail.
1999-06-17 18:21:21 +00:00
thorpej
d1d9b366cd When unwiring a range in uvm_fault_unwire_locked(), don't call
pmap_change_wiring(...,FALSE) unless the map entry claims the address
is unwired.  This fixes the following scenario, as described on
tech-kern@netbsd.org on Wed 6/16/1999 12:25:23:

	- User mlock(2)'s a buffer, to guarantee it will never become
	  non-resident while he is using it.

	- User then does physio to that buffer.  Physio calls uvm_vslock()
	  to lock down the pages and ensure that page faults do not happen
	  while the I/O is in progress (possibly in interrupt context).

	- Physio does the I/O.

	- Physio calls uvm_vsunlock().  This calls uvm_fault_unwire().

	  >>> HERE IS WHERE THE PROBLEM OCCURS <<<

	  uvm_fault_unwire() calls pmap_change_wiring(..., FALSE),
	  which now gives the pmap free rein to recycle the mapping
	  information for that page, which is illegal; the mapping is
	  still wired (due to the mlock(2)), but now access of the
	  page could cause a non-protection page fault (disallowed).

	  NOTE: This could eventually lead to a panic when the user
	  subsequently munlock(2)'s the buffer and the mapping info
	  has been recycled for use by another mapping!
1999-06-16 23:02:40 +00:00
thorpej
b861180119 * Rename uvm_fault_unwire() to uvm_fault_unwire_locked(), and require that
  the map be at least read-locked to call this function.  This requirement
  will be taken advantage of in a future commit.
* Write a uvm_fault_unwire() wrapper which read-locks the map and calls
  uvm_fault_unwire_locked().
* Update the comments describing the locking constraints of uvm_fault_wire()
  and uvm_fault_unwire().
1999-06-16 22:11:23 +00:00
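
The wrapper described in the second bullet is essentially a lock/call/unlock shim; a sketch, modulo the exact prototypes and locking macros of this period:

    #include <uvm/uvm.h>

    /* Sketch: read-lock the map, then defer to the _locked variant. */
    void
    uvm_fault_unwire(vm_map_t map, vaddr_t start, vaddr_t end)
    {
        vm_map_lock_read(map);
        uvm_fault_unwire_locked(map, start, end);
        vm_map_unlock_read(map);
    }
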
thorpej
23c6eb95d3 Remove an incorrect-and-no-longer-relevant comment. 1999-06-16 18:43:28 +00:00
thorpej
ee9703dea9 Add a macro to test if a map entry is wired. 1999-06-16 00:29:04 +00:00
thorpej
2c3dc83a64 Keep interrupt-safe maps on an additional queue. In uvm_fault(), if we're
looking up a kernel address, check to see if the address is on this
"interrupt-safe" list.  If so, return failure immediately.  This prevents
a locking screw if a page fault is taken on an interrupt-safe map in or
out of interrupt context.
1999-06-04 23:38:41 +00:00
thorpej
acf81da21e A page fault on a non-pageable map is always fatal. 1999-06-02 23:26:21 +00:00
thorpej
8d8badbd8f Make uvm_fault_unwire() take a vm_map_t, rather than a pmap_t, for
consistency.  Use this opportunity to check for intrsafe map use
in this routine (which is illegal).
1999-05-28 20:49:51 +00:00
thorpej
195c1a2741 Pass an access_type to uvm_fault_wire(), which it forwards on to
uvm_fault().
1999-05-26 00:32:42 +00:00
chs
a5d3e8dae9 when wiring swap-backed pages, clear the PG_CLEAN flag before
releasing any swap resources.  if we don't do this, we can
end up with a clean, swap-backed page, which is illegal.
tracked down by Bill Sommerfeld, fixes PR 7578.
1999-05-19 06:14:15 +00:00
chs
f455dd6596 add a `flags' argument to uvm_pagealloc_strat().
define a flag UVM_PGA_USERESERVE to allow non-kernel object
allocations to use pages from the reserve.
use the new flag for allocations in pmap modules.
1999-04-11 04:04:04 +00:00
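
For example, a pmap module allocating a page-table page could now do something like this (a sketch; the helper name is hypothetical and the argument order is assumed to follow the uvm_pagealloc() convenience macro of this period):

    #include <sys/systm.h>      /* panic */
    #include <uvm/uvm.h>

    /*
     * Sketch: an allocation in a pmap module may now dip into the page
     * reserve via UVM_PGA_USERESERVE rather than failing outright.
     */
    static struct vm_page *
    alloc_ptp_sketch(void)
    {
        struct vm_page *pg;

        pg = uvm_pagealloc(NULL, 0, NULL, UVM_PGA_USERESERVE);
        if (pg == NULL)
            panic("alloc_ptp_sketch: no free pages for page table");
        return pg;
    }
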
mycroft
671c65c6da Duuuh. Back and front pages should have an access_type of 0, since we don't
know they're going to be used.  What was I thinking??
1999-03-29 05:43:31 +00:00
mycroft
0ce76ca08b Reduce the access_type for copy-on-write pages in the front and back regions. 1999-03-28 21:48:50 +00:00
mycroft
8ed77cabd0 Fix a case I missed in the previous. 1999-03-28 21:01:25 +00:00
mycroft
4831b815f5 Only turn off VM_PROT_WRITE for COW pages; not VM_PROT_EXECUTE. 1999-03-28 19:53:49 +00:00
mycroft
31a2536cd0 Add a new `access type' argument to pmap_enter(). This indicates what type of
memory access a mapping was caused by.  This is passed through from uvm_fault()
and udv_fault(), and in most other cases is 0.
The pmap module may use this to preset R/M information.  On MMUs which require
R/M emulation, the implementation may preset the bits and avoid taking another
fault.  On MMUs which keep R/M information in hardware, the implementation may
preset its cached bits to speed up the next call to pmap_is_modified() or
pmap_is_referenced().
1999-03-26 21:58:39 +00:00
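
A call-site sketch under the interface as it stands at this date; the helper name is hypothetical and the six-argument order is an assumption about this period's pmap_enter() (the `wired' boolean and the access type were later folded into one flags argument, per the 1999-11-13 entry above):

    #include <uvm/uvm.h>

    static void
    enter_after_write_fault_sketch(vaddr_t va, paddr_t pa)
    {
        pmap_enter(pmap_kernel(), va, pa,
            VM_PROT_READ | VM_PROT_WRITE,   /* prot */
            FALSE,                          /* wired */
            VM_PROT_WRITE);                 /* access_type: this was a write fault */
    }
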
chs
d97d75d81b add uvmexp.swpgonly and use it to detect out-of-swap conditions. 1999-03-26 17:34:15 +00:00
mrg
a0139bc39d remove now >1 year old pre-release message. 1999-03-25 18:48:49 +00:00
mrg
08fd4f3f47 80 cols. 1999-01-31 09:27:18 +00:00
chuck
44f5fc2839 cleanup/reorg:
- break anon related functions out of uvm_amap.c and put them in their own
  file (uvm_anon.c).  includes breaking up uvm_anon_init into an amap init
  function and an anon init function
- ensure that only functions within the amap module access amap structure
  fields (add macros to amap api as needed)
1999-01-24 23:53:14 +00:00
chuck
cc2f45083b update outdated an_swslot comments 1998-11-20 19:37:06 +00:00
mrg
7c0d69c3ab minor KNF nits 1998-11-07 05:50:19 +00:00
chs
28411139b3 be consistent with locking of amaps and anons when freeing them. 1998-11-04 07:07:22 +00:00
chs
549cd579e5 shift by PAGE_SHIFT instead of multiplying or dividing by PAGE_SIZE. 1998-10-18 23:49:59 +00:00
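
That is, roughly (illustrative only, not from the commit; types and include paths are assumptions):

    #include <sys/types.h>
    #include <machine/vmparam.h>    /* PAGE_SIZE / PAGE_SHIFT on most ports */

    /* The preferred shift form versus the old arithmetic. */
    static __inline vsize_t
    bytes_to_pages_sketch(vsize_t nbytes)
    {
        return nbytes >> PAGE_SHIFT;    /* was: nbytes / PAGE_SIZE */
    }
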
tv
000978aaca Check for gcc the Right way when quashing -Wuninitialized goop. 1998-10-16 19:34:57 +00:00
chuck
1b59a238c4 remove unused share map code from UVM:
- simplify uvm_faultinfo in uvm_fault.h (parent map tracking no longer needed)
- adjust locking and lookup functions in uvm_fault_i.h to reflect the above
- replace ufi.rvaddr with ufi.orig_rvaddr in uvm_fault.c since rvaddr is
  no longer needed.
- no need to worry about share map translations in uvm_fault().  simplify.
1998-10-11 23:07:42 +00:00
eeh
a2dd74ed79 Merge paddr_t changes into the main branch. 1998-08-13 02:10:37 +00:00
mark
7689b22688 Use the sparc's GCC lossage fix for the arm32 port as well. Problem appears
to be a compiler bug resulting in a 'variable possibly used uninitialised'
warning when optimisation is used.
1998-06-02 20:51:24 +00:00
kleink
182e12f413 Remove inclusions of syscall (and syscall argument) related header files;
we don't need them here.
1998-05-05 20:51:04 +00:00