Commit Graph

429 Commits

Author SHA1 Message Date
chs
11fe9ca446 use ubc_winshift instead of ubc_winsize in pmaps to set up kernel
virtual space.  the latter isn't initialized yet when the value is needed.
fixes PR 12440.
2001-03-21 03:16:05 +00:00
simonb
d618ec62ad In sys_obreak(), the return value of atop() was being used to change
the process dsize for both positive and negative changes.  Since atop()
casts its result to a paddr_t (which is unsigned), negative changes in
process data size resulted in unrealistic dsizes being set.  Use
"dsize -= atop(-diff)" for a negative diffs.  Fixes the "Impossible
process sizes" mentioned on current-users.

Unsigned cast catch and much debugging help from Martin Laubach.
2001-03-19 02:25:33 +00:00
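
A minimal sketch of the sign issue described above, assuming an atop()-style macro that casts to an unsigned paddr_t; the function, types and PAGE_SHIFT here are illustrative stand-ins, not the actual sys_obreak() code:

    typedef unsigned long paddr_t;            /* unsigned, as the message notes */
    #define PAGE_SHIFT 12
    #define atop(x)    ((paddr_t)(x) >> PAGE_SHIFT)

    /* adjust_dsize() is a made-up stand-in for the sys_obreak() bookkeeping. */
    void
    adjust_dsize(unsigned long *dsize, long diff)
    {
            if (diff > 0)
                    *dsize += atop(diff);
            else
                    /*
                     * atop(diff) would convert the negative diff to a huge
                     * unsigned value before shifting; negate first instead.
                     */
                    *dsize -= atop(-diff);
    }
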
chs
c40daf0aed change uvm_winsize to uvm_winshift so that we can avoid division
by a non-constant value.
2001-03-19 00:29:03 +00:00
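
A hedged sketch of the motivation: dividing by a variable window size costs a run-time division, while shifting by a precomputed log2 does not. Only the ubc_winshift name comes from this log; the surrounding function is hypothetical:

    extern int ubc_winshift;          /* log2 of the UBC window size */

    static inline unsigned long
    ubc_window_index(unsigned long offset)
    {
            /*
             * "offset / ubc_winsize" would compile to a run-time division,
             * because ubc_winsize is not a constant the compiler can reduce
             * to a shift; using the shift count directly avoids it.
             */
            return offset >> ubc_winshift;
    }
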
chs
edb041f0d1 return the real error from pgo_fault(). 2001-03-17 04:01:24 +00:00
chs
19accb3d77 return the real error from VOP_GETPAGES(). 2001-03-17 04:01:02 +00:00
chs
ac3bc537bd eliminate the KERN_* error codes in favor of the traditional E* codes.
the mapping is:

KERN_SUCCESS			0
KERN_INVALID_ADDRESS		EFAULT
KERN_PROTECTION_FAILURE		EACCES
KERN_NO_SPACE			ENOMEM
KERN_INVALID_ARGUMENT		EINVAL
KERN_FAILURE			various, mostly turn into KASSERTs
KERN_RESOURCE_SHORTAGE		ENOMEM
KERN_NOT_RECEIVER		<unused>
KERN_NO_ACCESS			<unused>
KERN_PAGES_LOCKED		<unused>
2001-03-15 06:10:32 +00:00
chs
dd82ad8e2c eliminate the VM_PAGER_* error codes in favor of the traditional E* codes.
the mapping is:

VM_PAGER_OK		        0
VM_PAGER_BAD		        <unused>
VM_PAGER_FAIL		        <unused>
VM_PAGER_PEND		        0 (see below)
VM_PAGER_ERROR		        EIO
VM_PAGER_AGAIN		        EAGAIN
VM_PAGER_UNLOCK		        EBUSY
VM_PAGER_REFAULT	        ERESTART

for async i/o requests, it used to be possible for the request to
be converted to sync, and the pager would return VM_PAGER_OK or VM_PAGER_PEND
to indicate whether the caller should perform post-i/o cleanup.
this is no longer allowed; pagers must now return 0 to indicate that
the async i/o was successfully started, and the caller never needs to
worry about doing the post-i/o cleanup.
2001-03-10 22:46:45 +00:00
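
A hedged sketch of the new calling convention for an async request, using a made-up pgo_put_async() placeholder rather than the real pager entry point:

    struct uvm_object;
    struct vm_page;
    /* Placeholder declaration only; not the real pager API. */
    int pgo_put_async(struct uvm_object *, struct vm_page **, int);

    int
    start_async_pageout(struct uvm_object *uobj, struct vm_page **pps, int npages)
    {
            int error;

            error = pgo_put_async(uobj, pps, npages);
            if (error)
                    return error;     /* an E* code: the i/o was never started */
            /*
             * Under the new convention, 0 means the async i/o was started;
             * post-i/o cleanup happens in the i/o-done path (uvm_aio_aiodone()),
             * never in this caller.
             */
            return 0;
    }
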
chs
83d071a318 add UBC memory-usage balancing. we track the number of pages in use for
each of the basic types (anonymous data, executable image, cached files)
and prevent the pagedaemon from reusing a given page if that would reduce
the count of that type of page below a sysctl-settable minimum threshold.
the thresholds are controlled via three new sysctl tunables:
vm.anonmin, vm.vnodemin, and vm.vtextmin.  these tunables are the
percentages of pageable memory reserved for each usage, and we do not allow
the sum of the minimums to be more than 95% so that there's always some
memory that can be reused.
2001-03-09 01:02:10 +00:00
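
A minimal sketch of the 95% cap on the sum of the minimums, written as a hypothetical setter; the real tunables are reached through sysctl as vm.anonmin, vm.vnodemin and vm.vtextmin:

    #include <errno.h>

    /* Illustrative setter, not the actual sysctl handler. */
    int anonmin, vnodemin, vtextmin;  /* percentages of pageable memory */

    int
    set_usage_minimums(int anon, int vnode, int vtext)
    {
            /* Keep at least 5% of pageable memory reusable by any consumer. */
            if (anon + vnode + vtext > 95)
                    return EINVAL;
            anonmin = anon;
            vnodemin = vnode;
            vtextmin = vtext;
            return 0;
    }
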
enami
79dbb12278 When shrinking file size, don't dispose of a page still in use. 2001-02-22 01:02:09 +00:00
chs
19b7b64642 clean up DIAGNOSTIC checks, use KASSERT(). 2001-02-18 21:19:08 +00:00
chs
7b76ca8254 in uvn_flush(), add a fast path for the case where the vnode has no pages.
update the comment above this function while I'm here.
2001-02-18 19:40:25 +00:00
chs
4808c1dfb5 in uvm_aio_aiodone(), don't mark the page(s) clean if the pageout
failed because we failed to acquire some resource needed to initiate
the pageout (such as failing to lock an indirect buffer) rather than
a hard i/o error.  in this case we just want to reactivate the page(s)
so that we'll try to write them again later.

while I'm here, clean up some DIAGNOSTIC code.
2001-02-18 19:26:50 +00:00
pk
dca7b5b472 SWAP_DUMPDEV, SWAP_OFF cases: make sure to release the vnode being operated on. 2001-02-12 11:50:50 +00:00
eeh
4589ac3292 When recycling a vm_map, resize it to the new process address space limits. 2001-02-11 01:34:23 +00:00
thorpej
b016744976 Don't uvm_deallocate() the address space in exit1(). The address
space is already torn down in uvmspace_free() when the vmspace
reference count reaches 0.  Move the shmexit() call into uvmspace_free().

Note that there is a beneficial side-effect of deferring the unmap
to uvmspace_free() -- on systems where TLB invalidations are
particularly expensive, the unmapping of the address space won't
have to cause TLB invalidations; uvmspace_free() is going to be
run in a context other than the exiting process's, so the "pmap is
active" test will evaluate to FALSE in the pmap module.
2001-02-10 05:05:27 +00:00
chs
4be5f47040 remove a debug printf() that has outlived its usefulness. 2001-02-08 06:43:05 +00:00
eeh
ec22628573 Move maxdmap and maxsmap where they belong and make them big enough. 2001-02-06 19:54:43 +00:00
eeh
4380259bc7 Specify a process' address space limits for uvmspace_exec(). 2001-02-06 17:01:51 +00:00
chs
43eb344e3f in uvn_flush(), interpret a "stop" value of 0 as meaning all pages at
offsets equal to or higher than "start".  use this in uvm_vnp_setsize()
instead of the vnode's size since there can be pages past EOF.
2001-02-06 10:53:23 +00:00
chs
4d5451090e in uvm_map_clean(), fix the case where the start offset is within the last
entry in the map.  the old code would walk around the end of the linked list,
through the header entry, and keep going from the first map entry until it
found a gap in the map, at which point it would return an error.  if the map
had no gaps then it would loop forever.  reported by k-abe@cs.utah.edu.
while I'm here, clean up this function a bit.

also, use MIN() instead of min(), since the latter takes arguments of
type "int" but we're passing it values of type "vaddr_t", which can be
a larger size.
2001-02-05 11:29:54 +00:00
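
A small sketch of the truncation problem behind the MIN()/min() change, assuming a 64-bit vaddr_t and a min() that takes int arguments, as the message states; the names here are illustrative:

    typedef unsigned long vaddr_t;    /* 64-bit on LP64 platforms */

    static int min_int(int a, int b) { return a < b ? a : b; }  /* like min() */
    #define MIN(a, b)   ((a) < (b) ? (a) : (b))                 /* type-preserving */

    void
    example(vaddr_t end, vaddr_t entry_end)
    {
            vaddr_t bad  = min_int(end, entry_end); /* arguments narrowed to int */
            vaddr_t good = MIN(end, entry_end);     /* compared at full width */
            (void)bad; (void)good;
    }
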
mrg
6e26ebea51 allow ubchist to be printed from the uvmhist-merging uvm_hist() 2001-02-04 10:55:58 +00:00
mrg
0c151f32c8 add a KASSERT(pp) in the uvm_pagermapin() loop. 2001-02-04 10:55:12 +00:00
enami
dda0d50982 Explicitly panic if failed to allocate some memory during initialization. 2001-02-02 01:55:52 +00:00
thorpej
1779f8f71b Page scanner improvements, behavior is actually a bit more like
Mach VM's now.  Specific changes:
- Pages now need not have all of their mappings removed before being
  put on the inactive list.  They only need to have the "referenced"
  attribute cleared.  This makes putting pages onto the inactive list
  much more efficient.  In order to eliminate redundant clearings of
  "refrenced", callers of uvm_pagedeactivate() must now do this
  themselves.
- When checking the "modified" attribute for a page (for clearing
  PG_CLEAN), make sure to only do it if PG_CLEAN is currently set on
  the page (saves a potentially expensive pmap operation).
- When scanning the inactive list, if a page is referenced, reactivate
  it (this part was actually added in uvm_pdaemon.c,v 1.27).  This
  works properly now that pages on the inactive list are allowed to
  have mappings.
- When scanning the inactive list and considering a page for freeing,
  remove all mappings, and then check the "modified" attribute if the
  page is marked PG_CLEAN.
- When scanning the active list, if the page was referenced since its
  last sweep by the scanner, don't deactivate it.  (This part was
  actually added in uvm_pdaemon.c,v 1.28.)

These changes greatly improve interactive performance during
moderate to high memory and I/O load.
2001-01-28 23:30:42 +00:00
thorpej
1cdff48674 Put the extern decl of uvm_vnodeops in uvm_object.h 2001-01-28 22:23:04 +00:00
thorpej
5849b86934 Use UVM_OBJ_IS_VNODE(). 2001-01-28 22:14:52 +00:00
thorpej
d1c3f6bab3 Define a UVM_OBJ_IS_VNODE() macro to test if an object is a vnode. 2001-01-28 22:14:28 +00:00
thorpej
37247109d1 When considering a page for deactivation, check to see if the
page has been referenced since the last time it was considered.
If it was, don't deactivate the page.
2001-01-25 00:24:48 +00:00
mycroft
91a4c18e32 Put back the pmap_is_referenced() check from the original UVM code in the
inactive list scans.  Without this, the referenced bit was essentially ignored.
2001-01-25 00:10:03 +00:00
thorpej
ad7259d7c6 Change uvm_analloc() to return a locked anon, update all callers,
and fix an anon locking protocol error in uvm_loanzero().
2001-01-23 02:27:39 +00:00
thorpej
13759f5310 Sprinkle some assertions:
amap_free(): Assert that the amap is locked.
amap_share_protect(): Assert that the amap is locked.
amap_wipeout(): Assert that the amap is locked.
uvm_anfree(): Assert that the anon has a reference count of 0 and is
              not locked.
uvm_anon_lockloanpg(): Assert that the anon is locked.
anon_pagein(): Assert that the anon is locked.
uvmfault_anonget(): Assert that the anon is locked.
uvm_pagealloc_strat(): Assert that the uobj or the anon is locked

And fix the problems these have uncovered:
amap_cow_now(): Lock the new anon after allocating it, and unref and
                unlock it (rather than lock!) before freeing it in case
                of an error condition.  This should fix a problem reported
		by Dan Carosone using cdrecord on an i386 MP kernel.
uvm_fault(): Case1B -- Lock the new anon after allocating it, and unlock
             it later when we unlock the old anon.
	     Case2 -- Lock the new anon after allocating it, and unlock
	     it later by passing it to uvmfault_unlockall() (we set anon
	     to NULL if we're not doing a promote fault).
2001-01-23 01:56:16 +00:00
thorpej
f4395a4eae splimp() -> splvm() 2001-01-14 02:10:01 +00:00
pk
f134ba4486 atop(): cast argument to `paddr_t' (instead of `u_long') to avoid
truncating the address.
2001-01-09 13:55:20 +00:00
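
A hedged before/after sketch of the cast change; PAGE_SHIFT, the exact macro body and the types are assumptions, and the scenario assumes a port whose physical addresses are wider than u_long:

    /*
     * Assume a port where physical addresses are wider than u_long
     * (e.g. a 32-bit CPU with a 36-bit physical address space).
     */
    typedef unsigned int       u_long32;    /* stand-in for a 32-bit u_long */
    typedef unsigned long long paddr_t;     /* wide physical-address type */
    #define PAGE_SHIFT 12

    /* before: the high physical-address bits are lost in the cast */
    #define atop_old(x)  ((u_long32)(x) >> PAGE_SHIFT)

    /* after: convert to paddr_t so the address is not truncated */
    #define atop_new(x)  ((paddr_t)(x) >> PAGE_SHIFT)
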
chs
f0ff6fc897 in uvn_flush(), when PGO_SYNCIO is specified then we should wait for
pending i/os to complete before returning even if PGO_CLEANIT is not
specified.  this fixes two races:

 (1) NFS write rpcs vs. setattr operations which truncate the file.
     if the truncate doesn't wait for pending writes to complete then
     a later write rpc completion can undo the effect of the truncate.
     this problem has been reported by several people.

 (2) write i/os in disk-based filesystem vs. the disk block being
     freed by a truncation, allocated to a new file, and written
     again with different data.  if the disk driver reorders the requests
     and does the second i/o first, the old data will clobber the new,
     corrupting the new file.  I haven't heard of anyone experiencing
     this problem yet, but it's fixed now anyway.
2001-01-08 06:21:13 +00:00
thorpej
4d4b2b5626 Nevermind that it's silly to include PROT_EXEC even if a vnode
doesn't have the exec bit set, we need to have PROT_EXEC set
in order for some expected mmap/mprotect behavior to work, so
do the last bit slightly differently: if udv_attach() fails, and
the protection (NOT maxprot) doesn't include PROT_EXEC, then clear
PROT_EXEC from maxprot and try udv_attach() again.

Sigh, mmap really needs to be rototilled.
2001-01-08 01:35:03 +00:00
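
A hedged sketch of the retry logic described above; udv_attach_simplified() is a stand-in with a simplified signature, not the real udv_attach(), and the surrounding function is hypothetical:

    #include <sys/mman.h>             /* PROT_EXEC */

    struct uvm_object;
    /* Placeholder declaration only; the real udv_attach() differs. */
    struct uvm_object *udv_attach_simplified(int maxprot);

    struct uvm_object *
    attach_device_object(int prot, int *maxprotp)
    {
            struct uvm_object *uobj;

            uobj = udv_attach_simplified(*maxprotp);
            if (uobj == NULL && (prot & PROT_EXEC) == 0) {
                    /*
                     * The caller did not ask for PROT_EXEC, so drop it from
                     * maxprot and try the attach once more.
                     */
                    *maxprotp &= ~PROT_EXEC;
                    uobj = udv_attach_simplified(*maxprotp);
            }
            return uobj;
    }
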
thorpej
781516b080 Only include PROT_EXEC in maxprot if the user specified PROT_EXEC
in the mmap() call.  maxprot is used to create device mappings,
and always including PROT_EXEC causes the mapping to fail on the Alpha
when mapping a non-RAM offset of /dev/mem (which may be sparse, so
instruction fetch from there is disallowed).
2001-01-07 06:16:46 +00:00
enami
f306f72978 Use cast where appropriate to avoid integer overflow. 2001-01-04 06:07:18 +00:00
chs
1e651a1688 remove some more leftovers from Mach. 2000-12-28 08:24:55 +00:00
chs
89b005fc27 when we fail to allocate anons to represent new swap space,
just return an error rather than panicking.
2000-12-27 09:17:04 +00:00
chs
910b4f2e20 fix some types so that files larger than 4GB work. 2000-12-27 09:01:45 +00:00
chs
de569051ad VOP_GETPAGES() returns an E* error code, not a VM_PAGER_* error code. 2000-12-27 04:44:42 +00:00
enami
6ff137de16 Place a name of extent in a struct swapdev instead of dynamically
allocating it.
2000-12-23 12:13:05 +00:00
enami
4e59adc1bb s/UBC_WINSIZE/ubc_winsize/g except the variable initialization. 2000-12-21 03:37:59 +00:00
chs
fc03073896 expose the tunables ubc_nwins and ubc_winsize in uvm_param.h.
add the space used by UBC mappings to the initial PTE calculations
for pmaps that do that (mips and alpha).
2000-12-21 00:52:01 +00:00
chs
34a059b354 in uvn_flush(), don't deactivate busy pages. 2000-12-16 06:17:09 +00:00
chs
cf25b3fa04 continue processing the inactive queue past the free target when
we're enforcing the limit on the number of vnode pages.
2000-12-13 17:03:32 +00:00
enami
4625dcde2e Use a single const char array instead of over 200 string constants. 2000-12-13 08:06:11 +00:00
chs
837f5c9bd6 we don't need VM_PROT_EXECUTE for UBC mappings. 2000-12-10 19:28:09 +00:00
chs
cae7ac2e3a in uvm_pagermapin(), for now, don't pass the flag to pmap_enter()
which presets the page modified bit if the page is already initialized.
we don't actually want to modify such pages.
2000-12-09 23:26:27 +00:00
chs
a8609aaac8 in uvn_findpage(), only increment the counter of vnode pages
if we succeed in allocating a page.

from Lars Heidieker <lars@heidieker.de> in PR 11636.
2000-12-06 03:37:30 +00:00