Commit Graph

458 Commits

Author SHA1 Message Date
thorpej
e2a791df22 Use pool_init() rather than pool_create(). 2001-05-09 23:20:59 +00:00
fvdl
defa9bf05f Avoid potential cases of sleeping while holding a spinlock. Pay attention
to SWF_FAKE when finding a swap device. GC swapdrum_add; it was only
a few lines long and called once, so just inline the code there.
2001-05-09 19:21:02 +00:00
thorpej
0b8c6fcc77 Fix a silly mistake I made when reworking the uvm inactive list
some time ago.  The mistake was to check that the page had not been
referenced since the last active scan before moving it to inactive.
Now we just clear the reference bit and move it to inactive (which is
where the second clock hand sweep occurs).
2001-05-07 22:01:28 +00:00
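
A minimal sketch of the two clock hands described in the commit above, using hypothetical page and flag names rather than UVM's real structures: the active scan clears the reference bit and deactivates unconditionally, and the inactive scan (the second hand) is where a referenced page earns reactivation.

    #include <stdbool.h>

    /* Hypothetical page record; UVM's struct vm_page is more involved. */
    struct page {
            bool referenced;    /* reference bit as reported by the pmap */
            bool active;        /* true: active list, false: inactive list */
    };

    /* Active scan: do not test the reference bit first -- just clear it
     * and move the page to the inactive list. */
    static void
    active_scan(struct page *pg)
    {
            pg->referenced = false;
            pg->active = false;
    }

    /* Inactive scan (second clock hand): a page referenced while
     * inactive gets reactivated; otherwise it may be reclaimed. */
    static bool
    inactive_scan(struct page *pg)
    {
            if (pg->referenced) {
                    pg->referenced = false;
                    pg->active = true;
                    return false;   /* kept */
            }
            return true;            /* reclaim candidate */
    }
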
thorpej
04f36fcb9e Remove a comment which is no longer true. From Artur Grabowski. 2001-05-06 20:12:09 +00:00
ross
6b9d94cd8c Fix overflow errors in brk(2). 2001-05-06 04:32:08 +00:00
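
The commit above is a one-liner; as a hedged illustration of the class of problem, a break-setting path has to guard against the new break wrapping or exceeding the data-segment limit rather than trusting unsigned arithmetic. The names below are invented, not the sys_obreak() code.

    #include <errno.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical validation of a requested program break. */
    static int
    check_new_break(uintptr_t dstart, uintptr_t nbreak, size_t datamax)
    {
            if (nbreak < dstart)            /* below the segment, or wrapped */
                    return EINVAL;
            if (nbreak - dstart > datamax)  /* would exceed the data limit */
                    return ENOMEM;
            return 0;
    }
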
thorpej
31fafb678f Support dynamic sizing of the page color bins. We also support
dynamically re-coloring pages; as machine-dependent code discovers
the size of the system's caches, it may call uvm_page_recolor() with
the new number of colors to use.  If the new number of colors is
smaller than (or equal to) the current number of colors, then uvm_page_recolor()
is a no-op.

The system defaults to one bucket if machine-dependent code does not
initialize uvmexp.ncolors before uvm_page_init() is called.

Note that the number of color bins should be initialized to something
reasonable as early as possible -- for many early memory allocations,
we live with the consequences of the page choice for the lifetime of
the boot.
2001-05-02 01:22:19 +00:00
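
A hedged sketch of what dynamic color counts imply, with invented names (not uvm_page_recolor() itself): the color of a page is derived from its physical page number, the count defaults to one bucket, and shrinking the count is skipped because keeping extra colors is still correct.

    /* ncolors is assumed to be a power of two, as a value derived from
     * cache geometry would be. */
    static unsigned int ncolors = 1;        /* default: a single bucket */

    static unsigned int
    page_color(unsigned long pfn)           /* pfn: physical page number */
    {
            return pfn & (ncolors - 1);
    }

    static void
    page_recolor(unsigned int newncolors)
    {
            /* Shrinking (or no change) is a no-op: having more colors
             * than the cache needs costs little and stays correct. */
            if (newncolors <= ncolors)
                    return;
            /* ...otherwise rebucket the free lists (omitted)... */
            ncolors = newncolors;
    }
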
thorpej
01e2971ba2 Add the number of page colors to uvmexp. 2001-05-01 19:36:56 +00:00
enami
1132ef7f20 Use simple do {} while () loop instead of for {} loop + extra test/variable. 2001-05-01 14:02:56 +00:00
enami
d211385f8a Fix second level indentation in recent commit. 2001-05-01 13:42:34 +00:00
thorpej
220bcf69ac Garbage-collect a comment that has not been applicable since Mach. 2001-05-01 03:01:18 +00:00
thorpej
cf67ac7122 Per discussion w/ chuck and chuck, restructure the md page stuff
to use a structure called "vm_page_md", and use __HAVE_VM_PAGE_MD
and __HAVE_PMAP_PHYSSEG.
2001-05-01 02:19:13 +00:00
thorpej
2b27ac7a99 Add a VM_MDPAGE_MEMBERS macro that defines pmap-specific data for
each vm_page structure.  Add a VM_MDPAGE_INIT() macro to init this
data when pages are initialized by UVM.  These macros are mandatory,
but ports may #define them to nothing if they are not needed/used.

This deprecates struct pmap_physseg.  As a transitional measure,
allow a port to #define PMAP_PHYSSEG so that it can continue to
use it until its pmap is converted to use VM_MDPAGE_MEMBERS.

Use all this stuff to eliminate a lot of extra work in the Alpha
pmap module (it's smaller and faster now).  Changes to other pmap
modules will follow.
2001-04-29 22:44:31 +00:00
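
As a hedged example of the shape of these hooks, a port's <machine/vmparam.h> might carry something like the fragment below; the pv-list members are invented for illustration and not taken from any particular pmap.

    #define __HAVE_VM_PAGE_MD

    struct pv_entry;                        /* per-mapping record (port-specific) */

    struct vm_page_md {
            struct pv_entry *pvh_list;      /* mappings of this page */
            int              pvh_attrs;     /* modified/referenced attributes */
    };

    #define VM_MDPAGE_MEMBERS                                       \
            struct vm_page_md mdpage;

    #define VM_MDPAGE_INIT(pg)                                      \
    do {                                                            \
            (pg)->mdpage.pvh_list = NULL;                           \
            (pg)->mdpage.pvh_attrs = 0;                             \
    } while (/* CONSTCOND */ 0)

    /* A port with no per-page pmap data may define both to nothing. */
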
thorpej
cda7baa0d5 Implement page coloring, using a round-robin bucket selection
algorithm (Solaris calls this "Bin Hopping").

This implementation currently relies on MD code to define a
constant defining the number of buckets.  This will change
reasonably soon (MD code will be able to dynamically size
the bucket array).
2001-04-29 04:23:20 +00:00
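
A hedged sketch of the round-robin ("bin hopping") start-bucket choice, with an invented constant standing in for the MD-defined bucket count mentioned above:

    #define NPAGEBUCKETS    4               /* stand-in for the MD constant */

    static unsigned int nextbucket;         /* advances on every allocation */

    /* Pick the bucket to try first; the allocator falls back to the
     * other buckets if this one is empty. */
    static unsigned int
    choose_bucket(void)
    {
            unsigned int b = nextbucket;

            nextbucket = (nextbucket + 1) % NPAGEBUCKETS;
            return b;
    }
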
marcus
d317b08ca6 STDC cleanup: extra token not allowed after #endif. 2001-04-27 00:14:47 +00:00
thorpej
93b0af8f60 pmap_resident_count() always exists. Besides, returning the
value of vm_rssize is pointless -- it is never initialized to
anything other than 0.
2001-04-25 18:09:52 +00:00
thorpej
773ed79e5b Add a comment describing a problem. 2001-04-25 14:59:44 +00:00
thorpej
1c3a62e066 Sprinkle pmap_update() calls after calls to:
- pmap_enter()
- pmap_remove()
- pmap_protect()
- pmap_kenter_pa()
- pmap_kremove()
as described in pmap(9).

These calls are relatively conservative.  It may be possible to
optimize these a little more.
2001-04-24 04:30:50 +00:00
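
A hedged fragment of the pattern pmap(9) describes (placeholder arguments, not a complete compilation unit): the listed operations may be deferred inside the pmap, so callers flush them with pmap_update().

    void
    map_one_page(pmap_t pmap, vaddr_t va, paddr_t pa, vm_prot_t prot)
    {
            (void)pmap_enter(pmap, va, pa, prot, 0);    /* may be deferred */
            pmap_update(pmap);                          /* make it visible */
    }

    void
    unmap_kernel_range(vaddr_t va, vsize_t len)
    {
            pmap_kremove(va, len);
            pmap_update(pmap_kernel());
    }
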
thorpej
0e325bb097 Some spring cleaning. 2001-04-24 00:19:00 +00:00
thorpej
55044638aa Remove pmap_kenter_pgs(). It was never really adopted by
anything, and the interface itself wasn't as flexible as
callers would have probably liked.
2001-04-22 23:42:11 +00:00
thorpej
69abdbf60c Undo a misguided previous change to the pmap_update() API. 2001-04-22 23:19:26 +00:00
thorpej
cfb5c7ed9f Make pmap_virtual_space() a required pmap function, even on platforms
which have pmap_steal_memory().  This is to reduce the API differences
between pmaps that implement pmap_steal_memory() and pmaps which do
not.

Note that pmap_steal_memory() needs to adjust *vstartp and/or
*vendp only if it used addresses within the range provided to UVM
via the pmap_virtual_space() call.  I.e. it is not necessary to do
so in any current pmap_steal_memory() implementation.
2001-04-22 17:22:57 +00:00
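
A hedged sketch of the now-required function; virtual_avail and virtual_end stand for whatever variables the port's bootstrap code maintains for the managed kernel VA range.

    void
    pmap_virtual_space(vaddr_t *vstartp, vaddr_t *vendp)
    {
            *vstartp = virtual_avail;       /* first managed kernel VA */
            *vendp = virtual_end;           /* end of the managed range */
    }
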
thorpej
4738622712 Give pmap_update() an argument (a pmap_t) so that it knows which
pmap it should be updating.
2001-04-22 00:33:59 +00:00
thorpej
0115ec662c The pmap_update() call at the end of uvm_swapout_threads() is
completely useless.  Nuke it.
2001-04-21 17:38:24 +00:00
thorpej
9e0af4a217 Add a __predict_true() to an extremely common case. 2001-04-12 21:11:47 +00:00
thorpej
eed75ba69e In uvm_km_kmemalloc(), use the correct size for the uvm_unmap()
call if the allocation fails.

Problem pointed out by Alfred Perlstein <bright@wintelcom.net>,
who found a similar bug in FreeBSD.
2001-04-12 21:08:25 +00:00
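
A hedged illustration of the bug class, not the actual uvm_km_kmemalloc() code (the _sketch helpers are invented): the size is rounded up when the kernel VA is mapped, so the failure path must unmap that same rounded size, not the caller's original request.

    vaddr_t
    kmemalloc_sketch(struct vm_map *map, vsize_t size)
    {
            vaddr_t kva;

            size = round_page(size);        /* this is the size mapped below */
            if (uvm_map_sketch(map, &kva, size) != 0)
                    return 0;
            if (alloc_backing_pages_sketch(kva, size) != 0) {
                    /* Failure path: unmap with the same rounded size
                     * that was mapped, not the caller's original size. */
                    uvm_unmap(map, kva, kva + size);
                    return 0;
            }
            return kva;
    }
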
chuck
7074958a24 fix a locking problem noted by Jaromir Dolecek. also, add more comments
on locking rules to make the code easier to understand.  locking in
uvm_loananon still needs some work on fringe cases where an anon's page
is actually on loan from a uobj.
2001-04-10 00:53:21 +00:00
jdolecek
5bd42953f7 Upon Chuck Cranor's request, revert rev. 1.26. There is indeed a bug in the way
locking is done, but this fix is not the right way to fix it.
2001-04-09 06:21:03 +00:00
jdolecek
17c0a84170 Remove superfluous uvmfault_unlockmaps() in uvm_loan(); only call it
if uvm_loanentry() returned 0. Otherwise, the unlocking would already
have been done by the uvmfault_unlockall() call in uvm_loanentry().
Okay'ed by Chuck Silvers
2001-04-08 16:51:51 +00:00
chs
088989a557 undo the part of a previous commit which turned a check for faulting
on an "intrsafe" map into a KASSERT.  this situation can be caused by
an application accessing /dev/kmem.
2001-04-01 16:45:53 +00:00
chs
11fe9ca446 use ubc_winshift instead of ubc_winsize in pmaps to set up kernel
virtual space.  the latter isn't initialized yet when the value is needed.
fixes PR 12440.
2001-03-21 03:16:05 +00:00
simonb
d618ec62ad In sys_obreak(), the return value of atop() was being used to change
the process dsize for both positive and negative changes.  Since atop()
casts its result to a paddr_t (which is unsigned), negative changes in
process data size resulted in unrealistic dsizes being set.  Use
"dsize -= atop(-diff)" for a negative diffs.  Fixes the "Impossible
process sizes" mentioned on current-users.

Unsigned cast catch and much debugging help from Martin Laubach.
2001-03-19 02:25:33 +00:00
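
A self-contained userland sketch of the unsigned-cast pitfall and the fix quoted above (atop() and PAGE_SHIFT are simplified stand-ins):

    #include <stdio.h>

    typedef unsigned long paddr_t;
    #define PAGE_SHIFT      12
    #define atop(x)         ((paddr_t)(x) >> PAGE_SHIFT)    /* unsigned result */

    int
    main(void)
    {
            long dsize = 100;               /* data size, in pages */
            long diff = -8192;              /* shrinking by two pages */

            /* Wrong: atop(diff) turns the negative diff into a huge
             * unsigned page count before it reaches dsize. */
            /* dsize += atop(diff); */

            /* Right, per the fix above: negate first, then subtract. */
            if (diff < 0)
                    dsize -= atop(-diff);
            else
                    dsize += atop(diff);

            printf("%ld\n", dsize);         /* prints 98 */
            return 0;
    }
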
chs
c40daf0aed change uvm_winsize to uvm_winshift so that we can avoid division
by a non-constant value.
2001-03-19 00:29:03 +00:00
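
Storing the shift instead of the size keeps the size one expression away while divisions by the window size compile to shifts. A hedged sketch; ubc_winshift comes from the commits above, the macro names and default value are invented:

    int ubc_winshift = 13;                          /* e.g. 8 KB windows */

    #define UBC_WINSIZE()           (1 << ubc_winshift)
    #define UBC_WINDOW_INDEX(off)   ((off) >> ubc_winshift)  /* not "/ winsize" */
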
chs
edb041f0d1 return the real error from pgo_fault(). 2001-03-17 04:01:24 +00:00
chs
19accb3d77 return the real error from VOP_GETPAGES(). 2001-03-17 04:01:02 +00:00
chs
ac3bc537bd eliminate the KERN_* error codes in favor of the traditional E* codes.
the mapping is:

KERN_SUCCESS			0
KERN_INVALID_ADDRESS		EFAULT
KERN_PROTECTION_FAILURE		EACCES
KERN_NO_SPACE			ENOMEM
KERN_INVALID_ARGUMENT		EINVAL
KERN_FAILURE			various, mostly turn into KASSERTs
KERN_RESOURCE_SHORTAGE		ENOMEM
KERN_NOT_RECEIVER		<unused>
KERN_NO_ACCESS			<unused>
KERN_PAGES_LOCKED		<unused>
2001-03-15 06:10:32 +00:00
chs
dd82ad8e2c eliminate the VM_PAGER_* error codes in favor of the traditional E* codes.
the mapping is:

VM_PAGER_OK		        0
VM_PAGER_BAD		        <unused>
VM_PAGER_FAIL		        <unused>
VM_PAGER_PEND		        0 (see below)
VM_PAGER_ERROR		        EIO
VM_PAGER_AGAIN		        EAGAIN
VM_PAGER_UNLOCK		        EBUSY
VM_PAGER_REFAULT	        ERESTART

for async i/o requests, it used to be possible for the request to
be converted to sync, and the pager would return VM_PAGER_OK or VM_PAGER_PEND
to indicate whether the caller should perform post-i/o cleanup.
this is no longer allowed; pagers must now return 0 to indicate that
the async i/o was successfully started, and the caller never needs to
worry about doing the post-i/o cleanup.
2001-03-10 22:46:45 +00:00
chs
83d071a318 add UBC memory-usage balancing. we track the number of pages in use for
each of the basic types (anonymous data, executable image, cached files)
and prevent the pagedaemon from reusing a given page if that would reduce
the count of that type of page below a sysctl-settable minimum threshold.
the thresholds are controlled via three new sysctl tunables:
vm.anonmin, vm.vnodemin, and vm.vtextmin.  these tunables are the
percentages of pageable memory reserved for each usage, and we do not allow
the sum of the minimums to be more than 95% so that there's always some
memory that can be reused.
2001-03-09 01:02:10 +00:00
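
A hedged sketch of the 95% constraint described above; the variable names mirror the sysctl tunables, but the defaults and the helper are invented:

    #include <errno.h>

    static int anonmin = 10, vnodemin = 10, vtextmin = 5;   /* placeholder defaults */

    /* Accept a new minimum only if the three percentages still leave at
     * least 5% of pageable memory reclaimable. */
    static int
    set_min(int *which, int newval)
    {
            int sum = anonmin + vnodemin + vtextmin - *which + newval;

            if (newval < 0 || sum > 95)
                    return EINVAL;
            *which = newval;
            return 0;
    }
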
enami
79dbb12278 When shrinking file size, don't dispose of a page still in use. 2001-02-22 01:02:09 +00:00
chs
19b7b64642 clean up DIAGNOSTIC checks, use KASSERT(). 2001-02-18 21:19:08 +00:00
chs
7b76ca8254 in uvn_flush(), add a fast path for the case where the vnode has no pages.
update the comment above this function while I'm here.
2001-02-18 19:40:25 +00:00
chs
4808c1dfb5 in uvm_aio_aiodone(), don't mark the page(s) clean if the pageout
failed because we failed to acquire some resource needed to initiate
the pageout (such as failing to lock an indirect buffer) rather than
a hard i/o error.  in this case we just want to reactivate the page(s)
so that we'll try to write them again later.

while I'm here, clean up some DIAGNOSTIC code.
2001-02-18 19:26:50 +00:00
pk
dca7b5b472 SWAP_DUMPDEV,SWAP_OFF cases: make sure to release the vnode being operated on. 2001-02-12 11:50:50 +00:00
eeh
4589ac3292 When recycling a vm_map, resize it to the new process address space limits. 2001-02-11 01:34:23 +00:00
thorpej
b016744976 Don't uvm_deallocate() the address space in exit1(). The address
space is already torn down in uvmspace_free() when the vmspace
reference count reaches 0.  Move the shmexit() call into uvmspace_free().

Note that there is a beneficial side-effect of deferring the unmap
to uvmspace_free() -- on systems where TLB invalidations are
particularly expensive, the unmapping of the address space won't
have to cause TLB invalidations; uvmspace_free() is going to be
run in a context other than the exiting process's, so the "pmap is
active" test will evaluate to FALSE in the pmap module.
2001-02-10 05:05:27 +00:00
chs
4be5f47040 remove a debug printf() that has outlived its usefulness. 2001-02-08 06:43:05 +00:00
eeh
ec22628573 Move maxdmap and maxsmap where they belong and make them big enough. 2001-02-06 19:54:43 +00:00
eeh
4380259bc7 Specify a process' address space limits for uvmspace_exec(). 2001-02-06 17:01:51 +00:00
chs
43eb344e3f in uvn_flush(), interpret a "stop" value of 0 as meaning all pages at
offsets equal to or higher than "start".  use this in uvm_vnp_setsize()
instead of the vnode's size since there can be pages past EOF.
2001-02-06 10:53:23 +00:00
chs
4d5451090e in uvm_map_clean(), fix the case where the start offset is within the last
entry in the map.  the old code would walk around the end of the linked list,
through the header entry, and keep going from the first map entry until it
found a gap in the map, at which point it would return an error.  if the map
had no gaps then it would loop forever.  reported by k-abe@cs.utah.edu.
while I'm here, clean up this function a bit.

also, use MIN() instead of min(), since the latter takes arguments of
type "int" but we're passing it values of type "vaddr_t", which can be
a larger size.
2001-02-05 11:29:54 +00:00
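
A self-contained illustration of the MIN()-versus-min() point above: the function takes int arguments, so a 64-bit vaddr_t is silently truncated, while the type-preserving macro is not (LP64 assumed):

    #include <stdio.h>

    typedef unsigned long vaddr_t;          /* 64 bits on LP64 platforms */

    #define MIN(a, b)       ((a) < (b) ? (a) : (b))   /* keeps the operand type */

    static int
    min(int a, int b)                       /* old int-only helper */
    {
            return a < b ? a : b;
    }

    int
    main(void)
    {
            vaddr_t a = 0x100000000UL;
            vaddr_t b = 0x200000000UL;

            /* Both arguments are truncated to 32-bit ints (here: 0),
             * so min() answers 0; MIN() preserves vaddr_t. */
            printf("min()=%#lx MIN()=%#lx\n", (vaddr_t)min(a, b), MIN(a, b));
            return 0;
    }
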
mrg
6e26ebea51 allow ubchist to be printed from the uvmhist-merging uvm_hist() 2001-02-04 10:55:58 +00:00