Commit Graph

573 Commits

Author SHA1 Message Date
simonb 82649768b7 Change some unsigned int variables and parameters to plain ints so
that all usages of those agree on unsigned vs. signed.
2001-11-06 06:31:06 +00:00
simonb 819bb532e6 Remove some variables that are set but never used. 2001-11-06 06:28:22 +00:00
chs 6e1dd2fa31 add an assert and rename some variables. 2001-11-06 05:44:25 +00:00
chs d8cbdbb0da in uvm_exit(), don't bother to unwire the uarea before we free it;
the pages will be freed anyway.
2001-11-06 05:34:42 +00:00
chs 07d2ec83fe don't call pmap_copy() from uvmspace_fork().
a new process is very likely to call execve() immediately after fork(),
so most of the time copying the pmap mappings is wasted effort.
2001-11-06 05:27:17 +00:00
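
As a hedged illustration (not the actual diff; the entry/field names are assumed from the vm_map structures of the era), the change amounts to dropping the eager copy:

    /* Sketch only -- uvmspace_fork() after copying a map entry: */
    #if 0	/* old: eagerly duplicate the parent's MMU mappings */
    	pmap_copy(new_map->pmap, old_map->pmap, new_entry->start,
    	    new_entry->end - new_entry->start, old_entry->start);
    #endif	/* new: omitted; pages fault in lazily, and an execve()
    	 * right after fork() throws the address space away anyway */
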
chs 550caf0ce3 allow SWAP_GETDUMPDEV for all users.
use {LIST,TAILQ}_FOREACH where appropriate.
2001-11-01 03:49:30 +00:00
thorpej f67e15c839 uvm_map_protect(): Don't allow VM_PROT_EXECUTE to be set on entries
(either the current protection or the max protection) that reference
vnodes associated with a file system mounted with the NOEXEC option.

uvm_mmap(): Don't allow PROT_EXEC mappings to be established of vnodes
which are associated with a file system mounted with the NOEXEC option.
2001-10-30 19:05:26 +00:00
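
A hedged sketch of the policy being described; the helper and its placement are hypothetical, while MNT_NOEXEC, VM_PROT_EXECUTE, and the vnode/mount fields are the standard ones:

    #include <sys/param.h>
    #include <sys/errno.h>
    #include <sys/mount.h>
    #include <sys/vnode.h>
    #include <uvm/uvm_extern.h>

    /* Refuse executable mappings of vnodes on a noexec filesystem. */
    static int
    check_noexec(struct vnode *vp, vm_prot_t prot, vm_prot_t maxprot)
    {
    	if (((prot | maxprot) & VM_PROT_EXECUTE) != 0 &&
    	    (vp->v_mount->mnt_flag & MNT_NOEXEC) != 0)
    		return (EACCES);
    	return (0);
    }
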
thorpej a2cd7623d4 Correct a comment. 2001-10-30 18:52:17 +00:00
thorpej e8ee04475d - Add a new vnode flag VEXECMAP, which indicates that a vnode has
executable mappings.  Stop overloading VTEXT for this purpose (VTEXT
  also has another meaning).
- Rename vn_marktext() to vn_markexec(), and use it when executable
  mappings of a vnode are established.
- In places where we want to set VTEXT, set it in v_flag directly, rather
  than making a function call to do this (it no longer makes sense to
  use a function call, since we no longer overload VTEXT with VEXECMAP's
  meaning).

VEXECMAP suggested by Chuq Silvers.
2001-10-30 15:32:01 +00:00
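
A minimal sketch of the renamed helper, assuming it only sets the new flag (any locking or page accounting is elided):

    /* vn_markexec(): note that a vnode now has executable mappings. */
    void
    vn_markexec(struct vnode *vp)
    {
    	vp->v_flag |= VEXECMAP;
    }
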
thorpej 7285b2c290 uvm_mmap(): If a vnode mapping is established with PROT_EXEC, mark the
vnode as VTEXT.

uvm_map_protect(): When VM_PROT_EXECUTE is added to a VA range, mark
all the vnodes mapped by the range as VTEXT.
2001-10-29 23:06:03 +00:00
chs dcd9e4a1ee add some missing spinlocks. 2001-10-21 00:04:42 +00:00
chs 4b887dad17 it is with great chagrin that I must fix yet another 64-bit math bug. 2001-10-16 05:56:23 +00:00
chs 1c97701b8b fix an uninitialized-variable problem in an error case.
pointed out by Simon Burge.
2001-10-15 00:37:51 +00:00
christos 7e19baba28 protect against traditional macro expansion. 2001-10-03 13:32:23 +00:00
chs 3aea6d69ad skip the MADV_SEQUENTIAL processing if we refault. fixes PR 14060. 2001-10-03 05:17:58 +00:00
chs 0c3dfee2f8 skip the swap-out code if there's no swap space configured.
avoid some hangs in low-memory situations.
2001-09-30 02:57:34 +00:00
chs 80373b7e54 don't depend on other headers to include sys/proc.h for us. 2001-09-28 11:59:51 +00:00
chs 365f4c4313 change the names of the arguments to uvn_put() to match their usage. 2001-09-26 07:23:51 +00:00
chs e37c6bf037 move call to pool_drain() outside the pageq lock. 2001-09-26 07:08:41 +00:00
chs a467bddfdc bump the rusage counter for "swaps" when we swap out a process.
addresses PR 6170.
2001-09-23 07:10:08 +00:00
chs 2adcba997b make pmap_resident_count() non-optional. 2001-09-23 06:35:30 +00:00
sommerfeld cc8633edd3 VOP_PUTPAGES must release the uobj's lock for us, so ensure it's locked
beforehand and unlocked afterwards using LOCK_ASSERT().
2001-09-22 22:33:16 +00:00
jdolecek 8573719e3d add a new UVM_LOAN_WIRED flag - memory pages loaned in the TOPAGE case
are wired only if this flag is present (i.e. they are no longer wired by default).
loaned pages are unloaned via the new uvm_unloan(); uvm_unloananon() and
uvm_unloanpage() are no longer exported.
adjust uvm_unloanpage() to unwire the pages if UVM_LOAN_WIRED is specified.
mark uvm_loanuobj() and uvm_loanzero() static in the function implementations as well.

kern/sys_pipe.c: uvm_unloanpage() --> uvm_unloan()
2001-09-22 05:58:04 +00:00
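
A hedged usage sketch (argument lists abbreviated; MAXPAGES and the buffer use are illustrative, not from the tree):

    struct vm_page *pages[MAXPAGES];	/* MAXPAGES: hypothetical bound */
    int npages = len >> PAGE_SHIFT;
    int error;

    error = uvm_loan(map, start, len, pages, UVM_LOAN_TOPAGE | UVM_LOAN_WIRED);
    if (error == 0) {
    	/* use the wired pages, e.g. as a pipe direct-write buffer ... */
    	uvm_unloan(pages, npages, UVM_LOAN_TOPAGE | UVM_LOAN_WIRED);
    }
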
chs a548bfb584 add an assert. 2001-09-21 07:57:35 +00:00
chs 20a658f0ab work around a swap-space/extent performance problem which causes
long pauses when processes with lots of swapped-out pages exit.
2001-09-19 03:41:46 +00:00
chs 64c6d1d2dc a whole bunch of changes to improve performance and robustness under load:
- remove special treatment of pager_map mappings in pmaps.  this is
   required now, since I've removed the globals that expose the address range.
   pager_map now uses pmap_kenter_pa() instead of pmap_enter(), so there's
   no longer any need to special-case it.
 - eliminate struct uvm_vnode by moving its fields into struct vnode.
 - rewrite the pageout path.  the pager is now responsible for handling the
   high-level requests instead of only getting control after a bunch of work
   has already been done on its behalf.  this will allow us to UBCify LFS,
   which needs tighter control over its pages than other filesystems do.
   writing a page to disk no longer requires making it read-only, which
   allows us to write wired pages without causing all kinds of havoc.
 - use a new PG_PAGEOUT flag to indicate that a page should be freed
   on behalf of the pagedaemon when it's unlocked.  this flag is very similar
   to PG_RELEASED, but unlike PG_RELEASED, PG_PAGEOUT can be cleared if the
   pageout fails due to e.g. an indirect-block buffer being locked.
   this allows us to remove the "version" field from struct vm_page,
   and together with shrinking "loan_count" from 32 bits to 16,
   struct vm_page is now 4 bytes smaller.
 - no longer use PG_RELEASED for swap-backed pages.  if the page is busy
   because it's being paged out, we can't release the swap slot to be
   reallocated until that write is complete, but unlike with vnodes we
   don't keep a count of in-progress writes so there's no good way to
   know when the write is done.  instead, when we need to free a busy
   swap-backed page, just sleep until we can get it busy ourselves.
 - implement a fast-path for extending writes which allows us to avoid
   zeroing new pages.  this substantially reduces cpu usage.
 - encapsulate the data used by the genfs code in a struct genfs_node,
   which must be the first element of the filesystem-specific vnode data
   for filesystems which use genfs_{get,put}pages().
 - eliminate many of the UVM pagerops, since they aren't needed anymore
   now that the pager "put" operation is a higher-level operation.
 - enhance the genfs code to allow NFS to use the genfs_{get,put}pages
   instead of a modified copy.
 - clean up struct vnode by removing all the fields that used to be used by
   the vfs_cluster.c code (which we don't use anymore with UBC).
 - remove kmem_object and mb_object since they were useless.
   instead of allocating pages to these objects, we now just allocate
   pages with no object.  such pages are mapped in the kernel until they
   are freed, so we can use the mapping to find the page to free it.
   this allows us to remove splvm() protection in several places.

The sum of all these changes improves write throughput on my
decstation 5000/200 to within 1% of the rate of NetBSD 1.5
and reduces the elapsed time for "make release" of a NetBSD 1.5
source tree on my 128MB pc to 10% less than a 1.5 kernel took.
2001-09-15 20:36:31 +00:00
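
Of the genfs changes above, the genfs_node embedding rule is easy to show in code; a sketch with a hypothetical filesystem (header and field names assumed):

    /*
     * Per-vnode data for a filesystem using genfs_{get,put}pages():
     * the genfs_node must be the first member, so the genfs code can
     * treat the vnode's v_data as a struct genfs_node *.
     */
    struct examplefs_node {			/* hypothetical fs */
    	struct genfs_node ex_gnode;	/* must come first */
    	int ex_flags;			/* fs-specific fields follow */
    };
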
chris 0e7661f023 Update pmap_update() to take the pmap being updated as an argument.
This will allow improvements to the pmaps so that they can more easily defer expensive operations, e.g. TLB/cache flushes, until the last possible moment.

Currently this is a no-op on most platforms, so they should see no difference.

Reviewed by Jason.
2001-09-10 21:19:08 +00:00
chs 2133049a7c create a new pool for map entries, allocated from kmem_map instead of
kernel_map.  use this instead of the static map entries when allocating
map entries for kernel_map.  this greatly reduces the number of static
map entries used and should eliminate the problems with running out.
2001-09-09 19:38:22 +00:00
lukem 53156d96d0 let user know current value of MAX_KMAPENT in panic 2001-09-07 00:50:54 +00:00
chuck 2dec1a929d handle a locking problem where the second (or later) call in the loanentry
loop returns 0.   loanentry was returning >0, but was unlocking the maps
(because of the zero).   reworked to avoid this.  problem reported by
chuck silvers.   also clarify a comment that jdolecek asked about.
2001-08-27 02:34:29 +00:00
chs a65671c2a9 don't mess with vnode holds or buffer lists for swap i/os.
fixes problems with leaked vnode holds.
2001-08-26 00:43:53 +00:00
chs ed1e153702 use the correct symbol for multi-include protection. 2001-08-25 20:37:46 +00:00
wiz c52d355d71 "wierd" is weird. 2001-08-20 12:20:01 +00:00
chs 2233d77cac when fetching an object page to loan out, do so synchronously. 2001-08-18 05:51:44 +00:00
chs 2c441082d4 allow mappings of VBLK vnodes. 2001-08-17 05:53:02 +00:00
chs 37f6c5155d call VOP_MMAP() before allowing mappings of vnodes to allow
filesystems which do not support memory mapped access to cause
mmap() of their vnodes to fail.
2001-08-17 05:52:46 +00:00
chs e9fbc91f95 user maps are always pageable. 2001-08-16 01:37:50 +00:00
matt cce919e025 Don't include <machine/pmap.h> and <machine/vmparam.h> if _KERNEL isn't
defined.  Include them explicitly in the few kvm_arch.c that need them.
2001-08-05 03:33:15 +00:00
thorpej ae69446f0f Back out previous -- christos needs to update his lint(1). 2001-07-25 23:05:04 +00:00
christos 8228035453 fix non-portable bitmap warning. 2001-07-25 22:41:10 +00:00
wiz a9356936b4 seperate -> separate 2001-07-22 13:33:58 +00:00
thorpej cbf41a143a bzero -> memset 2001-07-18 16:43:09 +00:00
matt f300898396 Add support for kern.maxphys, vm.maxslp, vm.uspace (the latter two for ps). 2001-07-14 06:36:01 +00:00
thorpej c3cd2c3cfc Rather than using u_shorts, use u_ints and bitfields in the vm_page. This
provides us more flexibility with pageq-locked fields, and clarifies the
locking semantics for platforms which cannot address shorts.

From Ross Harvey.
2001-06-28 00:26:38 +00:00
thorpej 8217ab697c Since a page can be on only one of ACTIVE or INACTIVE queues at
any given time, turn two consecutive if statements into an if-else-if
construct.
2001-06-27 23:57:16 +00:00
thorpej 27f66d3bf7 Macro'ize the code that checks the free and inactive thresholds and
wakes the pagedaemon.
2001-06-27 21:18:34 +00:00
thorpej 9917ae5683 G/c a comment that no longer applies. 2001-06-27 18:52:10 +00:00
thorpej a279b0973b Reduce some complexity in the fault path -- Rather than maintaining
an spl-protected "interrupt safe map" list, simply require that callers
of uvm_fault() never call us in interrupt context (MD code must make
the assertion), and check for interrupt-safe maps in uvmfault_lookup()
before we lock the map.
2001-06-26 17:55:14 +00:00
thorpej 78ae4127bb Note that uvm_fault() must NEVER EVER EVER be called in interrupt
context.
2001-06-26 17:27:31 +00:00
chs 2d06d7932a don't wait for memory in uao_set_swslot() since we're holding spinlocks;
instead return -1.  adjust callers to handle this new error return.
fixes PR 13194.
2001-06-23 20:52:03 +00:00
chs 88cc5dd4b8 clean up the transient error case in uvm_pager_put(). 2001-06-23 20:47:44 +00:00
chs 58079906be don't use the list pointers after we take an object off its list. 2001-06-22 06:20:24 +00:00
thorpej 80cc38a1af Fix a partial construction problem that can cause race conditions
between creation of a file descriptor and close(2) when using kernel
assisted threads.  What we do is stick descriptors in the table, but
mark them as "larval".  This causes essentially everything to treat
it as a non-existent descriptor, except for fdalloc(), which sees a
filled slot so that it won't (incorrectly) allocate it again.  When
a descriptor is fully constructed, the code that has constructed it
marks it as "mature" (which actually clears the "larval" flag), and
things continue to work as normal.

While here, gather all the code that gets a descriptor from the table
into a fd_getfile() function, and call it, rather than having the
same (sometimes incorrect) code copied all over the place.
2001-06-14 20:32:41 +00:00
chs 7e00a527ea work around an overflow problem in uvm_fault_wire().
from Eduardo Horvath and Simon Burge.
2001-06-14 05:12:56 +00:00
simonb 85ded6700b Add a sanity check for ubc_winshift. 2001-06-13 06:06:19 +00:00
mrg b0b1999665 uvm_coredump32() moved into compat/netbsd32. 2001-06-06 21:28:51 +00:00
chs 821ec03ed9 replace vm_map{,_entry}_t with struct vm_map{,_entry} *. 2001-06-02 18:09:08 +00:00
lukem d84d2c6c85 add missing #include "opt_kgdb.h" 2001-05-30 15:24:23 +00:00
mrg 67afbd6270 use _KERNEL_OPT 2001-05-30 11:57:16 +00:00
chs 11a9651c8f replace vm_page_t with struct vm_page *. 2001-05-26 21:27:10 +00:00
chs 118ddca24a replace {simple_,}lock{_data,}_t with struct {simple,}lock {,*}. 2001-05-26 16:32:40 +00:00
chs 3845302904 remove trailing whitespace. 2001-05-25 04:06:11 +00:00
ross 892627dd05 Merge the swap-backed and object-backed inactive lists. 2001-05-22 00:44:44 +00:00
ross d8840def52 Expand on the locking notes comment with a XXX warning about u_short fields. 2001-05-16 00:16:01 +00:00
ross 91646d1aa5 Eliminate lhs cast (incorrectly accepted by gcc) 2001-05-15 09:04:00 +00:00
thorpej e2a791df22 Use pool_init() rather than pool_create(). 2001-05-09 23:20:59 +00:00
fvdl defa9bf05f Avoid potential cases of sleeping while holding a spinlock. Pay attention
to SWF_FAKE when finding a swap device. GC swapdrum_add; it was only
a few lines long and called once, so just inline the code there.
2001-05-09 19:21:02 +00:00
thorpej 0b8c6fcc77 Fix a silly mistake I made when reworking the uvm inactive list
some time ago.  The mistake was to check that the page was not
referenced since the last active scan before moving it to inactive.
Now we just clear the reference bit and move it to inactive (which is where
the second clock hand sweep occurs).
2001-05-07 22:01:28 +00:00
thorpej 04f36fcb9e Remove a comment which is no longer true. From Artur Grabowski. 2001-05-06 20:12:09 +00:00
ross 6b9d94cd8c Fix overflow errors in brk(2). 2001-05-06 04:32:08 +00:00
thorpej 31fafb678f Support dynamic sizing of the page color bins. We also support
dynamically re-coloring pages; as machine-dependent code discovers
the size of the system's caches, it may call uvm_page_recolor() with
the new number of colors to use.  If the new number of colors is
smaller than (or equal to) the current number, then uvm_page_recolor()
is a no-op.

The system defaults to one bucket if machine-dependent code does not
initialize uvmexp.ncolors before uvm_page_init() is called.

Note that the number of color bins should be initialized to something
reasonable as early as possible -- for many early memory allocations,
we live with the consequences of the page choice for the lifetime of
the boot.
2001-05-02 01:22:19 +00:00
thorpej 01e2971ba2 Add the number of page colors to uvmexp. 2001-05-01 19:36:56 +00:00
enami 1132ef7f20 Use simple do {} while () loop instead of for {} loop + extra test/variable. 2001-05-01 14:02:56 +00:00
enami d211385f8a Fix second level indentation in recent commit. 2001-05-01 13:42:34 +00:00
thorpej 220bcf69ac Garbage-collect a comment that has not been applicable since Mach. 2001-05-01 03:01:18 +00:00
thorpej cf67ac7122 Per discussion w/ chuck and chuck, restructure the md page stuff
to use a structure called "vm_page_md", and use __HAVE_VM_PAGE_MD
and __HAVE_PMAP_PHYSSEG.
2001-05-01 02:19:13 +00:00
thorpej 2b27ac7a99 Add a VM_MDPAGE_MEMBERS macro that defines pmap-specific data for
each vm_page structure.  Add a VM_MDPAGE_INIT() macro to init this
data when pages are initialized by UVM.  These macros are mandatory,
but ports may #define them to nothing if they are not needed/used.

This deprecates struct pmap_physseg.  As a transitional measure,
allow a port to #define PMAP_PHYSSEG so that it can continue to
use it until its pmap is converted to use VM_MDPAGE_MEMBERS.

Use all this stuff to eliminate a lot of extra work in the Alpha
pmap module (it's smaller and faster now).  Changes to other pmap
modules will follow.
2001-04-29 22:44:31 +00:00
thorpej cda7baa0d5 Implement page coloring, using a round-robin bucket selection
algorithm (Solaris calls this "Bin Hopping").

This implementation currently relies on MD code to define a
constant defining the number of buckets.  This will change
reasonably soon (MD code will be able to dynamically size
the bucket array).
2001-04-29 04:23:20 +00:00
marcus d317b08ca6 STDC cleanup: extra token not allowed after #endif. 2001-04-27 00:14:47 +00:00
thorpej 93b0af8f60 pmap_resident_count() always exists. Besides, returning the
value of vm_rssize is pointless -- it is never initialized to
anything other than 0.
2001-04-25 18:09:52 +00:00
thorpej 773ed79e5b Add a comment describing a problem. 2001-04-25 14:59:44 +00:00
thorpej 1c3a62e066 Sprinkle pmap_update() calls after calls to:
- pmap_enter()
- pmap_remove()
- pmap_protect()
- pmap_kenter_pa()
- pmap_kremove()
as described in pmap(9).

These calls are relatively conservative.  It may be possible to
optimize these a little more.
2001-04-24 04:30:50 +00:00
thorpej 0e325bb097 Some spring cleaning. 2001-04-24 00:19:00 +00:00
thorpej 55044638aa Remove pmap_kenter_pgs(). It was never really adopted by
anything, and the interface itself wasn't as flexible as
callers would have probably liked.
2001-04-22 23:42:11 +00:00
thorpej 69abdbf60c Undo a misguided previous change to the pmap_update() API. 2001-04-22 23:19:26 +00:00
thorpej cfb5c7ed9f Make pmap_virtual_space() a required pmap function, even on platforms
which have pmap_steal_memory().  This is to reduce the API differences
between pmaps that implement pmap_steal_memory() and pmaps which do
not.

Note that pmap_steal_memory() needs to adjust *vstartp and/or
*vendp only if it used addresses within the range provided to UVM
via the pmap_virtual_space() call.  I.e. it is not necessary to do
so in any current pmap_steal_memory() implementation.
2001-04-22 17:22:57 +00:00
thorpej 4738622712 Give pmap_update() an argument (a pmap_t) so that it knows which
pmap it should be updating.
2001-04-22 00:33:59 +00:00
thorpej 0115ec662c The pmap_update() call at the end of uvm_swapout_threads() is
completely useless.  Nuke it.
2001-04-21 17:38:24 +00:00
thorpej 9e0af4a217 Add a __predict_true() to an extremely common case. 2001-04-12 21:11:47 +00:00
thorpej eed75ba69e In uvm_km_kmemalloc(), use the correct size for the uvm_unmap()
call if the allocation fails.

Problem pointed out by Alfred Perlstein <bright@wintelcom.net>,
who found a similar bug in FreeBSD.
2001-04-12 21:08:25 +00:00
chuck 7074958a24 fix locking problem noted by Jaromir Dolecek. also, add more comments
on locking rules to make code easier to understand.   locking in
uvm_loananon still needs some work on fringe cases where anon's page
is actually on loan from a uobj.
2001-04-10 00:53:21 +00:00
jdolecek 5bd42953f7 Upon Chuck Cranor's request, revert rev. 1.26. There is indeed a bug in the way
locking is done, but this fix is not the right way to fix it.
2001-04-09 06:21:03 +00:00
jdolecek 17c0a84170 Remove superfluous uvmfault_unlockmaps() in uvm_loan(); only call it
if uvm_loanentry() returned 0. Otherwise, the unlocking would already
have been done by the uvmfault_unlockall() call in uvm_loanentry().
Okay'ed by Chuck Silvers
2001-04-08 16:51:51 +00:00
chs 088989a557 undo the part of a previous commit which turned a check for faulting
on an "intrsafe" map into a KASSERT.  this situation can be caused by
an application accessing /dev/kmem.
2001-04-01 16:45:53 +00:00
chs 11fe9ca446 use ubc_winshift instead of ubc_winsize in pmaps to set up kernel
virtual space.  the latter isn't initialized yet when the value is needed.
fixes PR 12440.
2001-03-21 03:16:05 +00:00
simonb d618ec62ad In sys_obreak(), the return value of atop() was being used to change
the process dsize for both positive and negative changes.  Since atop()
casts its result to a paddr_t (which is unsigned), negative changes in
process data size resulted in unrealistic dsizes being set.  Use
"dsize -= atop(-diff)" for a negative diffs.  Fixes the "Impossible
process sizes" mentioned on current-users.

Unsigned cast catch and much debugging help from Martin Laubach.
2001-03-19 02:25:33 +00:00
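
The fix reads best as code (vm_dsize is the field from struct vmspace; surrounding logic simplified): atop() yields an unsigned page count, so a negative byte diff must be negated before conversion.

    if (diff >= 0)
    	vm->vm_dsize += atop(diff);
    else
    	vm->vm_dsize -= atop(-diff);	/* atop(diff) would wrap to a huge value */
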
chs c40daf0aed change uvm_winsize to uvm_winshift so that we can avoid division
by a non-constant value.
2001-03-19 00:29:03 +00:00
chs edb041f0d1 return the real error from pgo_fault(). 2001-03-17 04:01:24 +00:00
chs 19accb3d77 return the real error from VOP_GETPAGES(). 2001-03-17 04:01:02 +00:00
chs ac3bc537bd eliminate the KERN_* error codes in favor of the traditional E* codes.
the mapping is:

KERN_SUCCESS			0
KERN_INVALID_ADDRESS		EFAULT
KERN_PROTECTION_FAILURE		EACCES
KERN_NO_SPACE			ENOMEM
KERN_INVALID_ARGUMENT		EINVAL
KERN_FAILURE			various, mostly turn into KASSERTs
KERN_RESOURCE_SHORTAGE		ENOMEM
KERN_NOT_RECEIVER		<unused>
KERN_NO_ACCESS			<unused>
KERN_PAGES_LOCKED		<unused>
2001-03-15 06:10:32 +00:00