Commit Graph

621 Commits

Author SHA1 Message Date
enami 79dbb12278 When shrinking file size, don't dispose of a page still in use. 2001-02-22 01:02:09 +00:00
chs 19b7b64642 clean up DIAGNOSTIC checks, use KASSERT(). 2001-02-18 21:19:08 +00:00
chs 7b76ca8254 in uvn_flush(), add a fast path for the case where the vnode has no pages.
update the comment above this function while I'm here.
2001-02-18 19:40:25 +00:00
chs 4808c1dfb5 in uvm_aio_aiodone(), don't mark the page(s) clean if the pageout
failed because we failed to acquire some resource needed to initiate
the pageout (such as failing to lock an indirect buffer) rather than
a hard i/o error.  in this case we just want to reactivate the page(s)
so that we'll try to write them again later.

while I'm here, clean up some DIAGNOSTIC code.
2001-02-18 19:26:50 +00:00
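
A minimal C sketch of the distinction this commit draws; every name here is an illustrative stand-in for the real UVM page-queue interfaces, not the actual code:

    /* stand-in page type and queue helper, for illustration only */
    struct page { int clean; };

    static void
    page_reactivate(struct page *pg)
    {
        /* put the page back on the active queue for a later retry */
        (void)pg;
    }

    static void
    aio_done(struct page *pg, int io_error, int soft_failure)
    {
        if (io_error == 0)
            pg->clean = 1;           /* pageout really completed */
        else if (soft_failure)
            page_reactivate(pg);     /* i/o never started; leave dirty, retry later */
        /* else: hard i/o error, handled separately */
    }
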
pk dca7b5b472 SWAP_DUMPDEV,SWAP_OFF cases: make sure to release the vnode being operated on. 2001-02-12 11:50:50 +00:00
eeh 4589ac3292 When recycling a vm_map, resize it to the new process address space limits. 2001-02-11 01:34:23 +00:00
thorpej b016744976 Don't uvm_deallocate() the address space in exit1(). The address
space is already torn down in uvmspace_free() when the vmspace
reference count reaches 0.  Move the shmexit() call into uvmspace_free().

Note that there is a beneficial side-effect of deferring the unmap
to uvmspace_free() -- on systems where TLB invalidations are
particularly expensive, the unmapping of the address space won't
have to cause TLB invalidations; uvmspace_free() is going to be
run in a context other than the exiting process's, so the "pmap is
active" test will evaluate to FALSE in the pmap module.
2001-02-10 05:05:27 +00:00
chs 4be5f47040 remove a debug printf() that has outlived its usefulness. 2001-02-08 06:43:05 +00:00
eeh ec22628573 Move maxdmap and maxsmap where they belong and make them big enough. 2001-02-06 19:54:43 +00:00
eeh 4380259bc7 Specify a process' address space limits for uvmspace_exec(). 2001-02-06 17:01:51 +00:00
chs 43eb344e3f in uvn_flush(), interpret a "stop" value of 0 as meaning all pages at
offsets equal to or higher than "start".  use this in uvm_vnp_setsize()
instead of the vnode's size since there can be pages past EOF.
2001-02-06 10:53:23 +00:00
chs 4d5451090e in uvm_map_clean(), fix the case where the start offset is within the last
entry in the map.  the old code would walk around the end of the linked list,
through the header entry, and keep going from the first map entry until it
found a gap in the map, at which point it would return an error.  if the map
had no gaps then it would loop forever.  reported by k-abe@cs.utah.edu.
while I'm here, clean up this function a bit.

also, use MIN() instead of min(), since the latter takes arguments of
type "int" but we're passing it values of type "vaddr_t", which can be
a larger size.
2001-02-05 11:29:54 +00:00
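
The MIN()-versus-min() hazard is easy to show standalone (values and names illustrative):

    #include <stdio.h>
    #include <stdint.h>

    static int min_int(int a, int b) { return a < b ? a : b; }  /* old min() */
    #define MIN(a, b) ((a) < (b) ? (a) : (b))                   /* type-preserving */

    int
    main(void)
    {
        uint64_t start = 0x100000000ULL;   /* needs more than 32 bits, like a large vaddr_t */
        uint64_t end   = 0x100001000ULL;

        /* the conversion to int drops the high bits */
        printf("min_int: 0x%llx\n", (unsigned long long)min_int((int)start, (int)end));
        printf("MIN:     0x%llx\n", (unsigned long long)MIN(start, end));
        return 0;
    }
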
mrg 6e26ebea51 allow ubchist to be printed from the uvmhist-merging uvm_hist(). 2001-02-04 10:55:58 +00:00
mrg 0c151f32c8 add a KASSERT(pp) in the uvm_pagermapin() loop. 2001-02-04 10:55:12 +00:00
enami dda0d50982 Explicitly panic if we fail to allocate some memory during initialization. 2001-02-02 01:55:52 +00:00
thorpej 1779f8f71b Page scanner improvements, behavior is actually a bit more like
Mach VM's now.  Specific changes:
- Pages now need not have all of their mappings removed before being
  put on the inactive list.  They only need to have the "referenced"
  attribute cleared.  This makes putting pages onto the inactive list
  much more efficient.  In order to eliminate redundant clearings of
  "refrenced", callers of uvm_pagedeactivate() must now do this
  themselves.
- When checking the "modified" attribute for a page (for clearing
  PG_CLEAN), make sure to only do it if PG_CLEAN is currently set on
  the page (saves a potentially expensive pmap operation).
- When scanning the inactive list, if a page is referenced, reactivate
  it (this part was actually added in uvm_pdaemon.c,v 1.27).  This
works properly now that pages on the inactive list are allowed to
  have mappings.
- When scanning the inactive list and considering a page for freeing,
  remove all mappings, and then check the "modified" attribute if the
  page is marked PG_CLEAN.
- When scanning the active list, if the page was referenced since its
  last sweep by the scanner, don't deactivate it.  (This part was
  actually added in uvm_pdaemon.c,v 1.28.)

These changes greatly improve interactive performance during
moderate to high memory and I/O load.
2001-01-28 23:30:42 +00:00
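
A sketch of the new caller contract for uvm_pagedeactivate() from the first bullet; the pmap hook name matches NetBSD's, the rest is a schematic stand-in:

    /* stand-in declarations for the real structures and interfaces */
    struct vm_page;
    void pmap_clear_reference(struct vm_page *);
    void uvm_pagedeactivate(struct vm_page *);

    static void
    deactivate(struct vm_page *pg)
    {
        /*
         * Clearing "referenced" is now the caller's job, so redundant
         * clears can be avoided; uvm_pagedeactivate() itself just
         * moves the page to the inactive list.
         */
        pmap_clear_reference(pg);
        uvm_pagedeactivate(pg);
    }
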
thorpej 1cdff48674 Put the extern decl of uvm_vnodeops in uvm_object.h 2001-01-28 22:23:04 +00:00
thorpej 5849b86934 Use UVM_OBJ_IS_VNODE(). 2001-01-28 22:14:52 +00:00
thorpej d1c3f6bab3 Define a UVM_OBJ_IS_VNODE() macro to test if an object is a vnode. 2001-01-28 22:14:28 +00:00
thorpej 37247109d1 When considering a page for deactivation, check to see if the
page has been referenced since the last time it was considered.
If it was, don't deactivate the page.
2001-01-25 00:24:48 +00:00
mycroft 91a4c18e32 Put back the pmap_is_referenced() check from the original UVM code in the
inactive list scans.  Without this, the referenced bit was essentially ignored.
2001-01-25 00:10:03 +00:00
thorpej ad7259d7c6 Change uvm_analloc() to return a locked anon, update all callers,
and fix an anon locking protocol error in uvm_loanzero().
2001-01-23 02:27:39 +00:00
thorpej 13759f5310 Sprinkle some assertions:
amap_free(): Assert that the amap is locked.
amap_share_protect(): Assert that the amap is locked.
amap_wipeout(): Assert that the amap is locked.
uvm_anfree(): Assert that the anon has a reference count of 0 and is
              not locked.
uvm_anon_lockloanpg(): Assert that the anon is locked.
anon_pagein(): Assert that the anon is locked.
uvmfault_anonget(): Assert that the anon is locked.
uvm_pagealloc_strat(): Assert that the uobj or the anon is locked.

And fix the problems these have uncovered:
amap_cow_now(): Lock the new anon after allocating it, and unref and
                unlock it (rather than lock!) before freeing it in case
                of an error condition.  This should fix a problem reported
                by Dan Carosone using cdrecord on an i386 MP kernel.
uvm_fault(): Case1B -- Lock the new anon after allocating it, and unlock
             it later when we unlock the old anon.
             Case2 -- Lock the new anon after allocating it, and unlock
             it later by passing it to uvmfault_unlockall() (we set anon
             to NULL if we're not doing a promote fault).
2001-01-23 01:56:16 +00:00
thorpej f4395a4eae splimp() -> splvm() 2001-01-14 02:10:01 +00:00
pk f134ba4486 atop(): cast argument to `paddr_t' (instead of `u_long') to avoid
truncating the address.
2001-01-09 13:55:20 +00:00
chs f0ff6fc897 in uvn_flush(), when PGO_SYNCIO is specified then we should wait for
pending i/os to complete before returning even if PGO_CLEANIT is not
specified.  this fixes two races:

 (1) NFS write rpcs vs. setattr operations which truncate the file.
     if the truncate doesn't wait for pending writes to complete then
     a later write rpc completion can undo the effect of the truncate.
     this problem has been reported by several people.

 (2) write i/os in disk-based filesystem vs. the disk block being
     freed by a truncation, allocated to a new file, and written
     again with different data.  if the disk driver reorders the requests
     and does the second i/o first, the old data will clobber the new,
     corrupting the new file.  I haven't heard of anyone experiencing
     this problem yet, but it's fixed now anyway.
2001-01-08 06:21:13 +00:00
thorpej 4d4b2b5626 Nevermind that it's silly to include PROT_EXEC even if a vnode
doesn't have the exec bit set, we need to have PROT_EXEC set
in order for some expected mmap/mprotect behavior to work, so
do the last bit slightly differently: if udv_attach() fails, and
the protection (NOT maxprot) doesn't include PROT_EXEC, then clear
PROT_EXEC from maxprot and try udv_attach() again.

Sigh, mmap really needs to be rototilled.
2001-01-08 01:35:03 +00:00
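
A compilable sketch of that retry, with udv_attach() reduced to an extern stub and the flag value illustrative:

    #include <stddef.h>

    #define PROT_EXEC 0x04               /* illustrative value */

    void *udv_attach_stub(int maxprot);  /* stand-in for udv_attach() */

    int
    try_device_map(int prot, int maxprot)
    {
        if (udv_attach_stub(maxprot) != NULL)
            return 0;
        /* retry only when the caller did not explicitly ask for PROT_EXEC */
        if (prot & PROT_EXEC)
            return -1;
        maxprot &= ~PROT_EXEC;
        return udv_attach_stub(maxprot) != NULL ? 0 : -1;
    }
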
thorpej 781516b080 Only include PROT_EXEC in maxprot if the user specified PROT_EXEC
in the mmap() call.  maxprot is used to create device mappings,
and always including PROT_EXEC causes the mapping to fail on the Alpha
when mapping a non-RAM offset of /dev/mem (which may be sparse, so
instruction fetch from there is disallowed).
2001-01-07 06:16:46 +00:00
enami f306f72978 Use cast where appropriate to avoid integer overflow. 2001-01-04 06:07:18 +00:00
chs 1e651a1688 remove some more leftovers from Mach. 2000-12-28 08:24:55 +00:00
chs 89b005fc27 when we fail to allocate anons to represent new swap space,
just return an error rather than panicking.
2000-12-27 09:17:04 +00:00
chs 910b4f2e20 fix some types so that files larger than 4GB work. 2000-12-27 09:01:45 +00:00
chs de569051ad VOP_GETPAGES() returns an E* error code, not a VM_PAGER_* error code. 2000-12-27 04:44:42 +00:00
enami 6ff137de16 Place a name of extent in a struct swapdev instead of dynamically
allocating it.
2000-12-23 12:13:05 +00:00
enami 4e59adc1bb s/UBC_WINSIZE/ubc_winsize/g except the variable initialization. 2000-12-21 03:37:59 +00:00
chs fc03073896 expose the tunables ubc_nwins and ubc_winsize in uvm_param.h.
add the space used by UBC mappings to the initial PTE calculations
for pmaps that do that (mips and alpha).
2000-12-21 00:52:01 +00:00
chs 34a059b354 in uvn_flush(), don't deactivate busy pages. 2000-12-16 06:17:09 +00:00
chs cf25b3fa04 continue processing the inactive queue past the free target when
we're enforcing the limit on the number of vnode pages.
2000-12-13 17:03:32 +00:00
enami 4625dcde2e Use a single const char array instead of over 200 string constants. 2000-12-13 08:06:11 +00:00
chs 837f5c9bd6 we don't need VM_PROT_EXECUTE for UBC mappings. 2000-12-10 19:28:09 +00:00
chs cae7ac2e3a in uvm_pagermapin(), for now, don't pass the flag to pmap_enter()
which presets the page modified bit if the page is already initialized.
we don't actually want to modify such pages.
2000-12-09 23:26:27 +00:00
chs a8609aaac8 in uvn_findpage(), only increment the counter of vnode pages
if we succeed in allocating a page.

from Lars Heidieker <lars@heidieker.de> in PR 11636.
2000-12-06 03:37:30 +00:00
chs eeabe3f90d make sure that pages are on a paging queue before unlocking them. 2000-12-01 09:54:42 +00:00
chs 024f8bed4a add new uvmexp fields for uvmexp_print(). 2000-12-01 09:48:56 +00:00
simonb 33999a4224 Move uvm_pgcnt_vnode and uvm_pgcnt_anon into uvmexp (as vnodepages and
anonpages), and add vtextpages which is currently unused but will be
used to trace the number of pages used by vtext vnodes.
2000-11-30 11:04:43 +00:00
simonb 3c1b8a2b35 Add a vm.uvmexp2 sysctl that uses an ABI-safe 'struct uvmexp_sysctl'. 2000-11-29 09:52:18 +00:00
chs e9037d16c5 allow building without SOFTDEP by adding the pageiodone hook to bio_ops. 2000-11-27 18:26:38 +00:00
chs aeda8d3b77 Initial integration of the Unified Buffer Cache project. 2000-11-27 08:39:39 +00:00
chs c29a1b4461 allow ports to override PAGER_MAP_SIZE in machine/vmparam.h.
some ports (such as arm32) don't have enough KVA for the
increased default size once the UBC mapping is also present.
2000-11-27 08:19:50 +00:00
chs 1ec37ad27a use queue.h macros and KASSERT(). 2000-11-27 07:47:42 +00:00
nisimura 10571faa84 Introduce uvm_km_valloc_align() and use it to grab a process's USPACE
aligned on a USPACE boundary in kernel virtual address space.  It's beneficial
for the MIPS R4000's paired TLB entry design. 2000-11-27 04:36:40 +00:00
2000-11-27 04:36:40 +00:00
chs 2ed28d2c7a lots of cleanup:
use queue.h macros and KASSERT().
address amap offsets in pages instead of bytes.
make amap_ref() and amap_unref() take an amap, offset and length
  instead of a vm_map_entry_t.
improve whitespace and comments.
2000-11-25 06:27:59 +00:00
soren 2a6c823e89 Typo in comment. 2000-11-24 23:30:01 +00:00
chs b5142d6841 increase PAGER_MAP_SIZE to 16MB and move it to uvm_pager.h
since the alpha and mips pmaps use it.
2000-11-24 22:41:38 +00:00
chs f9fb6f5a55 g/c unused pager ops "asyncget" and "aiodone". 2000-11-24 20:34:01 +00:00
chs ccbcd7c873 use queue.h macros and other misc cleanup. 2000-11-24 18:54:31 +00:00
chs 55a751c9d5 add ddb commands "show uvmexp" and "show ncache".
the former used to be "call uvm_dump", the latter is new.
2000-11-24 07:25:50 +00:00
chs 0a54af033a cleanup: use queue.h macros and KASSERT(). 2000-11-24 07:07:27 +00:00
mrg 45d83e5996 add SWAP_GETDUMPDEV command support. 2000-11-17 11:39:39 +00:00
chs 1fd1a318cb in swap_off(), reverse the order of vrele() and VOP_CLOSE() so that
devices will actually be notified if this is the last close.
this allows raidframe swap devices to be marked clean.
also, move the corresponding vref() into swap_on() for symmetry
and improve some comments.
2000-11-13 14:50:55 +00:00
christos 413d7641a1 Give a hint to the user on why we failed. 2000-11-09 19:15:28 +00:00
ad 642267bcc7 Update for hashinit() change. 2000-11-08 14:28:12 +00:00
thorpej 0a2fa5320b Back out rev. 1.83 -- it's causing problems with some pmap
implementations, so we'll have to spend a little more time
working on the problem.
2000-10-16 23:17:54 +00:00
thorpej 76589fafd4 - uvmspace_share(): If p2 has a vmspace already, make sure to deactivate
it and free it as appropriate.  Activate p2's new address space once
  it references p1's.
- uvm_fork(): Make sure the child's vmspace is NULL before calling
  uvmspace_share() (the child doesn't have one already in this case).

These changes do not change the behavior for the current use of
uvmspace_share() (vfork(2)), but make it possible for an already
running process (such as a kernel thread) to properly attach to
another process's address space.
2000-10-11 17:27:58 +00:00
thorpej 47a2016cdc - Change SAVE_HINT() to take a "check" value. This value is compared
to the contents of the hint in the map, and the hint saved in the
  map only if the two values match.  When an unconditional save is
  required, the "check" value passed should be map->hint (and the
  compiler will optimize the test away).  When deleting a map entry,
  the new SAVE_HINT() will only change the hint if the entry being
  deleted was the hint value (thus preserving any meaningful hint
  that may have been there previously, rather than stomping on it).
- Add a missing hint update when deleting the map entry in
  uvm_map_entry_unlink().  This is the fix for kern/11125, from
  ITOH Yasufumi <itohy@netbsd.org>.
2000-10-11 17:21:11 +00:00
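
A pared-down sketch of the conditional SAVE_HINT(); the real macro also handles the map's locking, omitted here:

    struct vm_map_entry;
    struct vm_map { struct vm_map_entry *hint; };

    /* only update the hint when it still holds the expected value */
    #define SAVE_HINT(map, check, value)        \
        do {                                    \
            if ((map)->hint == (check))         \
                (map)->hint = (value);          \
        } while (0)

    /* unconditional save: pass the current hint as the check value    */
    /*   SAVE_HINT(map, (map)->hint, entry);                           */
    /* entry deletion: only clear the hint if it pointed at the dying  */
    /* entry, preserving any other meaningful hint                     */
    /*   SAVE_HINT(map, dying, prev_of_dying);                         */
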
mrg 4f75145ec1 s/vm/uvm/ in a bunch of error messages. 2000-10-05 00:37:50 +00:00
mrg dd521daa8b clean up a comment. 2000-10-03 20:50:49 +00:00
eeh 1ecf6779be Add support for variable end of user stacks needed to support COMPAT_NETBSD32:
`struct vmspace' has a new field `vm_minsaddr' which is the user TOS.

	PS_STRINGS is deprecated in favor of curproc->p_pstr which is derived
	from `vm_minsaddr'.

	Bump the kernel version number.
2000-09-28 19:05:06 +00:00
enami 9308eaef21 splstatclock is insufficient to protect run queues. Acquire scheduler
lock instead.
2000-09-23 00:43:10 +00:00
thorpej b008f5f25a Make PMAP_PAGEIDLEZERO() return a boolean value. FALSE indicates
that the page being zero'd was not completed and that page zeroing
should be aborted.  This may be used by machine-dependent code doing
slow page access to reduce the latency of running a process that has
become runnable while in the middle of doing a slow page zero.
2000-09-21 17:46:04 +00:00
thorpej 72a24b4eae Add an align argument to uvm_map() and some callers of that
routine.  Works similarly to pmap_prefer(), but allows callers
to specify a minimum power-of-two alignment of the region.
How we ever got along without this for so long is beyond me.
2000-09-13 15:00:15 +00:00
chs 11d2a68c1e fix uvm_coredump32() just like uvm_coredump(). 2000-09-07 05:01:43 +00:00
chs db3465f65b in uvm_coredump(), avoid dumping parts of the stack multiple times
while skipping parts of the stack that haven't been used.
pointed out by SAITOH Masanobu <masanobu@iij.ad.jp>.
2000-08-24 06:09:25 +00:00
thorpej 9bd3060650 Remove a totally unnecessary splhigh/spl0 pair. 2000-08-21 02:29:32 +00:00
bjh21 3e6dc8178c Ensure that uvmexp.freemin is above the kernel reserved-page count.
When it wasn't (which could happen on a 4Mb machine with 32kb pages),
uvm_pagealloc_strat could refuse to allocate user memory, while the pagedaemon
didn't think it was worth freeing any more, resulting in the system seizing up.
2000-08-20 10:24:14 +00:00
thorpej 1b0163bd50 Garbage-collect a constant that nothing uses. 2000-08-16 16:32:06 +00:00
thorpej a91e7a7c6d Don't bother with a trampoline to start the pagedaemon and
reaper threads.
2000-08-12 22:41:53 +00:00
sommerfeld a7449460ec add comment warning about possible unlock/sleep race 2000-08-12 17:46:25 +00:00
sommerfeld 41c6473b10 Use ltsleep in a loop instead of simple_unlock/tsleep/goto try_again 2000-08-12 17:44:02 +00:00
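
The shape of that change, sketched against the old NetBSD interlock interfaces; the object and condition are stand-ins, and the fragment is illustrative rather than buildable kernel code:

    simple_lock(&obj->lock);
    while (resource_busy(obj)) {
        /*
         * ltsleep() atomically releases the interlock, sleeps, and
         * reacquires it -- no window for a missed wakeup, and no
         * unlock/tsleep/goto try_again dance.
         */
        ltsleep(obj, PVM, "objbusy", 0, &obj->lock);
    }
    /* ... use the resource with obj->lock held ... */
    simple_unlock(&obj->lock);
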
thorpej 6eb78dcb4e Update a comment in uvmfault_anonget() to reflect reality, and
make uvm_fault() handle uvmfault_anonget() failure properly (i.e.
don't unlock a lock that's already unlocked).
2000-08-06 00:22:53 +00:00
thorpej 93a5993242 Do something sane with a DIAGNOSTIC condition in a non-DIAGNOSTIC
kernel.
2000-08-06 00:21:57 +00:00
thorpej 504f88949c Correct a comment about locking wrt. uvmfault_anonget(). 2000-08-05 23:40:55 +00:00
thorpej d3a2c5d0f9 MALLOC()/FREE() are not to be used for variable size allocations. 2000-08-03 00:47:02 +00:00
thorpej d85e53c7cc MALLOC() is not to be used for variable-sized allocations. 2000-08-02 20:25:11 +00:00
thorpej a09c11603b MALLOC()/FREE() are not to be used for variable-sized allocations. 2000-08-02 20:23:23 +00:00
thorpej 72fa880bf6 Fix a fairly obvious locking error in amap_cow_now() -- the amap was
left locked upon exit from the function (how did this one slip for
so long?)
2000-08-02 19:24:29 +00:00
wiz be8ff811b7 Rename VM_INHERIT_* to MAP_INHERIT_* and move them to sys/sys/mman.h as
discussed on tech-kern.
Retire sys/uvm/uvm_inherit.h, update man page for minherit(2).
2000-08-01 00:53:07 +00:00
jeffs 05dfa34e69 Add uvm_km_valloc_prefer_wait(). Used to valloc with the passed in
voff_t being passed to PMAP_PREFER(), which results in the proper
virtual alignment of the allocated space.
2000-07-24 20:10:51 +00:00
mrg 5b2b68bb7a fix a cast for sparc64. 2000-07-10 13:37:00 +00:00
thorpej 1079b4c0ac - Avoid an integer overflow when checking if we have exceeded our
rlimit in sbrk.  Slightly modified from a patch from Artur Grabowski.
- Rearrange code slightly, partially from Artur Grabowski.
- Only adjust vm_dsize if the grow or shrink actually succeeds.
2000-07-02 17:40:08 +00:00
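
The overflow-avoidance idea from the first item, sketched in isolation (variable names assumed, not the kernel's):

    #include <stdint.h>

    /* return 1 if growing the data segment by incr stays within limit */
    static int
    brk_within_limit(uintptr_t dsize, uintptr_t incr, uintptr_t limit)
    {
        /* "dsize + incr > limit" can wrap around; rearrange so it cannot */
        if (dsize > limit)
            return 0;
        return incr <= limit - dsize;
    }
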
mrg dea44a9ec4 remove include of <vm/vm.h> 2000-06-27 17:29:17 +00:00
mrg 1f4b948b2f move the contents of <vm/vm.h> into <uvm/uvm_extern.h>. <vm/vm.h> is simply
an include of <uvm/uvm_extern.h> now.
2000-06-27 16:16:43 +00:00
mrg 88adda1288 more vm header file changes:
<vm/vm_extern.h> merged into <uvm/uvm_extern.h>
	<vm/vm_page.h> merged into <uvm/uvm_page.h>
	<vm/pmap.h> has become <uvm/uvm_pmap.h>

this leaves just <vm/vm.h> in NetBSD.
2000-06-27 09:00:14 +00:00
mrg cd9f783cb9 install uvm_pmap.h 2000-06-27 08:49:44 +00:00
simonb eeff58b5fd In udv_fault(), use an off_t for curr_offset so that the offset passed
to d_mmap isn't truncated on 64 bit architectures.
2000-06-27 06:14:24 +00:00
mrg 6b5536f253 restore a dropped #ifdef _KERNEL 2000-06-26 17:18:40 +00:00
mrg acdc45ce9a install uvm_param.h. 2000-06-26 17:01:34 +00:00
mrg 7665599771 <vm/vm_map.h> gets merged into <uvm/uvm_map.h> 2000-06-26 15:32:23 +00:00
mrg 4c698e84f6 <vm/vm_param.h> -> <uvm/uvm_param.h> 2000-06-26 14:58:58 +00:00
mrg 2f159a1bac remove/move more mach vm header files:
<vm/pglist.h> -> <uvm/uvm_pglist.h>
	<vm/vm_inherit.h> -> <uvm/uvm_inherit.h>
	<vm/vm_kern.h> -> into <uvm/uvm_extern.h>
	<vm/vm_object.h> -> nothing
	<vm/vm_pager.h> -> into <uvm/uvm_pager.h>

also includes a bunch of <vm/vm_page.h> include removals (due to redundancy
with <vm/vm.h>), and a scattering of other similar headers.
2000-06-26 14:20:25 +00:00
simonb 889c658b5b Change the kernel mmap interface so that the offset to map is an
"off_t" and the return value is a "paddr_t" to allow mappings
at offsets past 2^31 bytes.  Somewhat inspired by FreeBSD, which
only changed the offset to a "vm_offset_t".

Includes updates for the i386, pc532 and sh3 mmap from Jason Thorpe.
2000-06-26 04:55:19 +00:00
mrg f5f84f80c5 <vm/vm_prot.h> becomes <uvm/uvm_prot.h> 2000-06-25 13:37:51 +00:00
pk d3aef3ad38 uvm_detach: eliminate degenerate loop construction. 2000-06-24 21:47:28 +00:00
pk bdee69596e Insert two missing `simple_unlock()'s' in udv_detach(). 2000-06-24 21:26:16 +00:00
simonb 58d0ed9dd2 Set p->p_addr to NULL after it gets freed. 2000-06-18 05:20:27 +00:00
chs e72214422a initialize aref.ar_pageoff even if there's no amap. 2000-06-13 04:10:47 +00:00
soda d5b3fb3ce1 fix printf format mismatch, when paddr_t becomes (long long) on arc port. 2000-06-09 04:43:19 +00:00
thorpej b0afc900f5 Change UVM_UNLOCK_AND_WAIT() to use ltsleep() (it is now atomic, as
advertised).  Garbage-collect uvm_sleep().
2000-06-08 05:52:34 +00:00
pk 36a1354bc6 Change previous to use `vm_map_min(dstmap)' instead of hard-coding
VM_MIN_KERNEL_ADDRESS.
2000-06-05 07:28:56 +00:00
pk 51ff5f7cd1 Let uvm_map_extract() set the lower bound on the kernel address range
itself, instead of having its callers do that.
2000-06-02 12:02:43 +00:00
pk bf3a6b350b Shouldn't pass garbage to uvm_map_extract(). 2000-06-02 11:47:53 +00:00
thorpej 65184f2470 Change the comment before the vm_page_zero_enable global to indicate
what it will now be used for.
2000-05-29 19:25:56 +00:00
drochner f8a6b48d66 Don't silently truncate the voff_t offset to vaddr_t when passing it to
udv_attach. Pass the whole voff_t instead and do an explicit overflow
check before it is passed to the device's mmap handler (as "int", sadly).
2000-05-28 10:21:55 +00:00
thorpej e03e9e8086 Rather than starting init and creating kthreads by forking and then
doing a cpu_set_kpc(), just pass the entry point and argument all
the way down the fork path starting with fork1().  In order to
avoid special-casing the normal fork in every cpu_fork(), MI code
passes down child_return() and the child process pointer explicitly.

This fixes a race condition on multiprocessor systems; a CPU could
grab the newly created process (which has been placed on a run queue)
before cpu_set_kpc() would be performed.
2000-05-28 05:48:59 +00:00
thorpej a7d0570e67 First sweep at scheduler state cleanup. Collect MI scheduler
state into global and per-CPU scheduler state:

	- Global state: sched_qs (run queues), sched_whichqs (bitmap
	  of non-empty run queues), sched_slpque (sleep queues).
	  NOTE: These may collectively move into a struct schedstate
	  at some point in the future.

	- Per-CPU state, struct schedstate_percpu: spc_runtime
	  (time process on this CPU started running), spc_flags
	  (replaces struct proc's p_schedflags), and
	  spc_curpriority (usrpri of processes on this CPU).

	- Every platform must now supply a struct cpu_info and
	  a curcpu() macro.  Simplify existing cpu_info declarations
	  where appropriate.

	- All references to per-CPU scheduler state now made through
	  curcpu().  NOTE: this will likely be adjusted in the future
	  after further changes to struct proc are made.

Tested on i386 and Alpha.  Changes are mostly mechanical, but apologies
in advance if it doesn't compile on a particular platform.
2000-05-26 21:19:19 +00:00
thorpej 8964c35eca Introduce a new process state distinct from SRUN called SONPROC
which indicates that the process is actually running on a
processor.  Test against SONPROC as appropriate rather than
combinations of SRUN and curproc.  Update all context switch code
to properly set SONPROC when the process becomes the current
process on the CPU.
2000-05-26 00:36:42 +00:00
enami 332c98526a - Move the comment, which describes that calling the function
uvm_map_pageable(map, ...) implies unlocking the passed map, just before the
  function call.
- If we bail out before calling the uvm_map_pageable, unlock the map
  by ourselves to prevent a panic ``locking against myself''.  The panic is,
  for example, caused when cdrecord is invoked with too large a fifo size.
2000-05-23 02:19:20 +00:00
thorpej 1410091b4e Clean up a comment. 2000-05-20 19:54:01 +00:00
thorpej cd737c4016 Remove VM_PROT_EXECUTE from the permissions used to map the page
for pager I/O -- it is not needed, and including it leads to
unnecessary I-cache flushes.
2000-05-20 03:36:06 +00:00
thorpej 646555bbd5 Clean up some indentation lossage in uvm_map_extract(). 2000-05-19 17:43:55 +00:00
thorpej f636538446 NULL != 0 2000-05-19 04:34:39 +00:00
thorpej 655b21e17d Tell uvm_pagermapin() the direction of the I/O so that it can map
with only the protection that it needs.
2000-05-19 03:45:04 +00:00
thorpej f3b078d268 __predict_false() an error check. 2000-05-08 23:13:42 +00:00
thorpej d45c9982df __predict_false() DIAGNOSTIC error checks. 2000-05-08 23:11:53 +00:00
thorpej 39294d89e5 __predict_false() out-of-resource conditions and DIAGNOSTIC error checks. 2000-05-08 23:10:20 +00:00
thorpej b5b82faa4a uvm_map_setup(): We almost never set up an interrupt-safe map, but we
set up quite a few regular ones (at every fork!), so put interrupt-
safe map setup in the slow path with a __predict_false().

uvm_map_reference(): __predict_false() the check for NULL map.
uvm_map_deallocate(): Likewise.
2000-05-08 22:59:35 +00:00
thorpej 9ec517a68e Changes necessary to implement pre-zero'ing of pages in the idle loop:
- Make page free lists have two actual queues: known-zero pages and
  pages with unknown contents.
- Implement uvm_pageidlezero().  This function attempts to zero up to
  the target number of pages until the target has been reached (currently
  target is `all free pages') or until whichqs becomes non-zero (indicating
  that a process is ready to run).
- Define a new hook for the pmap module for pre-zero'ing pages.  This is
  used to zero the pages using uncached access.  This allows us to zero
  as many pages as we want without polluting the cache.

In order to use this feature, each platform must add the appropriate
glue in their idle loop.
2000-04-24 17:12:00 +00:00
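
A schematic of the idle-loop zeroing described above; whichqs and the abort semantics follow the commit message, everything else is a stand-in:

    struct vm_page;
    /* stand-ins for the free-list and pmap interfaces */
    extern volatile unsigned long whichqs;      /* non-zero: a process is runnable */
    struct vm_page *grab_unknown_page(void);
    void put_zero_page(struct vm_page *);
    void put_unknown_page(struct vm_page *);
    int pmap_pageidlezero(struct vm_page *);    /* 0: zeroing was aborted */

    void
    idle_zero_sketch(void)
    {
        struct vm_page *pg;

        while (whichqs == 0 && (pg = grab_unknown_page()) != NULL) {
            if (!pmap_pageidlezero(pg)) {
                /* aborted mid-page: put it back and go run the process */
                put_unknown_page(pg);
                return;
            }
            put_zero_page(pg);  /* move to the known-zero queue */
        }
    }
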
chs d444bb4032 undo rev 1.13, which is to say, don't block interrupts while deactivating
one pmap and activating another.  this isn't actually necessary (since
pmap_activate() and pmap_deactivate() affect only user-level mappings,
which cannot be accessed from interrupts anyway), and pmap_activate()
is very slow on old sun4c sparcs so we can't block interrupts for this long.
this fixes PR 8322.
2000-04-16 20:52:29 +00:00
mrg 6b7f13609a remove <vm/vm_swap.h> and <vm/vm_conf.h> 2000-04-15 18:08:12 +00:00
pk 741c324930 Finish previous. 2000-04-11 08:12:14 +00:00
chs 8724ec3b5c avoid declaring "i" as a local variable in a macro.
it's too easy to shadow another local.
2000-04-11 02:34:19 +00:00
chs 66014d2dff sparc -> __sparc__
print lock status in uvm_object_printit().
2000-04-10 02:21:26 +00:00
chs 061ecbff46 tidy. 2000-04-10 02:20:06 +00:00
thorpej eeb3a38cfc Use UVM_PGA_ZERO in the promote-zero-fault case of uvm_fault(). 2000-04-10 01:17:41 +00:00
thorpej 345b3d2136 Use UVM_PGA_ZERO in a few (easy) places. 2000-04-10 00:32:46 +00:00
thorpej 2c48131727 Add UVM_PGA_ZERO which instructs uvm_pagealloc{,_strat}() to return a
zero'd, ! PG_CLEAN page, as if it were uvm_pagezero()'d.
2000-04-10 00:28:05 +00:00
chs d75d6fb164 restore a brelvp() that I removed in a moment of overzealousness.
Debugged by:  Brian Grayson <bgrayson@netbsd.org>
2000-04-07 08:27:28 +00:00
chs 60a7a67f71 remove uvm_shareprot(). no longer needed since the demise of share maps. 2000-04-03 08:09:02 +00:00
chs 03a4ef3a79 remove the "shareprot" pagerop. it's not needed anymore since
share maps are long gone.
2000-04-03 07:35:23 +00:00
thorpej 2bc5adb20e Instead of checking vm_physmem[<physseg>].pgs to determine if
uvm_page_init() has completed, add a boolean uvm.page_init_done,
and test against that.  Use this same boolean (rather than
pmap_initialized) in pmap_growkernel() to determine if we are
being called via uvm_page_init() to grow the kernel address space.

This fixes a problem on some i386 configurations where pmap_init()
itself was needing to have the kernel page table grown, and since
pmap_initialized was not yet set to TRUE, pmap_growkernel() was
choosing the wrong code path.

Fix tested by Havard Eidnes.
2000-04-02 20:39:14 +00:00
augustss 641df97d12 Remove more register declarations. 2000-03-30 12:31:50 +00:00
simonb 171910636e Delete redundant decl of aobj_pager - it's in <uvm/uvm_aobj.h>. 2000-03-30 02:49:55 +00:00
simonb b2a7dc176d Remove redundant decl for uvmspace_fork() - it's in <uvm/uvm_extern.h>. 2000-03-29 04:05:47 +00:00
simonb 0fd09c8496 Don't need to include <sys/conf.h> here. 2000-03-29 03:43:33 +00:00
kleink 7e35a43e67 In mmap(), bail out with EOVERFLOW when mapping a regular file and the file
offset plus mapping length cannot be represented in an off_t.
2000-03-28 18:45:19 +00:00
kleink a51ef2c3f9 Kill duplicate uvn_attach() prototype (public, already in uvm_vnode.h). 2000-03-27 16:58:23 +00:00
kleink 6e5b64c8a0 Merge parts of chs-ubc2 into the trunk:
Add a new type voff_t (defined as a synonym for off_t) to describe offsets
into uvm objects, and update the appropriate interfaces to use it, the
most visible effect being the ability to mmap() file offsets beyond
the range of a vaddr_t.

Originally by Chuck Silvers; blame me for problems caused by merging this
into non-UBC.
2000-03-26 20:54:45 +00:00
kleink 08025fbc20 Kill duplicate udv_attach() prototype; it's a public interface, and declared
in uvm_device.h.
2000-03-26 20:46:59 +00:00
soren 95054da1a1 Fix doubled 'the's in comments. 2000-03-13 23:52:25 +00:00
thorpej 9671588a30 Allocate the page buckets out of kernel_map, not kmem_map. Saves 16
or so kmem_map pages on a 32MB SPARCstation 2.
2000-02-13 03:34:40 +00:00
thorpej eb9cbbe294 Add some very simple code to auto-size the kmem_map. We take the
amount of physical memory, divide it by 4, and then allow machine
dependent code to place upper and lower bounds on the size.  Export
the computed value to userspace via the new "vm.nkmempages" sysctl.

NKMEMCLUSTERS is now deprecated and will generate an error if you
attempt to use it.  The new option, should you choose to use it,
is called NKMEMPAGES, and two new options NKMEMPAGES_MIN and
NKMEMPAGES_MAX allow the user to configure the bounds in the kernel
config file.
2000-02-11 19:22:52 +00:00
thorpej fe551f0e64 Fix a bug in disksort_*() which caused non-optimal ordering when multiple
active partitions were on a single spindle.  Add a b_rawblkno member to
struct buf which contains the non-partition-relative block number to sort
by.
2000-02-07 20:16:47 +00:00
chs 26c744c85b remove a debug printf that has outlived its usefulness. 2000-01-28 08:02:48 +00:00
thorpej 0b0aecffd6 Update for sys/buf.h/disksort_*() changes. 2000-01-21 23:43:10 +00:00
chs 16f0ca3612 add support for ``swapctl -d'' (removing swap space).
improve handling of i/o errors in swap space.

reviewed by:  Chuck Cranor
2000-01-11 06:57:49 +00:00
wrstuden 56f2ef9f29 Revert rev 1.28 -> 1.29. The VOP_CLOSE call was happening with the vnode
already locked, so don't lock it here.
2000-01-04 21:37:54 +00:00
eeh c0ac678704 I should have made uvm_page_physload() take paddr_t's instead of vaddr_t's.
Also, add uvm_coredump32().
1999-12-30 16:09:47 +00:00
thorpej 7287dd22c6 Remove a piece of code introduced in rev 1.36 that I didn't intend to
commit.
1999-12-11 05:38:41 +00:00
fvdl 5277c5e448 CL* clearout 1999-12-04 23:14:40 +00:00
drochner 38e73c0c99 in uvm_page_physget(), try the vm_physmem[] chunks in the order of their
"free_list" attributes, to save DMA memory
1999-12-01 16:08:32 +00:00
thorpej 63494b0b50 Avoid an integer overflow on systems w/ more than 2G of RAM. 1999-11-30 18:34:23 +00:00
drochner b1f2453dee add a diagnostic panic to catch illegal memory ranges passed to
uvm_page_physload()
1999-11-24 18:28:49 +00:00
fvdl 0b1963121a Add Kirk McKusick's soft updates code to the trunk. Not enabled by
default, as the copyright on the main file (ffs_softdep.c) is such
that it has been put into gnusrc. options SOFTDEP will pull this
in. This code also contains the trickle syncer.

Bump version number to 1.4O
1999-11-15 18:49:07 +00:00
thorpej 1da427a80a Change the pmap_enter() API slightly; pmap_enter() now returns an error
value (KERN_SUCCESS or KERN_RESOURCE_SHORTAGE) indicating if it succeeded
or failed.  Change the `wired' and `access_type' arguments to a single
`flags' argument, which includes the access type, and flags:

	PMAP_WIRED	the old `wired' boolean
	PMAP_CANFAIL	pmap_enter() is allowed to fail

If PMAP_CANFAIL is not specified, the pmap should behave as it always
has in the face of a drastic resource shortage: fall over dead.

Change the fault handler to deal with failure (which indicates resource
shortage) by unlocking everything, waiting for the pagedaemon to free
more memory, then retrying the fault.
1999-11-13 00:24:38 +00:00
thorpej a25b1ab916 Always pass all arguments to uvm_sleep(). 1999-11-13 00:21:17 +00:00
thorpej 8e930a51fe Const poison uvm_wait(). 1999-11-04 21:51:42 +00:00
ross 0f2e70dfa4 Patch from chuq for uvm r/w map oscillation bug.
Fixes the XalphaNetBSD slowdown.
1999-10-24 16:29:23 +00:00
chs b16ae5a8a5 put various debugging printfs under #ifdef DEBUG. 1999-10-19 16:04:45 +00:00
wrstuden e682a080e9 In spec_close(), if we're not doing a non-blocking close and VXLOCK is
not set, unlock the vnode before calling the device's close routine and
relock it after it returns. tty close routines will sleep waiting for
buffers to drain, which oftentimes won't happen as the other side needs
to grab the vnode lock first.

Make all unmount routines lock the device vnode before calling VOP_CLOSE().
1999-10-16 23:53:26 +00:00
chs f3a668ed84 eliminate the PMAP_NEW option by making it required for all ports.
ports which previously had no support for PMAP_NEW now implement
the pmap_k* interfaces as wrappers around the non-k versions.
1999-09-12 01:16:55 +00:00
thorpej 23e83a7ac7 When handling the MADV_FREE case, if the amap or aobj has more than
one reference, go through the deactivate path; the page may actually
be in use by another process.

Fixes kern/8239.
1999-08-21 02:19:05 +00:00
ross 7c367407aa In uvm_anon_init() and uvm_anon_add(), initialize the ref count lock. 1999-08-14 06:25:48 +00:00
thorpej 050aaac26e Fix the error recovery in uvm_map_pageable_all(). 1999-08-03 00:38:33 +00:00
thorpej ea8fb3e04a Turn the proclist lock into a read/write spinlock. Update proclist locking
calls to reflect this.  Also, block statclock rather than softclock
in the proclist locking functions, to address a problem reported on
current-users by Sean Doran.
1999-07-25 06:30:33 +00:00
thorpej 3f176180d5 Garbage collect thread_sleep()/thread_wakeup() left over from the old
Mach VM code.  Also nuke iprintf(), which was no longer used anywhere.

Add proclist locking where appropriate.
1999-07-22 22:58:38 +00:00
thorpej 2c668fb0d4 0 -> FALSE in a few places. 1999-07-22 21:27:32 +00:00
cgd 4eb46531af make sure 'wide' fault handling is actually done only once per fault.
('narrow' was mistakenly set to FALSE instead of TRUE.)  Committed after
discussion with chuq.
1999-07-19 19:02:22 +00:00
thorpej 5310e69363 Fix PR #8023 from Bernd Ernesti: when MADV_FREE'ing a region which spanned
more than one VM map entry, a typo caused amap_unadd() to attempt to
remove anons from the wrong amap.  Fix that typo.
1999-07-19 17:45:23 +00:00
chs a8f10f9e37 allow uvm_km_alloc_poolpage1() to use kernel-reserve pages. 1999-07-18 22:55:30 +00:00
thorpej 5ee6f3960d Rework uvm_map_protect():
- Fix some locking bugs; a couple of places would return an error condition
  without unlocking the map.
- Deal with maps marked WIREFUTURE; if making an entry VM_PROT_NONE ->
  anything else, and it is not already marked as wired, wire it.
1999-07-18 00:41:56 +00:00
thorpej b6f435026c Add a set of "lockflags", which can control the locking behavior
of some functions.  Use these flags in uvm_map_pageable() to determine
if the map is locked on entry (replaces an already present boolean_t
argument `islocked'), and if the function should return with the map
still locked.
1999-07-17 21:35:49 +00:00
thorpej fcc55e7687 Garbage-collect uvm_km_get(); nothing actually uses it. 1999-07-17 06:41:36 +00:00
thorpej a448b59581 Implement uao_flush(). This is pretty much identical to the "amap flush"
code in uvm_map_clean().
1999-07-17 06:06:36 +00:00
thorpej 8e06a75bcb Fix an operator precedence error which caused msync(2) to fail to pass
the PGO_CLEANIT flag to the object pagers.  Fixes PR #7978, from
Matthias Pfaller.
1999-07-14 21:06:30 +00:00
kleink e79a283e47 XSH5: change function signature to `void *sbrk(intptr_t)'. 1999-07-12 21:55:19 +00:00
thorpej ff05773b4a Back out the change I made yesterday. It seems to cause some trouble
for some folks.
1999-07-11 17:47:12 +00:00
thorpej a0555db3e0 Simplify uvm_fault_unwire_locked() a little. 1999-07-10 21:46:56 +00:00
thorpej c0389be5da Make a comment reflect reality. 1999-07-10 20:40:23 +00:00
thorpej d75fb0f6b0 Slightly better test for "object with no real pages". Test for NULL
pgo_releasepg rather than if the pager is the device pager.
1999-07-10 20:29:24 +00:00
thorpej 3ebbe095e0 Change the pmap_extract() interface to:
boolean_t pmap_extract(pmap_t, vaddr_t, paddr_t *);
This makes it possible for the pmap to map physical address 0.
1999-07-08 18:05:21 +00:00
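
A sketch of the new shape; the table walk is a stand-in, but the signature follows the commit message:

    typedef int boolean_t;
    typedef unsigned long vaddr_t;
    typedef unsigned long paddr_t;
    struct pmap;

    /* stand-in for the pmap's real page-table lookup */
    int table_lookup(struct pmap *, vaddr_t, paddr_t *);

    boolean_t
    pmap_extract_sketch(struct pmap *pmap, vaddr_t va, paddr_t *pap)
    {
        paddr_t pa;

        if (!table_lookup(pmap, va, &pa))
            return 0;       /* no mapping */
        if (pap != NULL)
            *pap = pa;      /* physical address 0 is now representable */
        return 1;
    }
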
thorpej 6885fbe3d1 Teeny bit of style policing. 1999-07-08 01:02:44 +00:00
thorpej ec74ea9486 Correct a comment. 1999-07-08 00:52:45 +00:00
thorpej 4ef1f3670d Fix a thinko which could cause a NULL pointer deref, in the PGO_FREE
case.
1999-07-07 21:51:35 +00:00
thorpej 62dcdc109b In the PGO_FREE case of uvm_map_clean()'s amap cleaning, skip wired
pages.

XXX This should be handled better in the future, probably by marking the
XXX page as released, and making uvm_pageunwire() free the page when
XXX the wire count on a released page reaches zero.
1999-07-07 21:04:22 +00:00
thorpej 4e398a6ded Add some more meat to madvise(2):
* Implement MADV_DONTNEED: deactivate pages in the specified range,
  semantics similar to Solaris's MADV_DONTNEED.
* Add MADV_FREE: free pages and swap resources associated with the
  specified range, causing the range to be reloaded from backing
  store (vnodes) or zero-fill (anonymous), semantics like FreeBSD's
  MADV_FREE and like Digital UNIX's MADV_DONTNEED (isn't it SO GREAT
  that madvise(2) isn't standardized!?)

As part of this, move the non-map-modifying advice handling out of
uvm_map_advise(), and into sys_madvise().

As another part, implement general amap cleaning in uvm_map_clean(), and
change uvm_map_clean() to only push dirty pages to disk if PGO_CLEANIT
is set in its flags (and update sys___msync13() accordingly).  XXX Add
a patchable global "amap_clean_works", defaulting to 1, which can disable
the amap cleaning code, just in case problems are unearthed; this gives
a developer/user a quick way to recover and send a bug report (e.g. boot
into DDB and change the value).

XXX Still need to implement a real uao_flush().

XXX Need to update the manual page.

With these changes, rebuilding libc will automatically cause the new
malloc(3) to use MADV_FREE to actually release pages and swap resources
when it decides that can be done.
1999-07-07 06:02:21 +00:00
thorpej f631c1adae Update a comment in uao_flush(). 1999-07-07 05:32:26 +00:00
thorpej 121fe0bc26 Don't bother returning the "slot" number from amap_add():
* Nothing currently uses this return value.
* It's arguably an abstraction violation.

Fix amap_unadd()'s API to be consistent w/ amap_add()'s: rather than
take a vm_amap * and a slot number, take a vm_aref * and an offset.

It's now actually possible to use amap_unadd() to remove an anon from
an amap.
1999-07-07 05:31:40 +00:00
cgd c1b7b40399 from the comment added to the code:
> XXX (in)sanity check.  We don't do proper datasize checking
> XXX for anonymous (or private writable) mmap().  However,
> XXX know that if we're trying to allocate more than the amount
> XXX remaining under our current data size limit, _that_ should
> XXX be disallowed.
This is one link on the chain of lossage known as PR#7897.  It's
definitely not the right fix, but it's better than nothing.
1999-07-06 02:31:05 +00:00
cgd 5cc6a54251 fix allocation handling bugs in amap_alloc1(). if the first or second
sub-structure malloc() failed, it was quite likely that the function
would return success incorrectly.  This is the direct cause of the bug
reported in PR#7897.  (Thanks to chs for helping to track it down.)
1999-07-06 02:15:53 +00:00
thorpej 3c83723113 Bring in additional uvmexp members from chs-ubc2, so that VM stats can
be read no matter which kernel you're running.
1999-07-02 23:20:58 +00:00
thorpej 11c67d01a5 Fix a corner case locking error, which could lead to map corruption in
SMP environments.  See comments in <vm/vm_map.h> for details.
1999-07-01 20:07:05 +00:00
thorpej c859e43fbb Fix typo. From Bill Studenmund. 1999-07-01 18:40:39 +00:00
thorpej abb48c5b71 Protect prototypes, certain macros, and inlines from userland. 1999-06-21 17:25:11 +00:00
thorpej 72fcd1784e Fix a typo. 1999-06-19 00:11:17 +00:00
thorpej 9e9f068f43 Add the guts of mlockall(MCL_FUTURE).  This requires passing a process's
"memlock" resource limit to uvm_mmap().  Update all calls accordingly.
1999-06-18 05:13:45 +00:00
thorpej 01ac9b6529 In sys_mmap():
- rather than treating MAP_COPY like MAP_PRIVATE by sheer virtue of it not
  being MAP_SHARED, actually convert the MAP_COPY flag into MAP_PRIVATE.
- return EINVAL if MAP_SHARED and MAP_PRIVATE are both included in flags.
1999-06-17 21:05:19 +00:00
thorpej 0288ffb53a pmap_change_wiring() -> pmap_unwire(). 1999-06-17 19:23:20 +00:00
thorpej f5a527bb4e Remove pmap_pageable(); no pmap implements it, and it is not really useful,
because pmap_enter()/pmap_change_wiring() (soon to be pmap_unwire())
communicate the information in greater detail.
1999-06-17 18:21:21 +00:00
thorpej 12347b2657 Make uvm_vslock() return the error code from uvm_fault_wire(). All places
which use uvm_vslock() should now test the return value.  If it's not
KERN_SUCCESS, wiring the pages failed, so the operation which is using
uvm_vslock() should error out.

XXX We currently just EFAULT a failed uvm_vslock().  We may want to do
more about translating error codes in the future.
1999-06-17 15:47:22 +00:00
thorpej 1f97ad987f In uvm_useracc(), make sure we have a read lock on the map before
calling uvm_map_checkprot().
1999-06-17 05:57:33 +00:00
thorpej f274deb90a The i386 and pc532 pmaps are officially fixed. 1999-06-17 00:24:10 +00:00
thorpej d1d9b366cd When unwiring a range in uvm_fault_unwire_locked(), don't call
pmap_change_wiring(...,FALSE) unless the map entry claims the address
is unwired.  This fixes the following scenario, as described on
tech-kern@netbsd.org on Wed 6/16/1999 12:25:23:

	- User mlock(2)'s a buffer, to guarantee it will never become
	  non-resident while he is using it.

	- User then does physio to that buffer.  Physio calls uvm_vslock()
	  to lock down the pages and ensure that page faults do not happen
	  while the I/O is in progress (possibly in interrupt context).

	- Physio does the I/O.

	- Physio calls uvm_vsunlock().  This calls uvm_fault_unwire().

	  >>> HERE IS WHERE THE PROBLEM OCCURS <<<

	  uvm_fault_unwire() calls pmap_change_wiring(..., FALSE),
	  which now gives the pmap free rein to recycle the mapping
	  information for that page, which is illegal; the mapping is
	  still wired (due to the mlock(2)), but now access of the
	  page could cause a non-protection page fault (disallowed).

	  NOTE: This could eventually lead to a panic when the user
	  subsequently munlock(2)'s the buffer and the mapping info
	  has been recycled for use by another mapping!
1999-06-16 23:02:40 +00:00
thorpej b861180119 * Rename uvm_fault_unwire() to uvm_fault_unwire_locked(), and require that
the map be at least read-locked to call this function.  This requirement
  will be taken advantage of in a future commit.
* Write a uvm_fault_unwire() wrapper which read-locks the map and calls
  uvm_fault_unwire_locked().
* Update the comments describing the locking constraints of uvm_fault_wire()
  and uvm_fault_unwire().
1999-06-16 22:11:23 +00:00
thorpej 42c671ffba Modify uvm_map_pageable() and uvm_map_pageable_all() to follow POSIX 1003.1b
semantics.  That is, regardless of the number of mlock/mlockall calls,
an munlock/munlockall actually unlocks the region (i.e. sets wiring count
to 0).

Add a comment describing why uvm_map_pageable() should not be used for
transient page wirings (e.g. for physio) -- note, it's currently only
(ab)used in this way by a few pieces of code which are known to be
broken, i.e. the Amiga and Atari pmaps, and i386 and pc532 if PMAP_NEW is
not used.  The i386 GDT code uses uvm_map_pageable(), but in a safe
way, and could be trivially converted to use uvm_fault_wire() instead.
1999-06-16 19:34:24 +00:00
thorpej 23c6eb95d3 Remove an incorrect-and-no-longer-relevant comment. 1999-06-16 18:43:28 +00:00
minoura ff8fb3ef82 Remove extra ]. 1999-06-16 17:25:39 +00:00
thorpej ee9703dea9 Add a macro to test if a map entry is wired. 1999-06-16 00:29:04 +00:00
thorpej c5a43ae10c Several changes, developed and tested concurrently:
* Provide POSIX 1003.1b mlockall(2) and munlockall(2) system calls.
  MCL_CURRENT is presently implemented.  MCL_FUTURE is not fully
  implemented.  Also, the same one-unlock-for-every-lock caveat
  currently applies here as it does to mlock(2).  This will be
  addressed in a future commit.
* Provide the mincore(2) system call, with the same semantics as
  Solaris.
* Clean up the error recovery in uvm_map_pageable().
* Fix a bug where a process would hang if attempting to mlock a
  zero-fill region where none of the pages in that region are resident.
  [ This fix has been submitted for inclusion in 1.4.1 ]
1999-06-15 23:27:47 +00:00
thorpej 45e5ec934e Use a more descriptive wait message for VM map locks. 1999-06-14 22:05:23 +00:00
thorpej 5de7bac9b1 Print the map's flags in "show map" from DDB. 1999-06-07 16:31:42 +00:00
thorpej 2c3dc83a64 Keep interrupt-safe maps on an additional queue. In uvm_fault(), if we're
looking up a kernel address, check to see if the address is on this
"interrupt-safe" list.  If so, return failure immediately.  This prevents
a locking screw if a page fault is taken on an interrupt-safe map in or
out of interrupt context.
1999-06-04 23:38:41 +00:00
thorpej 8c59c67288 Just say no to interrupt-safe maps. 1999-06-03 00:05:45 +00:00
thorpej acf81da21e A page fault on a non-pageable map is always fatal. 1999-06-02 23:26:21 +00:00
thorpej 779ecdd773 Simplify the last change even more; we downgraded to a shared (read) lock, so
setting recursive has no effect!  The kernel lock manager doesn't allow
an exclusive recursion into a shared lock.  This situation must simply
be avoided.  The only place where this might be a problem is the (ab)use
of uvm_map_pageable() in the Utah-derived pmaps for m68k (they should
either toss the iffy scheme they use completely, or use something like
uvm_fault_wire()).

In addition, once we have looped over uvm_fault_wire(), only upgrade to
an exclusive (write) lock if we need to modify the map again (i.e.
wiring a page failed).
1999-06-02 22:40:51 +00:00
thorpej 0723d57281 Clean up the locking mess in uvm_map_pageable() a little... Most importantly,
don't unlock a kernel map (!!!) and then relock it later; a recursive lock,
as is used in the user map case, is fine.  Also, don't change map entries
while only holding a read lock on the map.  Instead, if we fail to wire
a page, clear recursive locking, and upgrade back to a write lock before
dropping the wiring count on the remaining map entries.
1999-06-02 21:23:08 +00:00
mrg 2332079d3f unlock the map for unknown arguments to uvm_map_advise. from Soren S. Jorvang in PR kern/7681 1999-05-31 23:36:23 +00:00
thorpej fb36fe649a A little spring cleaning in the unwire case of uvm_map_pageable(). 1999-05-28 22:54:12 +00:00
thorpej 8d8badbd8f Make uvm_fault_unwire() take a vm_map_t, rather than a pmap_t, for
consistency.  Use this opportunity for checking for intrsafe map use
in this routine (which is illegal).
1999-05-28 20:49:51 +00:00
thorpej 108b13d5a9 Make "intrsafe" maps locked only by exclusive spin locks, never sleep
locks (and thus, never shared locks).  Move the "set/clear recursive"
functions to uvm_map.c, which is the only place they're used (and
they should go away anyhow).  Delete some unused cruft.
1999-05-28 20:31:42 +00:00
thorpej 5920638afe Change the main comment block to indicate why PMAP_NEW (specifically,
pmap_kenter*()) is not required for {O,A}->K page loans.
1999-05-27 21:50:03 +00:00
thorpej 80de1e9903 Upon further investigation, in uvm_map_pageable(), entry->protection is the
right access_type to pass to uvm_fault_wire().  This way, if the entry has
VM_PROT_WRITE, and the entry is marked COW, the copy will happen immediately
in uvm_fault(), as if the access were performed.
1999-05-26 23:53:48 +00:00
thorpej 6b655611b1 Wired kernel mappings are wired; pass VM_PROT_READ|VM_PROT_WRITE for
access_type to pmap_enter() to ensure that when these mappings are accessed,
possibly in interrupt context, that they won't cause mod/ref emulation
page faults.
1999-05-26 19:27:49 +00:00
thorpej 2580d306ab Change the vm_map's "entries_pageable" member to a r/o flags member, which
has PAGEABLE and INTRSAFE flags.  PAGEABLE now really means "pageable",
not "allocate vm_map_entry's from non-static pool", so update all map
creations to reflect that.  INTRSAFE maps are maps that are used in
interrupt context (e.g. kmem_map, mb_map), and thus use the static
map entry pool (XXX as does kernel_map, for now).  This will eventually
change now that these maps are locked, as well.
1999-05-26 19:16:28 +00:00
thorpej 00a1f75cf6 In uvm_pagermapin(), pass VM_PROT_READ|VM_PROT_WRITE as access_type, to
ensure we don't take mod/ref emulation faults in an interrupt context
(e.g. during the i/o operation).  This is safe because:
	- For a pageout operation, the page is already known to be
	  modified, and the pagedaemon will pmap_clear_modify() after
	  the pageout has completed.
	- For a pagein operation, pagers must already pmap_clear_modify()
	  after the pagein operation is complete, because the i/o may have
	  been done with e.g. programmed i/o.
XXX It would be nice to know the i/o direction so that we can call
XXX pmap_enter() with only the protection and access_type necessary.
1999-05-26 06:42:57 +00:00
thorpej b2e9c635ec Pass an access_type to uvm_vslock(). 1999-05-26 01:05:24 +00:00
thorpej 7b4db806b6 In uvm_map_pageable(), pass VM_PROT_NONE as access type to uvm_fault_wire()
for now.  XXX This needs to be reexamined.
1999-05-26 00:36:53 +00:00
thorpej 9d0ea0969e - uvm_fork()/uvm_swapin(): pass VM_PROT_READ|VM_PROT_WRITE access_type
to uvm_fault_wire(), to guarantee that the kernel stacks will not
  cause even a mod/ref emulation fault.
- uvm_vslock(): pass VM_PROT_NONE until this function is updated.
1999-05-26 00:33:52 +00:00
thorpej 195c1a2741 Pass an access_type to uvm_fault_wire(), which it forwards on to
uvm_fault().
1999-05-26 00:32:42 +00:00
thorpej 0ff8d3ac1a Define a new kernel object type, "intrsafe", which are used for objects
which can be used in an interrupt context.  Use pmap_kenter*() and
pmap_kremove() only for mappings owned by these objects.

Fixes some locking protocol issues related to MP support, and eliminates
all of the pmap_enter vs. pmap_kremove inconsistencies.
1999-05-25 20:30:08 +00:00
thorpej 789c9e7c48 Add a comment explaining why using pmap_kenter_pa() is safe here. 1999-05-25 01:34:13 +00:00
thorpej 85f8d1343c Macro'ize the test for "object is a kernel object". 1999-05-25 00:09:00 +00:00
thorpej 9b731fd45c Remove a comment in uvm_pager_dropcluster() about PMAP_NEW and mod/ref
attributes for the page; it no longer applies, since we don't use
pmap_kenter_pgs() anymore.
1999-05-24 23:36:23 +00:00
thorpej 7becac6b9a Don't use pmap_kenter_pgs() for entering pager_map mappings. The pages
are still owned by the object which is paging, and so the test for a kernel
object in uvm_unmap_remove() will cause pmap_remove() to be used instead
of pmap_kremove().

This was a MAJOR source of pmap_remove() vs pmap_kremove() inconsistency
(which caused the busted kernel pmap statistics, and a cause of much
locking hair on MP systems).
1999-05-24 23:30:44 +00:00
thorpej 6eb9ee7cd8 - Change uvm_{lock,unlock}_fpageq() to return/take the previous interrupt
level directly, instead of making the caller wrap the calls in
  splimp()/splx().
- Add a comment documenting that interrupts that cause memory allocation
  must be blocked while the free page queue is locked.

Since interrupts must be blocked while this lock is asserted, tying them
together like this helps to prevent mistakes.
1999-05-24 19:10:57 +00:00
mrg f1f95c374b implement madvise() for MADV_{NORMAL,RANDOM,SEQUENTIAL}, others are not yet done. 1999-05-23 06:27:13 +00:00
thorpej f311a1c308 Make a slight modification of pmap_growkernel() -- it now returns the
end of the mappable kernel virtual address space.  Previously, it would
get called more often than necessary, because the caller only knew what
was requested.

Also, export uvm_maxkaddr so that uvm_pageboot_alloc() can grow the
kernel pmap if necessary, as well.  Note that pmap_growkernel() must
now be able to handle being called before pmap_init().
1999-05-20 23:03:23 +00:00
thorpej 1d197b8e7b If we run out of virtual space in uvm_pageboot_alloc(), fail gracefully
rather than unpredictably.
1999-05-20 20:07:55 +00:00
chs a5d3e8dae9 when wiring swap-backed pages, clear the PG_CLEAN flag before
releasing any swap resources.  if we don't do this, we can
end up with a clean, swap-backed page, which is illegal.
tracked down by Bill Sommerfeld, fixes PR 7578.
1999-05-19 06:14:15 +00:00
thorpej c10a926030 Allow the caller to specify a stack for the child process. If NULL,
the child inherits the stack pointer from the parent (traditional
behavior).  Like the signal stack, the stack area is specified as
a low address and a size; machine-dependent code accounts for stack
direction.

This is required for clone(2).
1999-05-13 21:58:32 +00:00
thorpej f5108f64e7 Add an optional pmap hook, pmap_fork(), to be called at the end of
uvmspace_fork().

pmap_fork() is used to "fork a pmap", that is copy data from one pmap
to the other that is NOT related to actual mappings in the pmap, but is
otherwise logically coupled to the address space.
1999-05-12 19:11:23 +00:00