Commit Graph

105 Commits

Author SHA1 Message Date
chris 0e7661f023 Update pmap_update() to take the updated pmap as an argument.
This will allow improvements to the pmaps so that they can more easily defer expensive operations, e.g. TLB/cache flushes, until the last possible moment.

Currently this is a no-op on most platforms, so they should see no difference.

Reviewed by Jason.
2001-09-10 21:19:08 +00:00
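
A sketch of the deferral pattern this enables, in kernel context (the batch helper and its name are illustrative, not the committed code):

    /* enter_batch() is hypothetical: map several pages, flush once */
    void
    enter_batch(vaddr_t va, paddr_t *pa, int npages)
    {
            int i;

            for (i = 0; i < npages; i++)
                    pmap_kenter_pa(va + ptoa(i), pa[i],
                        VM_PROT_READ | VM_PROT_WRITE);
            pmap_update(pmap_kernel());     /* one deferred TLB/cache flush */
    }
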
chs 2133049a7c create a new pool for map entries, allocated from kmem_map instead of
kernel_map.  use this instead of the static map entries when allocating
map entries for kernel_map.  this greatly reduces the number of static
map entries used and should eliminate the problems with running out.
2001-09-09 19:38:22 +00:00
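
A sketch of the resulting allocation path, using the stock pool(9) calls (the pool name follows the commit's intent; surrounding logic is assumed):

    /* allocating a kernel_map entry from the kmem_map-backed pool */
    struct vm_map_entry *me;

    me = pool_get(&uvm_map_entry_kmem_pool, PR_NOWAIT);
    if (me == NULL)
            panic("uvm_mapent_alloc: out of map entries");

    /* and when the entry is freed: */
    pool_put(&uvm_map_entry_kmem_pool, me);
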
lukem 53156d96d0 let user know current value of MAX_KMAPENT in panic 2001-09-07 00:50:54 +00:00
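
Presumably something along these lines (the exact wording is a guess):

    panic("uvm_mapent_alloc: out of static map entries, "
        "check MAX_KMAPENT (currently %d)", MAX_KMAPENT);
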
wiz c52d355d71 "wierd" is weird. 2001-08-20 12:20:01 +00:00
chs e9fbc91f95 user maps are always pageable. 2001-08-16 01:37:50 +00:00
wiz a9356936b4 seperate -> separate 2001-07-22 13:33:58 +00:00
chs 821ec03ed9 replace vm_map{,_entry}_t with struct vm_map{,_entry} *. 2001-06-02 18:09:08 +00:00
chs 3845302904 remove trailing whitespace. 2001-05-25 04:06:11 +00:00
ross 892627dd05 Merge the swap-backed and object-backed inactive lists. 2001-05-22 00:44:44 +00:00
thorpej cda7baa0d5 Implement page coloring, using a round-robin bucket selection
algorithm (Solaris calls this "Bin Hopping").

This implementation currently relies on MD code to define a
constant specifying the number of buckets.  This will change
reasonably soon (MD code will be able to dynamically size
the bucket array).
2001-04-29 04:23:20 +00:00
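
A sketch of the round-robin ("bin hopping") selection; VM_NPAGECOLORBUCKETS stands in for the MD-defined constant and everything here is illustrative:

    /* pick the next free-page bucket, spreading consecutive allocations */
    static int nextbucket;

    int
    uvm_page_select_bucket(void)
    {
            int bucket = nextbucket;

            nextbucket = (nextbucket + 1) % VM_NPAGECOLORBUCKETS;
            return bucket;
    }
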
thorpej 1c3a62e066 Sprinkle pmap_update() calls after calls to:
- pmap_enter()
- pmap_remove()
- pmap_protect()
- pmap_kenter_pa()
- pmap_kremove()
as described in pmap(9).

These calls are relatively conservative.  It may be possible to
optimize these a little more.
2001-04-24 04:30:50 +00:00
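
The rule being applied, in sketch form (pmap_update() took no argument at this point; the pmap_enter() signature is approximated):

    error = pmap_enter(pmap, va, pa, VM_PROT_READ, PMAP_CANFAIL);
    pmap_update();  /* per pmap(9): complete any deferred pmap work */
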
chs ac3bc537bd eliminate the KERN_* error codes in favor of the traditional E* codes.
the mapping is:

KERN_SUCCESS			0
KERN_INVALID_ADDRESS		EFAULT
KERN_PROTECTION_FAILURE		EACCES
KERN_NO_SPACE			ENOMEM
KERN_INVALID_ARGUMENT		EINVAL
KERN_FAILURE			various, mostly turn into KASSERTs
KERN_RESOURCE_SHORTAGE		ENOMEM
KERN_NOT_RECEIVER		<unused>
KERN_NO_ACCESS			<unused>
KERN_PAGES_LOCKED		<unused>
2001-03-15 06:10:32 +00:00
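
A hypothetical helper that captures the table (the real change edits call sites; this is just the mapping in code form):

    int
    kern_to_errno(int kr)
    {
            switch (kr) {
            case KERN_SUCCESS:              return (0);
            case KERN_INVALID_ADDRESS:      return (EFAULT);
            case KERN_PROTECTION_FAILURE:   return (EACCES);
            case KERN_NO_SPACE:             return (ENOMEM);
            case KERN_INVALID_ARGUMENT:     return (EINVAL);
            case KERN_RESOURCE_SHORTAGE:    return (ENOMEM);
            default:                        return (EINVAL); /* KERN_FAILURE: mostly KASSERTs */
            }
    }
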
eeh 4589ac3292 When recycling a vm_map, resize it to the new process address space limits. 2001-02-11 01:34:23 +00:00
thorpej b016744976 Don't uvm_deallocate() the address space in exit1(). The address
space is already torn down in uvmspace_free() when the vmspace
reference count reaches 0.  Move the shmexit() call into uvmspace_free().

Note that there is a beneficial side-effect of deferring the unmap
to uvmspace_free() -- on systems where TLB invalidations are
particularly expensive, the unmapping of the address space won't
have to cause TLB invalidations; uvmspace_free() is going to be
run in a context other than the exiting process's, so the "pmap is
active" test will evaluate to FALSE in the pmap module.
2001-02-10 05:05:27 +00:00
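
A sketch of the deferred teardown (the shape is assumed; vm_refcnt and the map bounds macros are the era's real names):

    void
    uvmspace_free(struct vmspace *vm)
    {
            if (--vm->vm_refcnt > 0)
                    return;
            shmexit(vm);            /* moved here from exit1() */
            /*
             * Runs in some other context, so the pmap is not "active"
             * and expensive TLB invalidations can be skipped.
             */
            uvm_unmap(&vm->vm_map, vm_map_min(&vm->vm_map),
                vm_map_max(&vm->vm_map));
    }
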
eeh 4380259bc7 Specify a process' address space limits for uvmspace_exec(). 2001-02-06 17:01:51 +00:00
chs 4d5451090e in uvm_map_clean(), fix the case where the start offset is within the last
entry in the map.  the old code would walk around the end of the linked list,
through the header entry, and keep going from the first map entry until it
found a gap in the map, at which point it would return an error.  if the map
had no gaps then it would loop forever.  reported by k-abe@cs.utah.edu.
while I'm here, clean up this function a bit.

also, use MIN() instead of min(), since the latter takes arguments of
type "int" but we're passing it values of type "vaddr_t", which can be
a larger size.
2001-02-05 11:29:54 +00:00
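
A sketch of the corrected traversal (the guard against walking through the header is the point; details assumed):

    /* stop at the header sentinel instead of looping past it */
    for (entry = first; entry != &map->header && entry->start < end;
         entry = entry->next) {
            /* MIN(), not min(): these are vaddr_t, which may be wider than int */
            vsize_t len = MIN(end, entry->end) - entry->start;
            /* clean len bytes starting at entry->start */
    }
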
thorpej 1779f8f71b Page scanner improvements, behavior is actually a bit more like
Mach VM's now.  Specific changes:
- Pages now need not have all of their mappings removed before being
  put on the inactive list.  They only need to have the "referenced"
  attribute cleared.  This makes putting pages onto the inactive list
  much more efficient.  In order to eliminate redundant clearings of
  "refrenced", callers of uvm_pagedeactivate() must now do this
  themselves.
- When checking the "modified" attribute for a page (for clearing
  PG_CLEAN), make sure to only do it if PG_CLEAN is currently set on
  the page (saves a potentially expensive pmap operation).
- When scanning the inactive list, if a page is referenced, reactivate
  it (this part was actually added in uvm_pdaemon.c,v 1.27).  This
  works properly now that pages on the inactive list are allowed to
  have mappings.
- When scanning the inactive list and considering a page for freeing,
  remove all mappings, and then check the "modified" attribute if the
  page is marked PG_CLEAN.
- When scanning the active list, if the page was referenced since its
  last sweep by the scanner, don't deactivate it.  (This part was
  actually added in uvm_pdaemon.c,v 1.28.)

These changes greatly improve interactive performance during
moderate to high memory and I/O load.
2001-01-28 23:30:42 +00:00
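
The new protocol in sketch form (pmap_clear_reference() and pmap_is_modified() are the standard pmap(9) attribute hooks):

    /* deactivating: mappings may remain; just clear the reference bit */
    pmap_clear_reference(pg);
    uvm_pagedeactivate(pg);

    /* checking "modified": only worth a pmap op if PG_CLEAN is set */
    if ((pg->flags & PG_CLEAN) != 0 && pmap_is_modified(pg))
            pg->flags &= ~PG_CLEAN;
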
thorpej f4395a4eae splimp() -> splvm() 2001-01-14 02:10:01 +00:00
enami 4625dcde2e Use a single const char array instead of over 200 string constants. 2000-12-13 08:06:11 +00:00
chs aeda8d3b77 Initial integration of the Unified Buffer Cache project. 2000-11-27 08:39:39 +00:00
chs 2ed28d2c7a lots of cleanup:
use queue.h macros and KASSERT().
address amap offsets in pages instead of bytes.
make amap_ref() and amap_unref() take an amap, offset and length
  instead of a vm_map_entry_t.
improve whitespace and comments.
2000-11-25 06:27:59 +00:00
thorpej 0a2fa5320b Back out rev. 1.83 -- it's causing problems with some pmap
implementations, so we'll have to spend a little more time
working on the problem.
2000-10-16 23:17:54 +00:00
thorpej 76589fafd4 - uvmspace_share(): If p2 has a vmspace already, make sure to deactivate
it and free it as appropriate.  Activate p2's new address space once
  it references p1's.
- uvm_fork(): Make sure the child's vmspace is NULL before calling
  uvmspace_share() (the child doesn't have one already in this case).

These changes do not change the behavior for the current use of
uvmspace_share() (vfork(2)), but make it possible for an already
running process (such as a kernel thread) to properly attach to
another process's address space.
2000-10-11 17:27:58 +00:00
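
A sketch of the generalized path (a guess at the shape; pmap_activate()/pmap_deactivate() took a struct proc * in this era):

    void
    uvmspace_share(struct proc *p1, struct proc *p2)
    {
            if (p2->p_vmspace != NULL) {
                    pmap_deactivate(p2);
                    uvmspace_free(p2->p_vmspace);
            }
            p2->p_vmspace = p1->p_vmspace;
            p1->p_vmspace->vm_refcnt++;
            pmap_activate(p2);      /* now using p1's address space */
    }
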
thorpej 47a2016cdc - Change SAVE_HINT() to take a "check" value. This value is compared
to the contents of the hint in the map, and the hint saved in the
  map only if the two values match.  When an unconditional save is
  required, the "check" value passed should be map->hint (and the
  compiler will optimize the test away).  When deleting a map entry,
  the new SAVE_HINT() will only change the hint if the entry being
  deleted was the hint value (thus preserving any meaningful hint
  that may have been there previously, rather than stomping on it).
- Add a missing hint update when deleting the map entry in
  uvm_map_entry_unlink().  This is the fix for kern/11125, from
  ITOH Yasufumi <itohy@netbsd.org>.
2000-10-11 17:21:11 +00:00
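
A sketch of the conditional form (hint locking omitted):

    /* update the hint only if it still holds the expected value */
    #define SAVE_HINT(map, check, value)                            \
    do {                                                            \
            if ((map)->hint == (check))                             \
                    (map)->hint = (value);                          \
    } while (/*CONSTCOND*/ 0)

    /* unconditional save: the check trivially matches */
    SAVE_HINT(map, map->hint, entry);

    /* on delete: only clobber the hint if it pointed at the victim */
    SAVE_HINT(map, dead_entry, dead_entry->prev);
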
thorpej 72a24b4eae Add an align argument to uvm_map() and some callers of that
routine.  Works similarly to pmap_prefer(), but allows callers
to specify a minimum power-of-two alignment of the region.
How we ever got along without this for so long is beyond me.
2000-09-13 15:00:15 +00:00
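
A sketch of a caller passing the new argument (the 4 MB alignment and the flags are illustrative; uvm_map() still returned KERN_* codes at this point):

    vaddr_t va = vm_map_min(kernel_map);
    int error;

    error = uvm_map(kernel_map, &va, size, NULL, UVM_UNKNOWN_OFFSET,
        4 * 1024 * 1024,            /* minimum power-of-two alignment */
        UVM_MAPFLAG(UVM_PROT_RW, UVM_PROT_RW, UVM_INH_NONE,
            UVM_ADV_NORMAL, 0));
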
wiz be8ff811b7 Rename VM_INHERIT_* to MAP_INHERIT_* and move them to sys/sys/mman.h as
discussed on tech-kern.
Retire sys/uvm/uvm_inherit.h, update man page for minherit(2).
2000-08-01 00:53:07 +00:00
mrg dea44a9ec4 remove include of <vm/vm.h> 2000-06-27 17:29:17 +00:00
mrg 2f159a1bac remove/move more mach vm header files:
<vm/pglist.h> -> <uvm/uvm_pglist.h>
	<vm/vm_inherit.h> -> <uvm/uvm_inherit.h>
	<vm/vm_kern.h> -> into <uvm/uvm_extern.h>
	<vm/vm_object.h> -> nothing
	<vm/vm_pager.h> -> into <uvm/uvm_pager.h>

also includes a bunch of <vm/vm_page.h> include removals (due to redundancy
with <vm/vm.h>), and a scattering of other similar headers.
2000-06-26 14:20:25 +00:00
chs e72214422a initialize aref.ar_pageoff even if there's no amap. 2000-06-13 04:10:47 +00:00
pk 36a1354bc6 Change previous to use `vm_map_min(dstmap)' instead of hard-coding
VM_MIN_KERNEL_ADDRESS.
2000-06-05 07:28:56 +00:00
pk 51ff5f7cd1 Let uvm_map_extract() set the lower bound on the kernel address range
itself, instead of having its callers do that. 2000-06-02 12:02:43 +00:00
2000-06-02 12:02:43 +00:00
thorpej 646555bbd5 Clean up some indentation lossage in uvm_map_extract(). 2000-05-19 17:43:55 +00:00
thorpej 9ec517a68e Changes necessary to implement pre-zero'ing of pages in the idle loop:
- Make page free lists have two actual queues: known-zero pages and
  pages with unknown contents.
- Implement uvm_pageidlezero().  This function attempts to zero up to
  the target number of pages until the target has been reached (currently
  target is `all free pages') or until whichqs becomes non-zero (indicating
  that a process is ready to run).
- Define a new hook for the pmap module for pre-zero'ing pages.  This is
  used to zero the pages using uncached access.  This allows us to zero
  as many pages as we want without polluting the cache.

In order to use this feature, each platform must add the appropriate
glue in their idle loop.
2000-04-24 17:12:00 +00:00
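
The per-platform glue is small; a sketch of the MD idle-loop hook (framing assumed; uvm_pageidlezero() itself re-checks whichqs):

    /* zero known-dirty free pages while nothing is runnable */
    if (whichqs == 0)
            uvm_pageidlezero();
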
chs d444bb4032 undo rev 1.13, which is to say, don't block interrupts while deactivating
one pmap and activating another.  this isn't actually necessary (since
pmap_activate() and pmap_deactivate() affect only user-level mappings,
which cannot be accessed from interrupts anyway), and pmap_activate()
is very slow on old sun4c sparcs so we can't block interrupts for this long.
this fixes PR 8322.
2000-04-16 20:52:29 +00:00
chs 66014d2dff sparc -> __sparc__
print lock status in uvm_object_printit().
2000-04-10 02:21:26 +00:00
kleink 6e5b64c8a0 Merge parts of chs-ubc2 into the trunk:
Add a new type voff_t (defined as a synonym for off_t) to describe offsets
into uvm objects, and update the appropriate interfaces to use it, the
most visible effect being the ability to mmap() file offsets beyond
the range of a vaddr_t.

Originally by Chuck Silvers; blame me for problems caused by merging this
into non-UBC.
2000-03-26 20:54:45 +00:00
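
The core of it is a one-line type; the prototype below is one example of the kind of interface that widens:

    typedef off_t voff_t;           /* offset into a uvm object */

    /* object offsets move from vaddr_t to voff_t, e.g.: */
    struct vm_page *uvm_pagelookup(struct uvm_object *, voff_t);
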
chs f3a668ed84 eliminate the PMAP_NEW option by making it required for all ports.
ports which previously had no support for PMAP_NEW now implement
the pmap_k* interfaces as wrappers around the non-k versions.
1999-09-12 01:16:55 +00:00
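
For a port without native PMAP_NEW, the wrappers are roughly as follows (the pre-flags pmap_enter() signature of the era is approximated):

    void
    pmap_kenter_pa(vaddr_t va, paddr_t pa, vm_prot_t prot)
    {
            /* kernel mappings are wired; access type == prot */
            pmap_enter(pmap_kernel(), va, pa, prot, TRUE, prot);
    }

    void
    pmap_kremove(vaddr_t va, vsize_t len)
    {
            pmap_remove(pmap_kernel(), va, va + len);
    }
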
thorpej 23e83a7ac7 When handling the MADV_FREE case, if the amap or aobj has more than
one reference, go through the deactivate path; the page may actually
be in use by another process.

Fixes kern/8239.
1999-08-21 02:19:05 +00:00
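
In sketch form (the fields are the era's amap/object reference counts; placement is assumed):

    /* only free outright if we hold the sole reference */
    if ((amap != NULL && amap->am_ref > 1) ||
        (uobj != NULL && uobj->uo_refs > 1))
            flags = PGO_DEACTIVATE; /* another process may be using the pages */
    else
            flags = PGO_FREE;
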
thorpej 050aaac26e Fix the error recovery in uvm_map_pageable_all(). 1999-08-03 00:38:33 +00:00
thorpej 5310e69363 Fix PR #8023 from Bernd Ernesti: when MADV_FREE'ing a region which spanned
more than one VM map entry, a typo caused amap_unadd() to attempt to
remove anons from the wrong amap.  Fix that typo.
1999-07-19 17:45:23 +00:00
thorpej 5ee6f3960d Rework uvm_map_protect():
- Fix some locking bugs; a couple of places would return an error condition
  without unlocking the map.
- Deal with maps marked WIREFUTURE; if making an entry VM_PROT_NONE ->
  anything else, and it is not already marked as wired, wire it.
1999-07-18 00:41:56 +00:00
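
A sketch of the new wiring rule (the condition is approximated):

    /* honor mlockall(MCL_FUTURE): wire regions that become accessible */
    if ((map->flags & VM_MAP_WIREFUTURE) != 0 &&
        old_prot == VM_PROT_NONE && new_prot != VM_PROT_NONE &&
        entry->wired_count == 0)
            uvm_map_pageable(map, entry->start, entry->end,
                FALSE, UVM_LK_ENTER);
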
thorpej b6f435026c Add a set of "lockflags", which can control the locking behavior
of some functions.  Use these flags in uvm_map_pageable() to determine
if the map is locked on entry (replaces an already present boolean_t
argument `islocked'), and if the function should return with the map
still locked.
1999-07-17 21:35:49 +00:00
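
The flags in sketch form (the names match UVM's eventual UVM_LK_* lockflags):

    #define UVM_LK_ENTER    0x00000001      /* map is locked on entry */
    #define UVM_LK_EXIT     0x00000002      /* leave map locked on exit */

    /* replaces the boolean_t `islocked' argument: */
    error = uvm_map_pageable(map, start, end, FALSE, UVM_LK_ENTER);
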
thorpej 4ef1f3670d Fix a thinko which could cause a NULL pointer deref, in the PGO_FREE
case.
1999-07-07 21:51:35 +00:00
thorpej 62dcdc109b In the PGO_FREE case of uvm_map_clean()'s amap cleaning, skip wired
pages.

XXX This should be handled better in the future, probably by marking the
XXX page as released, and making uvm_pageunwire() free the page when
XXX the wire count on a released page reaches zero.
1999-07-07 21:04:22 +00:00
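
In sketch form, inside the amap-cleaning loop:

    if (pg->wire_count != 0)
            continue;       /* XXX skip wired pages; see above */
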
thorpej 4e398a6ded Add some more meat to madvise(2):
* Implement MADV_DONTNEED: deactivate pages in the specified range,
  semantics similar to Solaris's MADV_DONTNEED.
* Add MADV_FREE: free pages and swap resources associated with the
  specified range, causing the range to be reloaded from backing
  store (vnodes) or zero-fill (anonymous), semantics like FreeBSD's
  MADV_FREE and like Digital UNIX's MADV_DONTNEED (isn't it SO GREAT
  that madvise(2) isn't standardized!?)

As part of this, move the non-map-modifying advice handling out of
uvm_map_advise(), and into sys_madvise().

As another part, implement general amap cleaning in uvm_map_clean(), and
change uvm_map_clean() to only push dirty pages to disk if PGO_CLEANIT
is set in its flags (and update sys___msync13() accordingly).  XXX Add
a patchable global "amap_clean_works", defaulting to 1, which can disable
the amap cleaning code, just in case problems are unearthed; this gives
a developer/user a quick way to recover and send a bug report (e.g. boot
into DDB and change the value).

XXX Still need to implement a real uao_flush().

XXX Need to update the manual page.

With these changes, rebuilding libc will automatically cause the new
malloc(3) to use MADV_FREE to actually release pages and swap resources
when it decides that can be done.
1999-07-07 06:02:21 +00:00
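
A sketch of the resulting dispatch in sys_madvise() (error handling trimmed; the uvm_map_clean() flags follow the description above):

    switch (advice) {
    case MADV_DONTNEED:
            /* deactivate: pages stay resident, just eligible for reclaim */
            error = uvm_map_clean(map, start, end, PGO_DEACTIVATE);
            break;
    case MADV_FREE:
            /* drop pages and swap; the range reloads from backing store */
            error = uvm_map_clean(map, start, end, PGO_FREE);
            break;
    default:
            /* map-modifying advice still goes through uvm_map_advise() */
            error = uvm_map_advise(map, start, end, advice);
            break;
    }
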
thorpej 11c67d01a5 Fix a corner case locking error, which could lead to map corruption in
SMP environments.  See comments in <vm/vm_map.h> for details.
1999-07-01 20:07:05 +00:00
thorpej 9e9f068f43 Add the guts of mlockall(MCL_FUTURE).  This requires passing a process's
"memlock" resource limit to uvm_mmap().  Update all calls accordingly.
1999-06-18 05:13:45 +00:00
thorpej f274deb90a The i386 and pc532 pmaps are officially fixed. 1999-06-17 00:24:10 +00:00
thorpej b861180119 * Rename uvm_fault_unwire() to uvm_fault_unwire_locked(), and require that
the map be at least read-locked to call this function.  This requirement
  will be taken advantage of in a future commit.
* Write a uvm_fault_unwire() wrapper which read-locks the map and calls
  uvm_fault_unwire_locked().
* Update the comments describing the locking constraints of uvm_fault_wire()
  and uvm_fault_unwire().
1999-06-16 22:11:23 +00:00
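
The wrapper amounts to this sketch (matching the description; vm_map_t was still the spelling of the era):

    void
    uvm_fault_unwire(vm_map_t map, vaddr_t start, vaddr_t end)
    {
            vm_map_lock_read(map);
            uvm_fault_unwire_locked(map, start, end);
            vm_map_unlock_read(map);
    }
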
thorpej 42c671ffba Modify uvm_map_pageable() and uvm_map_pageable_all() to follow POSIX 1003.1b
semantics.  That is, regardless of the number of mlock/mlockall calls,
an munlock/munlockall actually unlocks the region (i.e. sets wiring count
to 0).

Add a comment describing why uvm_map_pageable() should not be used for
transient page wirings (e.g. for physio) -- note, it's currently only
(ab)used in this way by a few pieces of code which are known to be
broken, i.e. the Amiga and Atari pmaps, and i386 and pc532 if PMAP_NEW is
not used.  The i386 GDT code uses uvm_map_pageable(), but in a safe
way, and could be trivially converted to use uvm_fault_wire() instead.
1999-06-16 19:34:24 +00:00
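
The POSIX behavior in sketch form (the loop context is assumed):

    /* one munlock() undoes any number of mlock()s */
    if (new_pageable && VM_MAPENT_ISWIRED(entry)) {
            uvm_fault_unwire(map, entry->start, entry->end);
            entry->wired_count = 0;         /* not wired_count-- */
    }
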