semantics. That is, regardless of the number of mlock/mlockall calls,
an munlock/munlockall actually unlocks the region (i.e. sets wiring count
to 0).
Add a comment describing why uvm_map_pageable() should not be used for
transient page wirings (e.g. for physio) -- note, it's currently only
(ab)used in this way by a few pieces of code which are known to be
broken, i.e. the Amiga and Atari pmaps, and i386 and pc532 if PMAP_NEW is
not used. The i386 GDT code uses uvm_map_pageable(), but in a safe
way, and could be trivially converted to use uvm_fault_wire() instead.
* Provide POSIX 1003.1b mlockall(2) and munlockall(2) system calls.
MCL_CURRENT is presently implemented. MCL_FUTURE is not fully
implemented. Also, the same one-unlock-for-every-lock caveat
currently applies here as it does to mlock(2). This will be
addressed in a future commit.
* Provide the mincore(2) system call, with the same semantics as
Solaris.
* Clean up the error recovery in uvm_map_pageable().
* Fix a bug where a process would hang if attempting to mlock a
zero-fill region where none of the pages in that region are resident.
[ This fix has been submitted for inclusion in 1.4.1 ]
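A minimal userland sketch (not part of the change itself) illustrating the last item above and the one-munlock-unlocks-all semantics: an anonymous zero-fill region is locked before any of its pages are resident, locked a second time, and then fully unlocked by a single munlock(2).

    #include <sys/mman.h>
    #include <err.h>
    #include <unistd.h>

    int
    main(void)
    {
            size_t len = 4 * getpagesize();
            void *p;

            /* Anonymous zero-fill region; no pages are resident yet. */
            p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                MAP_ANON | MAP_PRIVATE, -1, 0);
            if (p == MAP_FAILED)
                    err(1, "mmap");

            /* This used to hang when none of the pages were resident. */
            if (mlock(p, len) == -1)
                    err(1, "mlock");
            if (mlock(p, len) == -1)        /* lock the region again */
                    err(1, "mlock (second)");

            /* One munlock unlocks the region, per POSIX 1003.1b. */
            if (munlock(p, len) == -1)
                    err(1, "munlock");

            munmap(p, len);
            return 0;
    }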
looking up a kernel address, check to see if the address is on this
"interrupt-safe" list. If so, return failure immediately. This prevents
a locking screw if a page fault is taken on an interrupt-safe map in or
out of interrupt context.
setting recursive has no effect! The kernel lock manager doesn't allow
an exclusive recursion into a shared lock. This situation must simply
be avoided. The only place where this might be a problem is the (ab)use
of uvm_map_pageable() in the Utah-derived pmaps for m68k (they should
either toss the iffy scheme they use completely, or use something like
uvm_fault_wire()).
In addition, once we have looped over uvm_fault_wire(), only upgrade to
an exclusive (write) lock if we need to modify the map again (i.e.
wiring a page failed).
don't unlock a kernel map (!!!) and then relock it later; a recursive lock,
as is used in the user map case, is fine. Also, don't change map entries
while only holding a read lock on the map. Instead, if we fail to wire
a page, clear recursive locking, and upgrade back to a write lock before
dropping the wiring count on the remaining map entries.
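A rough sketch of that flow (signatures approximate, bookkeeping omitted), assuming the read/write map-lock helpers: wire under the read lock, and take the write lock again only if a wiring fails and the entries must be modified to back the change out.

    int rv = 0;
    struct vm_map_entry *entry;

    vm_map_downgrade(map);                  /* write lock -> read lock */
    for (entry = start_entry; entry != &map->header &&
        entry->start < end; entry = entry->next) {
            rv = uvm_fault_wire(map, entry->start, entry->end,
                entry->protection);
            if (rv)
                    break;
    }
    if (rv) {
            /* need the write lock again to undo the wiring counts */
            vm_map_upgrade(map);            /* read lock -> write lock */
            /* ... decrement wiring counts on entries already wired ... */
    }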
locks (and thus, never shared locks). Move the "set/clear recursive"
functions to uvm_map.c, which is the only place they're used (and
they should go away anyhow). Delete some unused cruft.
right access_type to pass to uvm_fault_wire(). This way, if the entry has
VM_PROT_WRITE, and the entry is marked COW, the copy will happen immediately
in uvm_fault(), as if the access were performed.
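In other words (a sketch, signatures approximate), the wiring loop passes the entry's protection as the access type:

    /*
     * Sketch: by claiming the access up front, a writable
     * copy-on-write entry is copied immediately inside uvm_fault()
     * rather than on the first later write.
     */
    vm_prot_t access_type = entry->protection;

    rv = uvm_fault_wire(map, entry->start, entry->end, access_type);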
access_type to pmap_enter() to ensure that when these mappings are accessed,
possibly in interrupt context, they won't cause mod/ref emulation
page faults.
has PAGEABLE and INTRSAFE flags. PAGEABLE now really means "pageable",
not "allocate vm_map_entry's from non-static pool", so update all map
creations to reflect that. INTRSAFE maps are maps that are used in
interrupt context (e.g. kmem_map, mb_map), and thus use the static
map entry pool (XXX as does kernel_map, for now). This will eventually
change, as will how these maps are locked.
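The split might be expressed roughly like this (names and values illustrative; see <uvm/uvm_map.h> for the real definitions):

    /* sketch of the read-only flags word that replaces "entries_pageable" */
    #define VM_MAP_PAGEABLE   0x01    /* entries may be paged out */
    #define VM_MAP_INTRSAFE   0x02    /* map may be used in interrupt context */

    /* e.g. interrupt-safe maps must take map entries from the static pool */
    if (map->flags & VM_MAP_INTRSAFE) {
            /* allocate the vm_map_entry from the static pool */
    } else {
            /* allocate from the normal, dynamic pool */
    }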
ensure we don't take mod/ref emulation faults in an interrupt context
(e.g. during the i/o operation). This is safe because:
- For a pageout operation, the page is already known to be
modified, and the pagedaemon will pmap_clear_modify() after
the pageout has completed.
- For a pagein operation, pagers must already pmap_clear_modify()
after the pagein operation is complete, because the i/o may have
been done with e.g. programmed i/o.
XXX It would be nice to know the i/o direction so that we can call
XXX pmap_enter() with only the protection and access_type necessary.
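A sketch of the pager-side call, assuming the pmap_enter() variant that takes a wired flag and a trailing access_type; since the i/o direction is not known here (the XXX above), both read and write are claimed:

    /*
     * Sketch: map the page into the pager's kernel VA with access_type
     * equal to the full protection, so no mod/ref emulation fault can
     * be taken while the i/o is in progress.
     */
    pmap_enter(vm_map_pmap(pager_map), kva, VM_PAGE_TO_PHYS(pg),
        VM_PROT_READ | VM_PROT_WRITE, TRUE /* wired */,
        VM_PROT_READ | VM_PROT_WRITE /* access_type */);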
to uvm_fault_wire(), to guarantee that the kernel stacks will not
cause even a mod/ref emulation fault.
- uvm_vslock(): pass VM_PROT_NONE until this function is updated.
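For instance (a sketch; "up" stands for the new process's u-area), wiring a kernel stack would now claim the full access up front:

    /*
     * Sketch: claim read+write so that not even a mod/ref emulation
     * fault can be taken on the kernel stack later on.
     */
    uvm_fault_wire(kernel_map, (vaddr_t)up, (vaddr_t)up + USPACE,
        VM_PROT_READ | VM_PROT_WRITE);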
which can be used in an interrupt context. Use pmap_kenter*() and
pmap_kremove() only for mappings owned by these objects.
Fixes some locking protocol issues related to MP support, and eliminates
all of the pmap_enter vs. pmap_kremove inconsistencies.
are still owned by the object which is paging, and so the test for a kernel
object in uvm_unmap_remove() will cause pmap_remove() to be used instead
of pmap_kremove().
This was a MAJOR source of pmap_remove() vs pmap_kremove() inconsistency
(which caused the busted kernel pmap statistics, and a cause of much
locking hair on MP systems).
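The distinction boils down to something like the following sketch (is_kernel_object() is a stand-in for the actual test in uvm_unmap_remove()):

    /*
     * Sketch: mappings owned by a kernel object were entered with
     * pmap_kenter*(), so they must come out via pmap_kremove();
     * everything else uses the ordinary pmap_remove() path.
     */
    if (UVM_ET_ISOBJ(entry) &&
        is_kernel_object(entry->object.uvm_obj))    /* stand-in check */
            pmap_kremove(entry->start, entry->end - entry->start);
    else
            pmap_remove(map->pmap, entry->start, entry->end);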
level directly, instead of making the caller wrap the calls in
splimp()/splx().
- Add a comment documenting that interrupts that cause memory allocation
must be blocked while the free page queue is locked.
Since interrupts must be blocked while this lock is asserted, tying them
together like this helps to prevent mistakes.
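Callers then look something like this (a sketch; the uvm_lock_fpageq()/uvm_unlock_fpageq() spelling is assumed):

    int s;

    /* the lock routine raises the interrupt priority level itself */
    s = uvm_lock_fpageq();
    /* ... take pages from / return pages to the free queues ... */
    uvm_unlock_fpageq(s);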
end of the mappable kernel virtual address space. Previously, it would
get called more often than necessary, because the caller only knew what
was requested.
Also, export uvm_maxkaddr so that uvm_pageboot_alloc() can grow the
kernel pmap if necessary, as well. Note that pmap_growkernel() must
now be able to handle being called before pmap_init().
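Roughly (a sketch, with a made-up new_end), a kernel allocation path now does:

    /*
     * Sketch: grow the kernel pmap only when the allocation would run
     * past the current end of mappable kernel virtual address space.
     */
    if (new_end > uvm_maxkaddr)
            uvm_maxkaddr = pmap_growkernel(new_end);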
releasing any swap resources. if we don't do this, we can
end up with a clean, swap-backed page, which is illegal.
tracked down by Bill Sommerfeld, fixes PR 7578.
the child inherits the stack pointer from the parent (traditional
behavior). Like the signal stack, the stack area is specified as
a low address and a size; machine-dependent code accounts for stack
direction.
This is required for clone(2).
uvmspace_fork().
pmap_fork() is used to "fork a pmap", that is copy data from one pmap
to the other that is NOT related to actual mappings in the pmap, but is
otherwise logically coupled to the address space.
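The fork path then gains a hook along these lines (a sketch; the PMAP_FORK guard is assumed here so that pmaps without such state need not provide the function):

    #ifdef PMAP_FORK
            /* copy pmap state that is not tied to actual mappings */
            pmap_fork(vm1->vm_map.pmap, vm2->vm_map.pmap);
    #endif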
define a flag UVM_PGA_USERESERVE to allow non-kernel object
allocations to use pages from the reserve.
use the new flag for allocations in pmap modules.
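For example (a sketch, assuming the four-argument uvm_pagealloc()), a pmap allocating a page-table page could do:

    struct vm_page *pg;

    /* may dip into the reserve pool, but can still fail */
    pg = uvm_pagealloc(NULL, 0, NULL, UVM_PGA_USERESERVE);
    if (pg == NULL)
            panic("pmap: no pages for page table");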
memory access a mapping was caused by. This is passed through from uvm_fault()
and udv_fault(), and in most other cases is 0.
The pmap module may use this to preset R/M information. On MMUs which require
R/M emulation, the implementation may preset the bits and avoid taking another
fault. On MMUs which keep R/M information in hardware, the implementation may
preset its cached bits to speed up the next call to pmap_is_modified() or
pmap_is_referenced().
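Inside a pmap, the new argument can be applied roughly like this (PTE_REF, PTE_MOD and npte are hypothetical names for this sketch):

    /*
     * Sketch: preset the referenced/modified state according to the
     * access that caused the mapping, so no emulation fault is taken
     * later just to record that information.
     */
    if (access_type & VM_PROT_WRITE)
            npte |= PTE_REF | PTE_MOD;      /* will be read and written */
    else if (access_type & VM_PROT_ALL)
            npte |= PTE_REF;                /* will at least be referenced */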
numerous pagedaemon improvements were needed to make this useful:
- don't bother waking up procs waiting for memory if there's none to be had.
- start 4 times as many pageouts as we need free pages.
this should reduce latency in low-memory situations.
- in inactive scanning, if we find dirty swap-backed pages when swap space
is full of non-resident pages, reactivate some number of these to flush
less active pages to the inactive queue so we can consider paging them out.
this replaces the previous scheme of inactivating pages beyond the
inactive target when we failed to free anything during inactive scanning.
- during both active and inactive scanning, free any swap resources from
dirty swap-backed pages if swap space is full. this allows other pages
to be paged out into that swap space.
and system call now just return EFAULT). A complete fix will
presumably have to wait for UBC and/or for vnode locking protocols to
be revamped to allow use of shared locks.
offset and size of the requested region to be mapped, so that
udv_attach() can use the device d_mmap() entry to check mappability
of the requested region.
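In outline (a sketch; mapfn, device and accessprot are stand-in names), the check walks the range and asks d_mmap() about each page:

    /*
     * Sketch: reject the mapping if any page of the requested range is
     * not mappable according to the device's d_mmap() entry point,
     * which historically returns -1 for an unmappable offset.
     */
    off_t curoff;

    for (curoff = off; curoff < off + size; curoff += PAGE_SIZE) {
            if ((*mapfn)(device, curoff, accessprot) == -1)
                    return (NULL);          /* not mappable */
    }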
- break anon related functions out of uvm_amap.c and put them in their own
file (uvm_anon.c). this includes breaking uvm_anon_init up into an amap
init function and an anon init function
- ensure that only functions within the amap module access amap structure
fields (add macros to amap api as needed)
thing would be to allocate the block, but I don't know how to do this.
The panic is preferable to the random memory corruption the old code
was causing.
- replace map checks with submap checks
- get rid of unused 'mainonly' arg in uvm_unmap/uvm_unmap_remove, simplify
code. update all calls to reflect this.
- don't worry about unmapping or changing the protection of shared share
map mappings (is_main_map no longer used).
- remove unused uvm_map_sharemapcopy() function from fork code.
- simplify uvm_faultinfo in uvm_fault.h (parent map tracking no longer needed)
- adjust locking and lookup functions in uvm_fault_i.h to reflect the above
- replace ufi.rvaddr with ufi.orig_rvaddr in uvm_fault.c since rvaddr is
no longer needed.
- no need to worry about share map translations in uvm_fault(). simplify.