are still owned by the object which is paging, and so the test for a kernel
object in uvm_unmap_remove() will cause pmap_remove() to be used instead
of pmap_kremove().
This was a MAJOR source of pmap_remove() vs pmap_kremove() inconsistency
(which caused the busted kernel pmap statistics, and was a cause of much
locking hair on MP systems).
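
A rough sketch of the dispatch being described, assuming the kernel-object
test in uvm_unmap_remove() of this era (the UVM_ET_ISOBJ/UVM_OBJ_IS_KERN_OBJECT
names are illustrative, not a quote of the actual code):

    /*
     * kernel objects get their mappings entered with pmap_kenter_pa(),
     * so only they may be torn down with pmap_kremove()
     */
    if (UVM_ET_ISOBJ(entry) &&
        UVM_OBJ_IS_KERN_OBJECT(entry->object.uvm_obj)) {
            pmap_kremove(entry->start, len);
    } else {
            /* pages still owned by a paging object: full pmap_remove() */
            pmap_remove(map->pmap, entry->start, entry->end);
    }
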
level directly, instead of making the caller wrap the calls in
splimp()/splx().
- Add a comment documenting that interrupts that cause memory allocation
must be blocked while the free page queue is locked.
Since interrupts must be blocked while this lock is asserted, tying them
together like this helps to prevent mistakes.
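
A minimal sketch of the calling-convention change described above, assuming
uvm_lock_fpageq() now returns the previous spl and uvm_unlock_fpageq() takes
it back:

    int s;

    /* old: the caller had to wrap the lock in splimp()/splx() itself */
    s = splimp();
    uvm_lock_fpageq();
    /* ... touch the free page queues ... */
    uvm_unlock_fpageq();
    splx(s);

    /* new: the lock routines handle the interrupt level directly */
    s = uvm_lock_fpageq();
    /* ... touch the free page queues ... */
    uvm_unlock_fpageq(s);
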
end of the mappable kernel virtual address space. Previously, it would
get called more often than necessary, because the caller only knew what
was requested.
Also, export uvm_maxkaddr so that uvm_pageboot_alloc() can grow the
kernel pmap if necessary, as well. Note that pmap_growkernel() must
now be able to handle being called before pmap_init().
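
A sketch of the intended call pattern, assuming the uvm_maxkaddr /
pmap_growkernel() interface described above (the panic message is
illustrative):

    /*
     * grow the kernel page tables only when an allocation actually
     * crosses the current end of mapped kernel virtual address space
     */
    if (kva + size > uvm_maxkaddr) {
            uvm_maxkaddr = pmap_growkernel(kva + size);
            if (uvm_maxkaddr < kva + size)
                    panic("pmap_growkernel: couldn't grow kernel pmap");
    }
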
releasing any swap resources. if we don't do this, we can
end up with a clean, swap-backed page, which is illegal.
tracked down by Bill Sommerfeld, fixes PR 7578.
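
A heavily hedged sketch of the fix's shape: drop the page's swap slot
before freeing it, so a clean page can never still reference swap. The
use of uao_dropswap() here is an assumption, not a quote of the actual
change:

    /* assumed: release the aobj page's swap slot, then free the page */
    uao_dropswap(uobj, pg->offset >> PAGE_SHIFT);
    uvm_lock_pageq();
    uvm_pagefree(pg);
    uvm_unlock_pageq();
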
the child inherits the stack pointer from the parent (traditional
behavior). Like the signal stack, the stack area is specified as
a low address and a size; machine-dependent code accounts for stack
direction.
This is required for clone(2).
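
A sketch of how machine-dependent code might account for stack direction
given the (low address, size) pair; the trapframe field name is
illustrative and the example assumes a stack that grows down:

    /* child inherits the parent's SP unless a stack was supplied */
    if (stack != NULL)
            tf->tf_sp = (vaddr_t)stack + stacksize; /* grows down: SP = top */
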
uvmspace_fork().
pmap_fork() is used to "fork a pmap", that is copy data from one pmap
to the other that is NOT related to actual mappings in the pmap, but is
otherwise logically coupled to the address space.
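
A sketch of where such a hook could plug into uvmspace_fork(), assuming it
is conditional on the port defining PMAP_FORK (the guard and call shown
here are illustrative):

    #ifdef PMAP_FORK
            /* copy non-mapping, address-space-coupled pmap state */
            pmap_fork(vm1->vm_map.pmap, vm2->vm_map.pmap);
    #endif
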
define a flag UVM_PGA_USERESERVE to allow non-kernel object
allocations to use pages from the reserve.
use the new flag for allocations in pmap modules.
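
A sketch of the intended use in a pmap module, assuming the usual
uvm_pagealloc() interface (object, offset, anon, flags):

    /*
     * allocating a page-table page: may dip into the reserve rather
     * than deadlock waiting for the pagedaemon to free memory
     */
    struct vm_page *pg;

    pg = uvm_pagealloc(NULL, 0, NULL, UVM_PGA_USERESERVE);
    if (pg == NULL)
            return (NULL);  /* caller decides whether to wait and retry */
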
memory access a mapping was caused by. This is passed through from uvm_fault()
and udv_fault(), and in most other cases is 0.
The pmap module may use this to preset R/M information. On MMUs which require
R/M emulation, the implementation may preset the bits and avoid taking another
fault. On MMUs which keep R/M information in hardware, the implementation may
preset its cached bits to speed up the next call to pmap_is_modified() or
pmap_is_referenced().
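
A sketch of a write fault passing the access type through, assuming the
six-argument pmap_enter() of this era (pmap, va, pa, prot, wired,
access_type):

    /*
     * a write fault: the pmap may preset the referenced and modified
     * bits instead of taking a second (R/M emulation) fault
     */
    pmap_enter(map->pmap, va, VM_PAGE_TO_PHYS(pg),
        VM_PROT_READ | VM_PROT_WRITE,       /* prot */
        FALSE,                              /* wired */
        VM_PROT_WRITE);                     /* access_type */
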
numerous pagedaemon improvements were needed to make this useful:
- don't bother waking up procs waiting for memory if there's none to be had.
- start 4 times as many pageouts as we need free pages.
this should reduce latency in low-memory situations.
- in inactive scanning, if we find dirty swap-backed pages when swap space
is full of non-resident pages, reactivate some number of these to flush
less active pages to the inactive queue so we can consider paging them out.
this replaces the previous scheme of inactivating pages beyond the
inactive target when we failed to free anything during inactive scanning.
- during both active and inactive scanning, free any swap resources from
dirty swap-backed pages if swap space is full. this allows other pages
to be paged out into that swap space.
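
A sketch of the last item's "swap is full" handling during scanning; the
uvmexp counters, page flags, and uao_dropswap()/uvm_swap_free() calls are
assumed from the UVM sources of this era and are meant only as
illustration:

    boolean_t swap_full = (uvmexp.swpginuse == uvmexp.swpages);

    if (swap_full && (pg->flags & PG_CLEAN) == 0 &&
        (pg->pqflags & PQ_SWAPBACKED)) {
            /* free the stale swap slot so another page can use it */
            if (pg->pqflags & PQ_ANON) {
                    uvm_swap_free(pg->uanon->an_swslot, 1);
                    pg->uanon->an_swslot = 0;
            } else {
                    uao_dropswap(pg->uobject, pg->offset >> PAGE_SHIFT);
            }
    }
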
and system call now just return EFAULT). A complete fix will
presumably have to wait for UBC and/or for vnode locking protocols to
be revamped to allow use of shared locks.
offset and size of the requested region to be mapped, so that
udv_attach() can use the device's d_mmap() entry to check mappability
of the requested region.
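
A sketch of the mappability check this enables in udv_attach(), assuming a
d_mmap-style probe that returns -1 for an unmappable offset:

    /* probe every page of the requested range up front */
    for (off = startoff; off < startoff + size; off += PAGE_SIZE)
            if ((*mapfn)(device, off, accessprot) == -1)
                    return (NULL);
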
- break anon related functions out of uvm_amap.c and put them in their own
file (uvm_anon.c). this includes breaking uvm_anon_init() up into
separate amap and anon init functions
- ensure that only functions within the amap module access amap structure
fields (add macros to amap api as needed)
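
A sketch of the kind of accessor macros meant by the second item, so that
other modules never touch struct vm_amap fields directly (names assumed):

    #define amap_flags(amap)        ((amap)->am_flags)
    #define amap_refs(amap)         ((amap)->am_ref)
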
thing would be to allocate the block, but I don't know how to do this.
The panic is preferable to the random memory corruption the old code
was causing.