Pass the right access_type to uvm_fault_wire(). This way, if the entry
has VM_PROT_WRITE and is marked COW, the copy will happen immediately
in uvm_fault(), as if the access had actually been performed.
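For illustration, a minimal sketch of the wiring call (the loop
context and variable names are illustrative, not the actual diff):

    /*
     * Derive the access_type from the entry's protection.  If the
     * entry is writable and marked COW, including VM_PROT_WRITE
     * forces uvm_fault() to do the copy at wire time, as if a
     * write had already occurred.
     */
    vm_prot_t access_type = entry->protection;

    rv = uvm_fault_wire(map, entry->start, entry->end, access_type);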
Pass an access_type to pmap_enter() to ensure that when these
mappings are accessed, possibly in interrupt context, they won't
cause mod/ref emulation page faults.
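An illustrative call, assuming the six-argument pmap_enter()
interface of this era (va and pa stand in for the real arguments):

    /*
     * Declaring the access up front lets the pmap pre-set the
     * mod/ref attributes, so no emulation fault is taken when the
     * mapping is later touched, possibly in interrupt context.
     */
    pmap_enter(pmap_kernel(), va, pa,
        VM_PROT_READ | VM_PROT_WRITE,     /* protection */
        TRUE,                             /* wired */
        VM_PROT_READ | VM_PROT_WRITE);    /* access_type */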
Maps now have PAGEABLE and INTRSAFE flags. PAGEABLE now really means
"pageable", not "allocate vm_map_entry structures from the non-static
pool", so update all map creations to reflect that. INTRSAFE maps are
maps that are used in interrupt context (e.g. kmem_map, mb_map), and
thus use the static map entry pool (XXX as does kernel_map, for now).
This will eventually change, as will how these maps are locked.
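As a hedged sketch of what the INTRSAFE distinction buys, a map
entry allocator can now key off the map flags (both helper names
below are hypothetical):

    /* INTRSAFE maps must never sleep waiting for a map entry. */
    if (map->flags & VM_MAP_INTRSAFE)
        me = uvm_mapent_alloc_static();    /* static entry pool */
    else
        me = uvm_mapent_alloc_dynamic();   /* may sleep */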
Ensure we don't take mod/ref emulation faults in an interrupt
context (e.g. during the i/o operation); a sketch of the resulting
pmap_enter() call follows this list. This is safe because:
- For a pageout operation, the page is already known to be
modified, and the pagedaemon will pmap_clear_modify() after
the pageout has completed.
- For a pagein operation, pagers must already pmap_clear_modify()
after the pagein operation is complete, because the i/o may have
been done with e.g. programmed i/o.
XXX It would be nice to know the i/o direction so that we can call
XXX pmap_enter() with only the protection and access_type necessary.
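A sketch of the pager-mapping loop this describes (locals are
illustrative; pager_map and VM_PAGE_TO_PHYS() are the real
interfaces):

    int i;
    vaddr_t cva = kva;

    for (i = 0; i < npages; i++) {
        paddr_t pa = VM_PAGE_TO_PHYS(pps[i]);

        /*
         * Enter with the full access_type, since the i/o
         * direction isn't known here (see the XXX above); no
         * mod/ref emulation fault can then occur during the i/o.
         */
        pmap_enter(vm_map_pmap(pager_map), cva, pa,
            VM_PROT_READ | VM_PROT_WRITE, TRUE,
            VM_PROT_READ | VM_PROT_WRITE);
        cva += PAGE_SIZE;
    }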
Why are we vslocking here?! copyout() on its own seems to suffice
just about everywhere else, and it's not like the process is going
to exit; it's in a system call!
Pass an access_type of VM_PROT_READ | VM_PROT_WRITE to
uvm_fault_wire(), to guarantee that the kernel stacks will not
cause even a mod/ref emulation fault.
- uvm_vslock(): pass VM_PROT_NONE until this function is updated.
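For the kernel stack case, the call would look something like this
(treat it as a sketch; 'up' is the u-area pointer and USPACE its
size):

    /*
     * Wire the u-area with a full access_type so that not even a
     * mod/ref emulation fault can occur on the kernel stack.
     */
    uvm_fault_wire(kernel_map, (vaddr_t)up, (vaddr_t)up + USPACE,
        VM_PROT_READ | VM_PROT_WRITE);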
These are managed pages being mapped into KVA space. Since the
pages are managed, we should use pmap_enter(), not pmap_kenter_pa().
Also, when entering the mappings, enter with an access_type of
VM_PROT_READ | VM_PROT_WRITE. We do this for a couple of reasons:
(1) On systems that have H/W mod/ref attributes, the hardware
may not be able to track mod/ref done by a bus master.
(2) On systems that have to do mod/ref emulation, this prevents
a mod/ref page fault from potentially happening while in an
interrupt context, which can be problematic.
This latter change is fairly important if we ever want to be able to
transfer DMA-safe memory pages to anonymous memory objects; we will need
to know that the pages are modified, or else data could be lost!
Note that while the pages are unowned (i.e. "just DMA-safe memory pages"),
they won't consume any swap resources, as the mappings are wired, and
the pages aren't on the active or inactive queues.
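A hedged sketch of the mapping loop in a bus_dmamem_map()-style
function (locals such as pglist and kva are illustrative):

    struct vm_page *m;
    vaddr_t va = kva;

    for (m = TAILQ_FIRST(&pglist); m != NULL;
         m = TAILQ_NEXT(m, pageq)) {
        /*
         * Managed pages: use pmap_enter(), and claim both read
         * and write up front, since references by a bus master
         * are invisible to mod/ref tracking.
         */
        pmap_enter(pmap_kernel(), va, VM_PAGE_TO_PHYS(m),
            VM_PROT_READ | VM_PROT_WRITE, TRUE,
            VM_PROT_READ | VM_PROT_WRITE);
        va += PAGE_SIZE;
    }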
Introduce interrupt-safe objects, which can be used in an interrupt
context. Use pmap_kenter*() and pmap_kremove() only for mappings
owned by these objects. This fixes some locking protocol issues
related to MP support, and eliminates all of the pmap_enter() vs.
pmap_kremove() inconsistencies (mappings entered managed but removed
unmanaged).
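The usage contrast, as a sketch: a mapping owned by an
interrupt-safe object lives its whole life in the unmanaged
primitives,

    /* lifetime of an interrupt-safe object's mapping */
    pmap_kenter_pa(va, pa, VM_PROT_READ | VM_PROT_WRITE);
    /* ... use the mapping, possibly from interrupt context ... */
    pmap_kremove(va, PAGE_SIZE);

while everything else goes through pmap_enter()/pmap_remove(), so
each mapping is created and destroyed by the same family of calls.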
These routines now reside in locore.S. No functional difference is
expected.
- Replace the splx() abuse with _splset() to change the MIPS
processor interrupt mask bits. The 'mips/trap.c' side will be fixed
soon.
The pages are still owned by the object which is paging, and so the
test for a kernel object in uvm_unmap_remove() will cause
pmap_remove() to be used instead of pmap_kremove().
This was a MAJOR source of pmap_remove() vs. pmap_kremove()
inconsistency (which caused the busted kernel pmap statistics, and
much of the locking hair on MP systems).
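Illustratively, the object test in uvm_unmap_remove() looks
something like this (a sketch; the macro and member names follow
the tree's conventions but may differ):

    vsize_t len = entry->end - entry->start;

    if (UVM_ET_ISOBJ(entry) &&
        UVM_OBJ_IS_KERN_OBJECT(entry->object.uvm_obj)) {
        /* kernel object: the mapping was entered unmanaged */
        pmap_kremove(entry->start, len);
    } else {
        /* pages owned by a paging object are managed */
        pmap_remove(map->pmap, entry->start, entry->end);
    }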
Memory may be allocated from interrupt context, so we must block
interrupts which may cause memory allocation before asserting the
kernel pmap's lock. Put this all in PMAP_LOCK() and PMAP_UNLOCK()
macros to make it easier.
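A hedged sketch of what such macros could look like (the spl level
and the lock member name are illustrative):

    #define PMAP_LOCK(pmap, s)                                      \
    do {                                                            \
        /* Block interrupts that might allocate kernel memory. */   \
        if ((pmap) == pmap_kernel())                                \
            (s) = splimp();                                         \
        simple_lock(&(pmap)->pm_slock);                             \
    } while (0)

    #define PMAP_UNLOCK(pmap, s)                                    \
    do {                                                            \
        simple_unlock(&(pmap)->pm_slock);                           \
        if ((pmap) == pmap_kernel())                                \
            splx(s);                                                \
    } while (0)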