Define a flag, UVM_PGA_USERESERVE, to allow non-kernel-object
allocations to use pages from the reserve.
Use the new flag for allocations in pmap modules.
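A minimal sketch of the intended use in a pmap module, assuming the UVM
uvm_pagealloc(obj, off, anon, flags) interface:

    struct vm_page *pg;

    /*
     * Not a kernel-object allocation, but UVM_PGA_USERESERVE lets
     * it dip into the reserved page pool anyway.
     */
    pg = uvm_pagealloc(NULL, 0, NULL, UVM_PGA_USERESERVE);
    if (pg == NULL)
            return (NULL);          /* even the reserve is empty */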
When we put a page on the collection list, we must subtract NPVPPG from the
total free count: one for each pv_entry that is free in that page, plus one
for each free pv_entry in other pages that will be consumed when the in-use
entries from the page being collected are moved into them.
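As a sketch, assuming pv_nfree is the global free-entry count and NPVPPG
the number of pv_entries per page, as in pmaps of this era:

    /*
     * NPVPPG covers both the free entries in this page and the
     * free entries elsewhere that this page's in-use entries will
     * occupy once they are relocated.
     */
    pv_nfree -= NPVPPG;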
* Map the message buffer with access_type = VM_PROT_READ|VM_PROT_WRITE "just
because" (see the sketch after this list).
* Map the file system buffers with access_type = VM_PROT_READ|VM_PROT_WRITE to
avoid possible problems with pagemove().
* Do not use VM_PROT_EXEC with either of the above.
* Map pages for /dev/mem with access_type = prot. Also, DO NOT use
pmap_kenter() for this, as we DO NOT want to lose modification information.
* Map pages in dumpsys() with VM_PROT_READ.
* Map pages in m68k mappedcopyin()/mappedcopyout() and writeback() with
access_type = prot.
* For now, bus_dma*(), pmap_map(), vmapbuf(), and similar functions still use
access_type = 0. This should probably be revisited.
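A sketch of the first item, assuming the six-argument pmap_enter() of this
era (pmap, va, pa, prot, wired, access_type); va and pa are hypothetical:

    /* Map one page of the message buffer, wired, read/write. */
    pmap_enter(pmap_kernel(), va, pa,
        VM_PROT_READ | VM_PROT_WRITE,   /* prot; no VM_PROT_EXEC */
        TRUE,                           /* wired */
        VM_PROT_READ | VM_PROT_WRITE);  /* access_type */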
"BUS_SPACE_ALIGNED_POINTER()".
Equal to the param.h "ALIGNED_POINTER()" normally, but obeys additional
requirements of the bus_space_xxx_n() macros. (BUS_SPACE_DEBUG)
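A sketch of the intended use; the tag t, handle h, buf, and len are
hypothetical:

    /*
     * Use word-sized access only if the pointer meets the
     * (possibly stricter) bus_space alignment rule; otherwise
     * fall back to byte access.
     */
    if (BUS_SPACE_ALIGNED_POINTER(buf, uint32_t))
            bus_space_write_region_4(t, h, 0,
                (const uint32_t *)buf, len / 4);
    else
            bus_space_write_region_1(t, h, 0, buf, len);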
cache). This helps to avoid cache alias problems, and can improve performance
in the case where physical pages have multiple mappings (since the pages will
not have to be marked cache-inhibited).
minor of libc and the major of libutil). For little-endian architectures,
merge the bswap() assembly versions with nto* and hton* using symbol
aliasing. Use symbol renaming for the bswap functions in this case to avoid
namespace pollution.
Declare bswap* in machine/bswap.h, not machine/endian.h. For little-endian
machines, common code for the inline macros goes in machine/byte_swap.h.
Sync libkern with libc.
Adjust #includes in kernel sources for machine/bswap.h.
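A sketch of the aliasing idea in C; the assembly versions do the equivalent
with ENTRY/ALTENTRY-style labels, and __strong_alias() is the <sys/cdefs.h>
aliasing macro:

    #include <sys/cdefs.h>
    #include <sys/types.h>

    /*
     * On a little-endian machine, htons(), ntohs(), and bswap16()
     * are all the same 16-bit byte swap, so one implementation is
     * exported under every name.  The implementation itself lives
     * under a renamed internal symbol to keep the namespace clean.
     */
    uint16_t
    _bswap16(uint16_t x)
    {
            return (uint16_t)((x << 8) | (x >> 8));
    }
    __strong_alias(bswap16, _bswap16)
    __strong_alias(htons, _bswap16)
    __strong_alias(ntohs, _bswap16)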
mappings for CADDR1 and CADDR2, unless we're built w/ -DDEBUG. Since
these addresses are only used in these routines, we can "lazily invalidate"
them by just using them again. This makes these routines a teeny bit faster,
as it saves 1 or 2 TBIS's per call.
Suggested by Leo Weppelman <leo@netbsd.org>.
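A sketch of the pattern with hp300/m68k-style names; caddr1_pte, PG_V,
PG_RW, PG_NV, TBIS(), and zeropage() are assumptions here:

    /*
     * Map the page at CADDR1 and flush any stale translation left
     * there by the previous call (the "lazy invalidation").
     */
    *caddr1_pte = pa | PG_V | PG_RW;
    TBIS(CADDR1);
    zeropage(CADDR1);
    #ifdef DEBUG
    /*
     * Only under DEBUG: tear the mapping down again so stray
     * references fault.  Normally we skip this and save a TBIS;
     * the flush at the top of the next call covers it.
     */
    *caddr1_pte = PG_NV;
    TBIS(CADDR1);
    #endif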
pmap, forget all of the mappings for the user address space for that pmap.
This causes the PT pages to be freed so that they can be reclaimed by the
VM system. [*]
[*] Actually, in the current implementation, it merely causes the wiring
count on the PT pages to drop to 0, which allows them to be reclaimed by
the pagedaemon. Handling of PT pages needs to be completely rewritten.
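A sketch, assuming hp300-style pmap fields and the usual user VA bounds
(VM_MIN_ADDRESS, VM_MAXUSER_ADDRESS):

    if (--pmap->pm_count == 0) {
            /*
             * Drop every user mapping; per [*] above, this only
             * drives the PT pages' wiring counts to 0 so the
             * pagedaemon can reclaim them.
             */
            pmap_remove(pmap, VM_MIN_ADDRESS, VM_MAXUSER_ADDRESS);
            pmap_release(pmap);
            free(pmap, M_VMPMAP);
    }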
pmap_remove() for the temporary addresses. This is completely unnecessary
as the temps are used ONLY for these routines, and we have better control
over the mapping by manipulating the PTE ourself. On VAC systems, cache-
inhibit the mapping to prevent wasting a cache load.
These routines are now significantly faster.
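A sketch of the copy-source setup, again with hp300/m68k-style names;
caddr1_pte, PG_RO, PG_CI, and TBIS() are assumptions:

    /*
     * pmap_copy_page(): map the source page by writing its PTE
     * directly -- no pmap_enter()/pmap_remove() bookkeeping is
     * needed for a private, single-use address.  PG_CI keeps a
     * virtually addressed cache (VAC) from loading lines we will
     * never reference again.
     */
    *caddr1_pte = srcpa | PG_RO | PG_V | PG_CI;
    TBIS(CADDR1);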
Add a DIAGNOSTIC check in pmap_enter() for kernel pmap and (CADDR1 or CADDR2);
nothing should be adding mappings there via that interface.
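A sketch of such a check, with hp300-style names:

    #ifdef DIAGNOSTIC
    if (pmap == pmap_kernel() &&
        (va == (vaddr_t)CADDR1 || va == (vaddr_t)CADDR2))
            panic("pmap_enter: used for CADDR1 or CADDR2");
    #endif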