has already been called and thus we know the values the timers are using.
This also ensures that clock_sc will always be valid when we try to use
it to read the timer registers.
Fixes PR7357.
define a flag UVM_PGA_USERESERVE to allow non-kernel object
allocations to use pages from the reserve.
use the new flag for allocations in pmap modules.
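A rough sketch of what such an allocation looks like from a pmap module (the
wrapper function name is a stand-in; only uvm_pagealloc() and
UVM_PGA_USERESERVE are taken from the interface described here):

    #include <sys/param.h>
    #include <uvm/uvm_extern.h>

    /*
     * Hypothetical helper: grab a page for pmap-internal use (e.g. a
     * page table page), allowing the allocator to dip into the reserve
     * pool rather than failing outright.
     */
    static struct vm_page *
    pmap_alloc_private_page(void)
    {

        return (uvm_pagealloc(NULL, 0, NULL, UVM_PGA_USERESERVE));
    }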
dereference it when doing handled or modified emulation. This can happen
if faults are taken during early parts of the bootstrap, resulting in
recursive faulting.
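A minimal sketch of the kind of guard this implies (the helper name is
hypothetical and the real check in the tree is machine-dependent):

    /*
     * Before doing referenced/modified emulation for a faulting address,
     * make sure the physical page is one the VM system actually manages;
     * pages touched very early in bootstrap have no vm_page/pv state yet,
     * and dereferencing that missing state just faults again.
     */
    static int
    rm_emulation_possible(paddr_t pa)
    {

        return (PHYS_TO_VM_PAGE(pa) != NULL);
    }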
* Count page table pages in the RSS.
* Rather than patching it up, panic if access_type has bits not in prot, as
this should now be impossible (a sketch of the check follows this list).
* Add page table reference counting, but currently disabled as it still has
some issues.
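The access_type check referred to above is essentially a one-liner; its
placement inside pmap_enter() here is illustrative:

    if (access_type & ~prot)
        panic("pmap_enter: access_type exceeds prot");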
When we put a page on the collection list, we must subtract NPVPPG from the
total free count: one for each pv_entry that's free in that page, and one for
each free pv_entry in other pages that we're going to eat by moving the ones
in the page being collected.
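In code form, the accounting at collection time looks roughly like this (the
pv_page/pgi field names follow the traditional layout and should be read as
assumptions):

    /*
     * pgi_nfree entries in this page are free right now, and the other
     * NPVPPG - pgi_nfree in-use entries will each consume a free entry
     * elsewhere when they are migrated off this page, so the global
     * free count drops by exactly NPVPPG.
     */
    TAILQ_REMOVE(&pv_page_freelist, pvp, pvp_pgi.pgi_list);
    TAILQ_INSERT_TAIL(&pv_page_collectlist, pvp, pvp_pgi.pgi_list);
    pv_nfree -= NPVPPG;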
pmap_find_pv(), pmap_clean_page() and pmap_remove_all() are only called on
managed pages, after VM initialization. Panic if this invariant is violated.
Also, panic if we try to enter a PT page through pmap_enter(), rather than
silently patching it up.
pmap_initialized is now #ifdef DIAGNOSTIC.
* Map the message buffer with access_type = VM_PROT_READ|VM_PROT_WRITE `just
because'.
* Map the file system buffers with access_type = VM_PROT_READ|VM_PROT_WRITE to
avoid possible problems with pagemove().
* Do not use VM_PROT_EXEC with either of the above.
* Map pages for /dev/mem with access_type = prot. Also, DO NOT use
pmap_kenter() for this, as we DO NOT want to lose modification information.
* Map pages in dumpsys() with VM_PROT_READ.
* Map pages in m68k mappedcopyin()/mappedcopyout() and writeback() with
access_type = prot.
* For now, bus_dma*(), pmap_map(), vmapbuf(), and similar functions still use
access_type = 0. This should probably be revisited.
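For example, the message buffer mapping ends up looking roughly like this
(the msgbuf symbols and loop bound are stand-ins, and the position of the
trailing access_type argument in pmap_enter() is assumed from the interface
described above):

    for (off = 0; off < MSGBUFSIZE; off += PAGE_SIZE)
        pmap_enter(pmap_kernel(), msgbuf_vaddr + off,
            msgbuf_paddr + off, VM_PROT_READ | VM_PROT_WRITE, TRUE,
            VM_PROT_READ | VM_PROT_WRITE);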
emulation of managed pages. This required the following `interesting' changes:
* File system buffers must be entered with an access type of
VM_PROT_READ|VM_PROT_WRITE, so that the pages will be accessible immediately.
Otherwise we would have to teach pagemove() to update the R/M information.
Since they're never eligible for paging, the latter is overkill.
* We must ensure that pages allocated before the pmap is completely set up
(that is, pages allocated early by the VM system) are not eligible for R/M
emulation, since the memory needed for this isn't available. We do this by
allocating the pmap's internal memory with uvm_pageboot_alloc(). This also
fixes an absolutely horrible hack where the pmap only worked because page 0
happened to be mapped.
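A rough illustration of that boot-time allocation (the array name and size
calculation are stand-ins; uvm_pageboot_alloc() is the interface in question):

    /*
     * Steal the pmap's internal memory before the VM system is up, so
     * these pages are never entered as managed pages and never become
     * candidates for R/M emulation.
     */
    pv_table = (struct pv_entry *)
        uvm_pageboot_alloc(npages * sizeof(struct pv_entry));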
Also:
* Push the wired page counting into the p->v list maintenance functions. This
avoids code duplication, and fixes some cases where we were confused about
which pages the count should apply to (see the sketch after this list).
* Fix lots of problems associated with pmap_nightmare() (and rename it to
pmap_vac_me_harder()).
* Since the early pages are no longer considered `managed', just make
pmap_*_pv() panic if !pmap_initialized.
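The wired-count change mentioned above amounts to doing the bookkeeping in one
place, roughly as follows (the surrounding function shapes and the use of the
PT_W flag are illustrative):

    /* in the routine that adds a mapping to a page's p->v list */
    if (flags & PT_W)
        pmap->pm_stats.wired_count++;

    /* and the matching decrement in the routine that removes it */
    if (pv->pv_flags & PT_W)
        pmap->pm_stats.wired_count--;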
"BUS_SPACE_ALIGNED_POINTER()".
Normally equal to the param.h "ALIGNED_POINTER()", but obeys additional
requirements of the bus_space_xxx_n() macros. (BUS_SPACE_DEBUG)
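A sketch of the intended definitions (illustrative only; the strict variant
simply demands natural alignment for the access size):

    #if defined(BUS_SPACE_DEBUG)
    #define BUS_SPACE_ALIGNED_POINTER(p, t) \
        ((((u_long)(p)) & (sizeof(t) - 1)) == 0)
    #else
    #define BUS_SPACE_ALIGNED_POINTER(p, t) ALIGNED_POINTER(p, t)
    #endif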
PTEs is ignored when in kernel mode. Hack around this just like we do on the
i386, by adding a prepass to copyout() to check for write permission on the
destination pages.
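One way to express such a prepass in C (a sketch assuming the four-argument
uvm_fault() of that era; udst, len and p are stand-ins for the copyout()
arguments, and the real hack lives in the MD copy routines themselves):

    vaddr_t va;

    /*
     * Force a write fault on each destination page before the
     * kernel-mode copy loop runs, since that loop ignores PTE write
     * protection; any page the user isn't allowed to write makes
     * uvm_fault() fail and we return EFAULT instead of scribbling on
     * a read-only page.
     */
    for (va = trunc_page((vaddr_t)udst);
         va < round_page((vaddr_t)udst + len); va += PAGE_SIZE) {
        if (uvm_fault(&p->p_vmspace->vm_map, va, VM_FAULT_INVALID,
            VM_PROT_WRITE) != 0)
            return (EFAULT);
    }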
* Don't bother pulling PT_M and PT_H bits from pv_flags; they can't ever be
set there!
* Actually make pmap_clear_reference() do something useful.
* Also set the referenced bit (PT_H) when emulating a write fault.
If we're doing modified bit emulation, we must revoke write permission in
pmap_clear_modify(). This is non-negotiable. I will revoke write permission
in pmap_clear_modify(), or suffer the wrath of a thousand bricks.
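A minimal sketch of the point being made (interface details, such as whether
these functions take a physical address or a vm_page, differ by era; take the
shapes as assumptions):

    void
    pmap_clear_modify(struct vm_page *pg)
    {

        /* forget the cached "modified" attribute for this page ... */
        /* (attribute bookkeeping elided) */

        /*
         * ... and downgrade every mapping of it to read-only, so the
         * next store faults back into the modified-bit emulation path
         * and can set the attribute again.
         */
        pmap_page_protect(pg, VM_PROT_READ);
    }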
there are no pages available immediately to use as a PT page, don't
just roll over and die. Only do that if we're the kernel pmap (shouldn't
happen very often [ever?!], and I'm not certain we're guaranteed valid
thread context when operating on the kernel pmap). For user pmaps, wait
for the pagedaemon to wake us up when more free pages are available.
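The resulting allocation loop looks roughly like this (a sketch; beyond
uvm_pagealloc(), only uvm_wait() is assumed, and the wait-channel string is a
stand-in):

    struct vm_page *ptpg;

    for (;;) {
        ptpg = uvm_pagealloc(NULL, 0, NULL, UVM_PGA_USERESERVE);
        if (ptpg != NULL)
            break;
        if (pmap == pmap_kernel())
            panic("pmap: out of memory for kernel PT page");
        uvm_wait("ptpage");    /* sleep until the pagedaemon frees pages */
    }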