When we put a page on the collection list, we must subtract NPVPPG from the
total free count: one for each free pv_entry in that page, and one for each
free pv_entry in other pages that we consume when moving the in-use entries
out of the page being collected.
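A minimal sketch of this accounting, with NPVPPG and the pv_page layout
modeled loosely on the i386-style pv allocator (names and values are
illustrative, not the pmap's actual structures):

    #define NPVPPG 340              /* pv_entries per page; illustrative */

    struct pv_page {
            int pvp_nfree;          /* free pv_entries left in this page */
            struct pv_page *pvp_next;
    };

    int pv_nfree;                   /* total free pv_entries, all pages */

    void
    pv_collect(struct pv_page *pvp)
    {
            /*
             * Every entry in this page stops being available: the free
             * ones vanish with the page, and each in-use one consumes
             * a free entry in another page when it is moved.  The net
             * loss is therefore exactly NPVPPG.
             */
            pv_nfree -= NPVPPG;
            /* ... move the in-use entries, then release the page ... */
    }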
pmap_find_pv(), pmap_clean_page() and pmap_remove_all() are only called on
managed pages, after VM initialization. Panic if this invariant is violated.
Also, panic if we try to enter a PT page through pmap_enter(), rather than
silently patching it up.
pmap_initialized now exists only under #ifdef DIAGNOSTIC.
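A sketch of the resulting check; pmap_page_is_managed() is an illustrative
stand-in for the pmap's actual managed-page test:

    extern int pmap_initialized;    /* only present under DIAGNOSTIC now */
    void panic(const char *, ...);
    int pmap_page_is_managed(unsigned long pa);     /* illustrative */

    void
    pmap_find_pv_check(unsigned long pa)
    {
    #ifdef DIAGNOSTIC
            if (!pmap_initialized || !pmap_page_is_managed(pa))
                    panic("pmap_find_pv: unmanaged page 0x%lx", pa);
    #endif
    }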
and dec_3maxplus.c. The ERRSYN/CHKSYN register contains data, not an address.
Pass the address of the register to dec_mtasic_err() rather than its
contents, so that it can both read and write the register.
Correctable memory errors won't trap in dec_mtasic_err() anymore.
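Roughly, the interface change looks like this sketch; the syndrome decode and
the clearing write are simplified illustrations, not the driver's actual
logic:

    #include <stdint.h>

    void
    dec_mtasic_err(volatile uint32_t *errreg)  /* was: the register's value */
    {
            uint32_t syndrome = *errreg;  /* ERRSYN/CHKSYN holds data */

            (void)syndrome;         /* decoding elided; correctable
                                       errors are handled without
                                       trapping */

            *errreg = 0;            /* now possible: write to clear it */
    }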
* Map the message buffer with access_type = VM_PROT_READ|VM_PROT_WRITE `just
because' (see the sketch after this list).
* Map the file system buffers with access_type = VM_PROT_READ|VM_PROT_WRITE to
avoid possible problems with pagemove().
* Do not use VM_PROT_EXEC with either of the above.
* Map pages for /dev/mem with access_type = prot. Also, DO NOT use
pmap_kenter() for this, as we DO NOT want to lose modification information.
* Map pages in dumpsys() with VM_PROT_READ.
* Map pages in m68k mappedcopyin()/mappedcopyout() and writeback() with
access_type = prot.
* For now, bus_dma*(), pmap_map(), vmapbuf(), and similar functions still use
access_type = 0. This should probably be revisited.
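As an example of the convention, mapping the message buffer might look like
this sketch; it assumes the six-argument pmap_enter() interface of this era,
and msgbuf_map()/msgbuf_va/msgbuf_pa are placeholder names:

    void
    msgbuf_map(vaddr_t msgbuf_va, paddr_t msgbuf_pa)
    {
            vsize_t off;

            for (off = 0; off < MSGBUFSIZE; off += NBPG)
                    pmap_enter(pmap_kernel(),
                        msgbuf_va + off, msgbuf_pa + off,
                        VM_PROT_READ | VM_PROT_WRITE,   /* prot; no EXEC */
                        TRUE,                           /* wired */
                        VM_PROT_READ | VM_PROT_WRITE);  /* access_type */
    }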
siop2.c. Add wide negotiation and Ultra support. Modify siop.c to match
the siop2.c sync negotiation changes. The CyberStorm MKIII driver now
supports 15 targets. Remove some old table-driven sync rate stuff from
the original Zeus driver.
emulation of managed pages. This required the following `interesting' changes:
* File system buffers must be entered with an access type of
VM_PROT_READ|VM_PROT_WRITE, so that the pages will be accessible immediately.
Otherwise we would have to teach pagemove() to update the R/M information.
Since they're never eligible for paging, the latter is overkill.
* We must ensure that pages allocated before the pmap is completely set up
(that is, pages allocated early by the VM system) are not eligible for R/M
emulation, since the memory needed for this isn't available. We do this by
allocating the pmap's internal memory with uvm_pageboot_alloc(), as sketched
below. This also fixes an absolutely horrible hack where the pmap only worked
because page 0 happened to be mapped.
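The bootstrap allocation uses the real uvm_pageboot_alloc() interface; the
pv_table name and its sizing here are illustrative:

    vaddr_t va;

    /* Grab permanent kernel memory before uvm_init() has finished. */
    va = uvm_pageboot_alloc(npages * sizeof(struct pv_entry));
    pv_table = (struct pv_entry *)va;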
Also:
* Push the wired page counting into the p->v list maintenance functions (see
the sketch after this list). This avoids code duplication, and fixes some
cases where we were confused about which pages to count.
* Fix lots of problems associated with pmap_nightmare() (and rename it to
pmap_vac_me_harder()).
* Since the early pages are no longer considered `managed', just make
pmap_*_pv() panic if !pmap_initialized.
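A sketch of the consolidated wired counting; pmap_enter_pv()/pmap_remove_pv()
and the PT_W flag are illustrative names for the p->v helpers and the wired
bit:

    void
    pmap_enter_pv(struct pmap *pmap, struct pv_entry *pv, u_int flags)
    {
            /* ... link pv onto the page's p->v list ... */
            if (flags & PT_W)
                    pmap->pm_stats.wired_count++;   /* counted here only */
    }

    void
    pmap_remove_pv(struct pmap *pmap, struct pv_entry *pv)
    {
            /* ... unlink pv from the page's p->v list ... */
            if (pv->pv_flags & PT_W)
                    pmap->pm_stats.wired_count--;
    }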