Commit Graph

39 Commits

Author SHA1 Message Date
matt 830666e31e Clean the icache for pages when they are entered as executable and were
previously either not mapped at all or mapped as non-executable.  Round
memory regions in pmap_bootstrap.
2002-04-03 00:12:07 +00:00
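
A minimal sketch of the idea behind this icache change, assuming a was_exec flag and the pmap's own pmap_syncicache(); the exact placement inside pmap_enter is an assumption, not a quote of the commit:

    /* Only flush the instruction cache on the transition to executable;
     * pages that were already mapped executable keep their icache state.
     * "was_exec" is an illustrative flag, not a field from the commit. */
    if ((prot & VM_PROT_EXECUTE) && !was_exec)
            pmap_syncicache(pa, PAGE_SIZE);
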
thorpej a180cee23b Pool deals fairly well with physical memory shortage, but it doesn't
deal with shortages of the VM maps where the backing pages are mapped
(usually kmem_map).  Try to deal with this:

* Group all information about the backend allocator for a pool in a
  separate structure.  The pool references this structure, rather than
  the individual fields.
* Change the pool_init() API accordingly, and adjust all callers.
* Link all pools using the same backend allocator on a list.
* The backend allocator is responsible for waiting for physical memory
  to become available, but will still fail if it cannot allocate KVA
  space for the pages.  If this happens, carefully drain all pools using
  the same backend allocator, so that some KVA space can be freed.
* Change pool_reclaim() to indicate if it actually succeeded in freeing
  some pages, and use that information to make draining easier and more
  efficient.
* Get rid of PR_URGENT.  There was only one use of it, and it could be
  dealt with by the caller.

From art@openbsd.org.
2002-03-08 20:48:27 +00:00
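
A rough sketch of the shape this restructuring gives pool(9); the field names follow struct pool_allocator as it appeared in NetBSD after the change, but the details (and the mypool/myitem names) should be read as assumptions for illustration:

    /* All backend information lives in one structure that many pools can
     * share; the pool layer links the sharing pools together so a failed
     * KVA allocation can drain all of them. */
    struct pool_allocator {
            void            *(*pa_alloc)(struct pool *, int);  /* get a page of KVA */
            void            (*pa_free)(struct pool *, void *); /* give it back */
            unsigned int    pa_pagesz;                          /* page size used */
            /* ... plus the list linkage used for draining ... */
    };

    /* pool_init() now takes a pointer to the backend allocator instead of
     * separate alloc/free function pointers; pool_allocator_kmem is the
     * kmem_map-backed allocator. */
    pool_init(&mypool, sizeof(struct myitem), 0, 0, 0, "mypl",
        &pool_allocator_kmem);
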
chs 9451559ef4 pmap_page_protect(VM_PROT_NONE) must remove all mappings in the PV list,
even if they are wired.  we need to be able to remove all mappings to
pages that are being freed due to (e.g.) file truncation.
2002-01-02 00:51:33 +00:00
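
The behaviour being fixed can be sketched roughly as follows; pmap_pvo_remove() and the list names are modelled on this pmap but should be read as illustrative:

    /* In the VM_PROT_NONE case, tear down every mapping on the page's PV
     * list.  There is deliberately no PVO_WIRED test: pages being freed
     * (e.g. by file truncation) must lose even their wired mappings. */
    if ((prot & VM_PROT_READ) == 0) {
            while ((pvo = LIST_FIRST(pvo_head)) != NULL)
                    pmap_pvo_remove(pvo, -1);
    }
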
matt 34d4887431 Some #ifdef cleanup for DIAGNOSTIC/DEBUG/PMAPCHECK so that many of
the expensive checks are skipped when (!DEBUG&&!PMAPCHECK) and all of the
light-weight checks are skipped when (!DIAGNOSTIC&&!DEBUG&&!PMAPCHECK).
This brings pmap.o's text down from 21KB (with PMAPCHECK) to 18.5KB (DEBUG)
to 16KB text (!DIAGNOSTIC).
2001-11-14 20:38:22 +00:00
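
The resulting layering looks roughly like this; the grouping is what the message describes, but the exact guard expressions are an assumption:

    #if defined(DEBUG) || defined(PMAPCHECK)
            /* expensive consistency checks, e.g. full pteg_table walks */
    #endif

    #if defined(DIAGNOSTIC) || defined(DEBUG) || defined(PMAPCHECK)
            /* light-weight sanity checks */
    #endif
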
matt ab93af26ea Fix pte_clear to TLB-flush the va, not the TLB address (which is only valid
for clearing the ref bit).  pvo_to_pte (if !diagnostic) will return NULL
immediately if PTE_VALID is not set.
2001-11-11 23:07:02 +00:00
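
In outline the fix is to invalidate the mapping's virtual address rather than the address of the PTE itself; TLBIE() and SYNC() here stand for the usual inline-asm wrappers and are assumptions:

    pt->pte_lo &= ~ptebit;          /* clear e.g. the REF or CHG bit */
    TLBIE(va);                      /* flush the VA, not the PTE's own address */
    SYNC();
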
matt 1f09ca6e53 Fix a small buglet in syncicache (if the area to sync crosses the
segment 0 boundary).
2001-11-06 06:25:28 +00:00
matt a696291eab Fix bug in pte_spill (wasn't searching the right pvo_table list for the
victim PTE if the PTE was a secondary entry).
2001-11-05 06:44:11 +00:00
matt 4f3943d89a Test the right bit for wired in the PVO. 2001-11-05 06:24:55 +00:00
matt f02b548314 Don't try to pool_put a PVO when re-entering a mapping.  Since the
PVO_MANAGED bit may get munged, we could possibly put it into the wrong pool.
2001-11-05 01:25:38 +00:00
matt 8a49af3cec Need to use a separate variable for the return value of pmap_pte_insert in
pte_spill.  Mask off the high bit of the MFTB().
2001-11-04 22:39:08 +00:00
matt 3ca8d91fc8 Add a few more PMAP_PVO_CHECKs in pte_spill; print the pte addr of the
unmatched pte in panicstr.
2001-11-04 21:15:03 +00:00
matt f2ceecb472 In pmap_syncicache, preserve the page offset contained in the supplied
physical address.
2001-10-18 01:03:44 +00:00
wiz 456dff6cb8 Spell 'occurred' with two 'r's. 2001-09-16 16:34:23 +00:00
chs 1661137341 it's perfectly legal for pmap_extract() on the kernel pmap to not find
anything mapped there, even though it never used to happen.  with today's
other changes it happens a lot now, so remove the debug check for it.
2001-09-16 05:40:46 +00:00
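
For callers this means the usual pattern has to check the return value even on the kernel pmap:

    paddr_t pa;

    if (!pmap_extract(pmap_kernel(), va, &pa)) {
            /* nothing mapped at va: handle the miss instead of treating
             * it as an impossible (debug-panic) condition */
    }
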
chris 0e7661f023 Update pmap_update to now take the updated pmap as an argument.
This will allow improvements to the pmaps so that they can more easily defer expensive operations, e.g. TLB/cache flushes, until the last possible moment.

Currently this is a no-op on most platforms, so they should see no difference.

Reviewed by Jason.
2001-09-10 21:19:08 +00:00
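
A sketch of the new call pattern; the surrounding pmap_enter() call is illustrative context only:

    /* Batch the mappings, then tell the pmap which address space changed
     * so it can flush TLBs/caches once instead of per operation. */
    pmap_enter(pmap, va, pa, VM_PROT_READ | VM_PROT_WRITE, flags);
    /* ... further operations on the same pmap ... */
    pmap_update(pmap);      /* previously pmap_update() took no argument */
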
matt 8402e4d93f Fix a missing interrupt restore.  Disable interrupts around pvo_enter in
pmap_kenter.  Shouldn't be needed, but ...
2001-09-09 04:35:22 +00:00
matt 04bdd02c1a Make pmap_pte_insert STATIC so it will show up in DEBUG kernel with DDB
traces.
2001-08-30 22:06:44 +00:00
matt d98bf76f56 Fix bootstrap loss of memory. Fix pmap_activate problem. Revamp debug
messages.
2001-08-22 22:17:57 +00:00
matt f3011c96b4 Fix thinko. Do the mask before the divide. 2001-08-08 21:09:58 +00:00
wiz c5a6be17f4 bcopy -> memcpy, bzero -> memset, bcmp -> memcmp.
Reviewed by Matt Thomas, ok'd by Tsubai Masanari.
2001-07-22 11:29:44 +00:00
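
The rewrite is mechanical, but note that memcpy swaps the first two arguments relative to bcopy:

    bcopy(src, dst, len);           /* -> memcpy(dst, src, len);  swapped src/dst */
    bzero(ptr, len);                /* -> memset(ptr, 0, len);                    */
    (void)bcmp(a, b, len);          /* -> memcmp(a, b, len);  same argument order */
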
matt 454a630dbd Print both the lower and upper dbat register when printing dbat registers. 2001-06-30 02:03:16 +00:00
matt 7c5977ea4f Fix a spurious debug printf.
Fix pmap_procwr to not check a NULL pvo.  (Duh!)
Reformat pmap_print_mmuregs.  Actually *fill in* the dbat registers.
2001-06-30 01:21:24 +00:00
matt 6ca9622494 Add pmap_interrupt_* to pmap_*map_pa. Remove interrupt toggling from
pmap_pte_spill.  Fix pmap_protect.  Macroize mfsrin instruction.
2001-06-28 20:35:21 +00:00
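
The interrupt-toggling idiom referred to here looks roughly like this; the helper names are an assumption based on the pmap_interrupt_* mention above:

    register_t msr;

    /* Block interrupts while the PTEG/pvo state is inconsistent, then
     * restore the caller's MSR rather than unconditionally re-enabling. */
    msr = pmap_interrupts_off();
    /* ... manipulate the pteg_table / pvo lists ... */
    pmap_interrupts_restore(msr);
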
matt 7effaaaa7c Disable interrupts when dealing with pvo lists.  Clean up some things.
Keep track of the executable-ness of pages.  Only sync the icache for executable pages.
2001-06-23 03:17:32 +00:00
matt 6d3037579c Change a debugging message a bit. 2001-06-21 22:05:50 +00:00
matt 756d684c5a Rename/enumerate the PTE protection bits to their real purposes. 2001-06-21 18:03:37 +00:00
matt 467c0ed022 Rework pmap_bootstrap.  Fix some comments.  Add the old copyright until I
finish excising that code.
2001-06-21 03:26:12 +00:00
matt 38fc9e283d Fix pte_spill to set the index on the proper pvo. Deal with recursion
in pmap_syncicache.
2001-06-16 03:32:48 +00:00
matt 979edf3c4a pmap_syncicache can be called recursively. Properly deal with that
situation.
2001-06-15 22:28:54 +00:00
matt e55c9f74af Add missing braces in pmap_pte_to_pvo (DEBUG|PMAPCHECK defined). Rearrange
some code so that the consistency checks in pmap_pte_to_pvo do not trigger
false positives.  Correct/enhance some printfs.
2001-06-15 21:29:54 +00:00
matt 25a2c4d481 While not strictly needed, mask off the page offset bits to match
pmap_pvo_find_va.
2001-06-15 20:53:45 +00:00
matt 787e1b0b36 When comparing VAs, ignore the page offset bits.
Invert and strengthen a test for pte equality.
2001-06-15 20:43:01 +00:00
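
In effect the comparison becomes page-granular; trunc_page() shows the same idea as the mask the pmap uses:

    /* Two addresses name the same mapping if their page-aligned parts
     * match; the low 12 offset bits are ignored. */
    if (trunc_page(va1) == trunc_page(va2)) {
            /* same mapping */
    }
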
matt c7c7dab8f1 Stop overloading unused bits in the pte.  Use the low 12 bits of the vaddr
instead to store them.  Add a macro to fetch the vaddr without them.
Prefix all variables/routines with pmap_.
Clean up & fix some of the vsid bitmap usage.
Clean up DEBUG printfs.  Add some more checks to pmap_pvo_to_pte.
2001-06-15 18:26:06 +00:00
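
A sketch of the encoding this introduces; the macro name follows the pmap's PVO_VADDR() style, but the flag values are illustrative assumptions:

    /* Mappings are page aligned, so the low 12 bits of pvo_vaddr are free
     * to carry per-mapping state instead of overloading PTE bits. */
    #define PVO_WIRED       0x0010                          /* illustrative value */
    #define PVO_MANAGED     0x0020                          /* illustrative value */
    #define PVO_VADDR(pvo)  ((pvo)->pvo_vaddr & ~PAGE_MASK) /* VA with flags stripped */
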
matt 192642af05 Don't enable PMAPCHECK by default. 2001-06-15 08:17:00 +00:00
matt 0278444e19 Add a check to pvo_check which makes sure the pte is really in the
pteg_table.   In pte_to_va, take into account whether PTE_HID is set.
2001-06-15 08:08:04 +00:00
matt 816a5637cd When releasing the SR VSID, mask off the bits not related to the index
in the pmap vsid bitmap.
2001-06-15 06:27:07 +00:00
matt 66822e55be Fix an spl issue.  Turn on PMAPCHECK until instability problems are found.
Add a pmap_pvo_verify routine; call it from ddb to verify that the pmap
data structures are sound.  Fix warnings when DEBUG was turned on.
2001-06-10 07:56:36 +00:00
matt 75a27eccff Rename pte_spill to pmap_pte_spill. Fix pmap_clear_{referenced,modify}
to return the previous state of the bit.  Make it compile under
-Wmissing-prototypes -Wall.  Change around some debug stuff.
2001-06-08 00:16:24 +00:00
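
The new contract in outline; pmap_clear_bit() here is an illustrative helper, not necessarily the function the commit added:

    boolean_t
    pmap_clear_modify(struct vm_page *pg)
    {
            /* clear the changed (CHG) bit and report whether it was set,
             * so callers learn the old state in the same call */
            return pmap_clear_bit(pg, PTE_CHG);
    }
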
matt 938edd5b75 Introduce a new & faster pmap for the MPC6xx (60x, 7xx, 7xxx) PPC CPUs.
Move MPC6xx dependent header files to powerpc/include/mpc6xx/
2001-06-06 17:36:01 +00:00