Commit Graph

1256 Commits

Author SHA1 Message Date
jonathan d3d99a28c9 Update #ifdef'ed-out changes from pmax:
* Add more #ifdef pmax/#endif, #ifdef alpha/#endif where appropriate.
    Config and header files need more work (or replacement)
  change TK_NOTYET to HAVE_RCONS
  change commented-out /* && cn_tab.cn_screen */ to && raster_console()
  * Add DDB hooks.
  * Note where Alpha console ignores carrier on consoles.
  * Add pmax-derived console tty-size code inside HAVE_RCONS
  * Fold in gross pmax rcons-input hooks, inside HAVE_RCONS

Untested, but whitespace/ifdef only, cross-compiles OK,
preprocessing shows no significant differences (famous last words)
1998-03-22 08:24:52 +00:00
thorpej a5260cbda0 Use atomic set/clear bits in pmap_activate()/pmap_deactivate(). 1998-03-22 07:27:54 +00:00
thorpej fcfe2f1539 Implement a set of `atomic' (using load-locked and store-conditional)
operations.  Initial set includes:

alpha_atomic_setbits_q()	set bits in a quad
alpha_atomic_clearbits_q()	clear bits in a quad
1998-03-22 07:26:32 +00:00
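
For reference, a minimal sketch of the load-locked/store-conditional pattern the commit above describes; the exact body and asm constraints in the tree may differ, and alpha_atomic_clearbits_q() would be the same loop with "bic" in place of "or":

	static __inline void
	alpha_atomic_setbits_q(volatile unsigned long *ulp, unsigned long v)
	{
		unsigned long t0;

		__asm __volatile(
		"1:	ldq_l	%0, %2	\n"	/* load the quad, locked */
		"	or	%0, %3, %0	\n"	/* set the requested bits */
		"	stq_c	%0, %1	\n"	/* store conditional */
		"	beq	%0, 2f	\n"	/* reservation lost: retry */
		"	mb		\n"	/* memory barrier on success */
		"	br	3f	\n"
		"2:	br	1b	\n"
		"3:			\n"
		: "=&r" (t0), "=m" (*ulp)
		: "m" (*ulp), "r" (v)
		: "memory");
	}
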
thorpej 9db8ae93c8 - The pmap now includes support for ASNs. We no longer need to flush
the TLB and I-cache in the SWITCH_CONTEXT macro.
- Right after switching to proc0's newly-created context at startup time,
  flush the TLB and I-cache; this is the only place where it's not done
  automatically.
- Fix a nasty bug in a critical section of cpu_switch(); change the
  pmap_activate -> SWITCH_CONTEXT -> pmap_deactivate sequence to
  pmap_deactivate -> pmap_activate -> SWITCH_CONTEXT.  This prevents
  erroneously marking a pmap inactive if switching to a process that
  shares its address space (and thus its pmap) with the oldproc!  Noticed
  by Chris Demetriou.
1998-03-22 05:46:02 +00:00
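
To make the ordering fix above concrete, a pseudocode sketch (the real code is assembly in cpu_switch(); only the call order comes from the commit):

	/*
	 * Sketch only.  With the old order (activate, switch, then
	 * deactivate), a newproc that shares oldproc's pmap would have
	 * its freshly-set CPU bit cleared again by the trailing
	 * pmap_deactivate(), wrongly marking the shared pmap inactive.
	 */
	pmap_deactivate(oldproc);	/* release the old pmap first */
	pmap_activate(newproc);		/* then claim the new (maybe same) pmap */
	SWITCH_CONTEXT(newproc);	/* finally swap PCB/ASN via PALcode */
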
thorpej c0cc1ed476 Implement support for Address Space Numbers, greatly reducing the number
of TLB and I-cache flushes, significantly speeding up context switches.

Once again, many thanks to Chris Demetriou and Ross Harvey for code
review and debugging assistance!
1998-03-22 05:41:37 +00:00
thorpej a8d86e5a7c Replace PMAP_ASNGEN_INVALID with PMAP_ASN_RESERVED. 1998-03-22 05:39:50 +00:00
mjacob 86b6520e41 more TS_WOPEN to tp->t_wopen changes 1998-03-21 23:36:19 +00:00
mycroft 0dae91d9af Eliminate uses of TS_WOPEN in hard-wired devices. 1998-03-21 22:52:59 +00:00
mjacob 34f87569b9 add some error definitions 1998-03-21 22:02:42 +00:00
thorpej 66d8f5b544 sync systypes w/ <machine/rpb.h> 1998-03-20 21:48:21 +00:00
thorpej 63c73f94e3 Add a few more systypes. 1998-03-20 21:48:03 +00:00
thorpej 0f95ffdc1d Nuke swpctxt(); it's only used by the Mach pmap, which we will only ever
use for reference.
1998-03-19 06:44:25 +00:00
thorpej 003c50d1d5 Add a macro to invalidate the TLB for a given pmap/va pair. TLB
invalidation algorithm:

	if (old mapping had PG_ASM set || pmap is active) {
		TBIS(va);
		if (also sync I-stream)
			imb();
	}

The check for "old mapping had PG_ASM" will get all kernel mappings (since
kernel mappings always have PG_ASM set).

This allows us to remove the bogus check for the kernel pmap in
active_pmap() - do so.

Use the new TLB invalidation macro whenever such action is needed.
1998-03-18 23:55:25 +00:00
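
One plausible C rendering of that algorithm as a macro; the macro name and argument list here are illustrative, while ALPHA_TBIS() and alpha_pal_imb() are the usual NetBSD/alpha TLB-invalidate and I-stream-sync primitives:

	#define	PMAP_INVALIDATE_TLB(pmap, va, hadasm, isactive, needisync)	\
	do {									\
		if ((hadasm) || (isactive)) {					\
			ALPHA_TBIS((va));	/* flush this va's entry */	\
			if (needisync)						\
				alpha_pal_imb();	/* sync the I-stream */	\
		}								\
	} while (0)
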
thorpej 15adb17803 Eliminate the last argument from pmap_remove_mapping(); it makes its own
decisions about TLB invalidation.
1998-03-18 23:11:44 +00:00
thorpej 7ee4af11a7 Change active_pmap() to use the CPU mask (XXX and check for kernel pmap
as well, until some other changes are made).  Nuke active_user_pmap(),
and change the places that used it to use active_pmap() instead (as well
as make some DIAGNOSTIC consistency checks).
1998-03-18 22:50:50 +00:00
thorpej 605472f676 Optimize out a TLB invalidation in a common case of pmap_enter(): if
the PTE was previously invalid, no TLB invalidation is necessary because:

	(1) when a PTE is invalidated, its entry is flushed from the
	    TLB

	(2) the PALcode won't install an invalid PTE into the TLB.
1998-03-18 22:13:58 +00:00
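
An illustrative fragment of how that check might look in pmap_enter(), reusing the invalidation macro sketched earlier in this log (the variable names are assumptions, not the committed diff):

	if (pmap_pte_v(pte)) {
		/* The old mapping was valid; its translation may be cached. */
		PMAP_INVALIDATE_TLB(pmap, va, hadasm, isactive, needisync);
	}
	/*
	 * Otherwise nothing to do: an invalidated PTE is flushed from the
	 * TLB at invalidation time, and the PALcode never installs an
	 * invalid PTE into the TLB.
	 */
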
thorpej cfdf9a95ad Keep track of which CPUs are using a pmap by setting/clearing bits
in the pmap's CPU mask in pmap_activate()/pmap_deactivate().
1998-03-18 21:57:03 +00:00
thorpej 43614761e3 In cpu_exit() deactivate the address space before freeing the vmspace
structure.  We will continue to run on this context (which is the
global Lev1map at this point) right up until we switch to proc0's
context in switch_exit().
1998-03-18 20:38:07 +00:00
thorpej 87eb2cfded Don't call pmap_deactivate() if we jumped into the middle of cpu_switch()
from switch_exit(), since by this time, the vmspace will have already
been deactivated and freed.
1998-03-18 20:36:13 +00:00
thorpej b637a998f4 Add ASN housekeeping and a CPU mask to the pmap. 1998-03-18 19:39:23 +00:00
thorpej 961a955498 Move the "are we active" macros out of the header file. 1998-03-18 19:27:46 +00:00
thorpej d37acae24c Add DIAGNOSTIC checks for the kernel pmap in pmap_create_lev1map()
and pmap_destroy_lev1map().  Correct a comment in another DIAGNOSTIC
panic.
1998-03-18 19:21:50 +00:00
thorpej 06b49b8f3e Change a couple of assert()s to DIAGNOSTIC panics. 1998-03-18 19:12:57 +00:00
thorpej 438599b408 Correct a comment in pmap_bootstrap(). 1998-03-18 19:04:42 +00:00
thorpej 56e004c995 Pass the max ASN from the HWRPB to pmap_bootstrap(). 1998-03-18 19:02:49 +00:00
thorpej 426d2953f5 Add a macro to test if PG_ASM (Address Space Match) is set in a PTE. 1998-03-18 19:00:15 +00:00
bouyer 9f50fca1fd Add commented-out "options FFS_EI" 1998-03-18 16:34:41 +00:00
thorpej 9c1e8fc2ed Implement the PMAP_NEW interface for UVM. 1998-03-17 05:15:24 +00:00
thorpej 1477f77353 Properly depend on the PMAP_NEW option. 1998-03-17 05:00:18 +00:00
thorpej 6bbfd3e9ff Use pmap_kenter_pa() in _bus_dmamem_map() if PMAP_NEW. 1998-03-17 04:59:36 +00:00
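
A guess at what that conditional looks like inside the mapping loop of _bus_dmamem_map() (the variable names and exact arguments are assumptions):

	#if defined(PMAP_NEW)
		pmap_kenter_pa(va, addr, VM_PROT_READ | VM_PROT_WRITE);
	#else
		pmap_enter(pmap_kernel(), va, addr,
		    VM_PROT_READ | VM_PROT_WRITE, TRUE);
	#endif
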
thorpej f8cff5ab23 Add a software PTE bit that indicates that a va -> pa mapping was entered
in the physical->virtual list.
1998-03-17 04:53:43 +00:00
thorpej 00452b441f Move PTE-related constants here, and make them not depend on a hard-coded
page size (i.e. use the one initialized from the HWRPB at boot time).

Do a bit of cleanup while here, rendering old inherited constants obsolete.
1998-03-12 07:29:21 +00:00
thorpej 4d8723232d Garbage-collect a bunch of constants that were inherited, but are no
longer necessary or make sense.
1998-03-12 07:28:07 +00:00
thorpej bd3c0e36cf Garbage-collect this a bit. 1998-03-12 06:47:11 +00:00
thorpej 00a597fe92 Use vm_page_alloc1() and vm_page_free1() as appropriate. 1998-03-12 06:27:36 +00:00
thorpej 1f8d640c4b Bump maxusers to 64. 1998-03-12 06:04:47 +00:00
thorpej 6e6e2d7ebf Bump maxusers to 32. 1998-03-12 06:04:31 +00:00
thorpej d9a1f8ba36 Adjust the default and low-bound maxusers, now that the pmap can deal. 1998-03-12 06:04:14 +00:00
thorpej dfe0937a7e If not DEBUG, use the Virtual Page Table to get the PTE for kernel mappings
in pmap_enter() and pmap_emulate_reference().
1998-03-12 02:59:22 +00:00
thorpej 30766180a3 Nuke these; they are long-since obsolete. 1998-03-12 01:28:01 +00:00
thorpej 152a4bfa60 Increase the maximum userspace address to 4TB. Leave the stack at 8G
for now, but make a note that we might want to move it down to 4G later.
1998-03-12 01:25:52 +00:00
thorpej e046925c3a Massive cleanup and partial rewrite of the NetBSD/alpha pmap module.
Major change is that page table page management has been completely
rewritten.  Page tables are now accessed via K0SEG (no more KVA space
wasted on user page tables), and a much larger user address space is
supported.

Many thanks to Chris Demetriou and Ross Harvey for helpful insight and
debugging assistance.
1998-03-12 01:24:52 +00:00
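
Illustrative only: the central idea of the rewrite is that a page-table page can be touched through the K0SEG direct mapping rather than being mapped into kernel virtual address space. Roughly, given a page-table page's physical address:

	/* Hypothetical helper name; ALPHA_PHYS_TO_K0SEG() is the real macro. */
	static __inline pt_entry_t *
	pmap_ptpage_va(unsigned long ptpa)
	{
		return ((pt_entry_t *)ALPHA_PHYS_TO_K0SEG(ptpa));
	}
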
thorpej 900e1c90bd Nuke ALPHA_STSIZE and ALPHA_MAX_PTSIZE. Add macros to compute and operate
on segments mapped by L1 and L2 PTEs.
1998-03-12 01:21:21 +00:00
thorpej 7225aae835 Move check for user-pmap-still-using-Lev1map from pmap_enter_ptpage()
to pmap_enter().
1998-03-09 22:31:23 +00:00
thorpej daa9cfae50 Don't do the Segtabzero-for-dev-zero hack. 1998-03-09 20:43:28 +00:00
thorpej e456fc0538 Simplify/speed up pagemove() somewhat by using the Virtual Page Table. 1998-03-09 20:17:03 +00:00
thorpej 78a173bada Define VPT_INDEX(), which computes the index into the Virtual Page Table
of the PTE that maps the specified virtual address.

Thanks to Chris Demetriou and Ross Harvey for clarifying the VPT.
1998-03-09 19:57:57 +00:00
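
The idea is that, within the Virtual Page Table, a PTE's index is simply the virtual page number of the address it maps; a hedged sketch (VPT_MASK here is a hypothetical stand-in for whatever bounds the three-level table's reach):

	#define	VPT_INDEX(va)	(((unsigned long)(va) >> PGSHIFT) & VPT_MASK)
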
thorpej b3d7fd8f3f Just use vtophys() to get the PCB phys addr. 1998-03-07 04:20:45 +00:00
thorpej 9c236919e9 Rewrite pmap_extract(), and use it as appropriate in vtophys() rather
than (almost) duplicating the code.
1998-03-07 03:37:02 +00:00
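
A sketch of the relationship described above, with vtophys() handing anything that is not a K0SEG direct mapping to pmap_extract(); the return type and exact bounds checks are assumptions:

	unsigned long
	vtophys(unsigned long va)
	{
		/* K0SEG addresses translate by simple arithmetic. */
		if (va >= ALPHA_K0SEG_BASE && va <= ALPHA_K0SEG_END)
			return (ALPHA_K0SEG_TO_PHYS(va));

		/* Everything else goes through the kernel pmap. */
		return (pmap_extract(pmap_kernel(), va));
	}
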
thorpej cd7d081d02 Export a pointer to the Virtual Page Table. 1998-03-07 03:15:43 +00:00