Commit Graph

30378 Commits

Author SHA1 Message Date
leo
88efdc6951 Add support for the memory on the CT2 board. Info from Thomas Goirand. 1999-05-27 09:08:25 +00:00
mark
ccb5b09c4f Grr. Nuke the remaining allocsys() prototypes. 1999-05-27 09:08:09 +00:00
mark
99804cc3e7 Nuke old prototype of allocsys(). 1999-05-27 09:04:11 +00:00
nisimura
9b5cf7b4de - Save the MIPS1-only configuration for 3min, as pointed out by Erik Bertelsen
<erik@mediator.uni-c.dk>
1999-05-27 06:43:50 +00:00
ragge
f558ec11e2 Directory called qbus instead of uba, per request from Matt/Jason/...
(A more descriptive name, actually)
1999-05-27 03:45:21 +00:00
mrg
c346b0ace0 regen. 1999-05-27 03:05:32 +00:00
mrg
8c83c41668 add the UltraSPARC IIi PCI interface 1999-05-27 02:51:19 +00:00
nisimura
58cf81db34 - Change the symbolic name of the TLB EntryLo bit from 'PG_M' to 'PG_D' to reflect
the processor design.  The MIPS 'dirty bit' is not the same as the i386 'dirty bit',
and there is growing concern about misuse in NetBSD/mips.
1999-05-27 01:56:32 +00:00
thorpej
80de1e9903 Upon further investigation, in uvm_map_pageable(), entry->protection is the
right access_type to pass to uvm_fault_wire().  This way, if the entry has
VM_PROT_WRITE, and the entry is marked COW, the copy will happen immediately
in uvm_fault(), as if the access were performed.
1999-05-26 23:53:48 +00:00
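
A minimal sketch of the idea, using simplified stand-in types rather than the real UVM map entry and uvm_fault_wire() interfaces:

	/* Sketch only: simplified stand-ins, not the real UVM code. */
	#define VM_PROT_READ	0x01
	#define VM_PROT_WRITE	0x02

	struct sketch_entry {
		unsigned long	start, end;
		int		protection;	/* current protection */
		int		needs_copy;	/* entry is marked COW */
	};

	/*
	 * Wiring with access_type == entry->protection means that if the
	 * entry is writable and marked COW, the copy happens right here in
	 * the fault path, exactly as if the write access had been performed.
	 */
	static int
	wire_entry(struct sketch_entry *e,
	    int (*fault_wire)(unsigned long, unsigned long, int access_type))
	{
		return (*fault_wire)(e->start, e->end, e->protection);
	}
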
thorpej
beb8d06638 Generally update the comment above vunmapbuf(). 1999-05-26 22:19:33 +00:00
thorpej
a2d06a4721 Generally update the comment above the vmapbuf() implementations. 1999-05-26 22:07:36 +00:00
thorpej
6b655611b1 Wired kernel mappings are wired; pass VM_PROT_READ|VM_PROT_WRITE for
access_type to pmap_enter() to ensure that when these mappings are accessed,
possibly in interrupt context, they won't cause mod/ref emulation
page faults.
1999-05-26 19:27:49 +00:00
thorpej
985f839e63 Add a comment explaining why INTRSAFE isn't needed here. 1999-05-26 19:23:13 +00:00
thorpej
2580d306ab Change the vm_map's "entries_pageable" member to a r/o flags member, which
has PAGEABLE and INTRSAFE flags.  PAGEABLE now really means "pageable",
not "allocate vm_map_entry's from non-static pool", so update all map
creations to reflect that.  INTRSAFE maps are maps that are used in
interrupt context (e.g. kmem_map, mb_map), and thus use the static
map entry pool (XXX as does kernel_map, for now).  This will eventually
change how these maps are locked, as well.
1999-05-26 19:16:28 +00:00
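
A sketch of the flags layout this describes; the bit values and macro names below are assumptions, not the actual vm_map definitions:

	/* Sketch: assumed flag values and names. */
	#define SKETCH_MAP_PAGEABLE	0x01	/* entries may be paged out */
	#define SKETCH_MAP_INTRSAFE	0x02	/* map is used from interrupts */

	struct sketch_vm_map {
		int	flags;		/* read-only after map creation */
	};

	/* e.g. kmem_map and mb_map would carry SKETCH_MAP_INTRSAFE and draw
	 * their entries from the static entry pool; ordinary pageable maps
	 * would carry SKETCH_MAP_PAGEABLE instead. */
	#define SKETCH_MAP_IS_PAGEABLE(m)	(((m)->flags & SKETCH_MAP_PAGEABLE) != 0)
	#define SKETCH_MAP_IS_INTRSAFE(m)	(((m)->flags & SKETCH_MAP_INTRSAFE) != 0)
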
leo
48627f3c5d Make this compile again.  I assumed that the use of 'format_memory()' was
an error, so I made it use format_bytes().
1999-05-26 14:29:34 +00:00
thorpej
00a1f75cf6 In uvm_pagermapin(), pass VM_PROT_READ|VM_PROT_WRITE as access_type, to
ensure we don't take mod/ref emulation faults in an interrupt context
(e.g. during the i/o operation).  This is safe because:
	- For a pageout operation, the page is already known to be
	  modified, and the pagedaemon will pmap_clear_modify() after
	  the pageout has completed.
	- For a pagein operation, pagers must already pmap_clear_modify()
	  after the pagein operation is complete, because the i/o may have
	  been done with e.g. programmed i/o.
XXX It would be nice to know the i/o direction so that we can call
XXX pmap_enter() with only the protection and access_type necessary.
1999-05-26 06:42:57 +00:00
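
The rule in code form, as a hedged sketch; the real pmap_enter() call takes the pmap, wiring flag and more, so the stand-in below is simplified:

	/* Sketch only: simplified pmap_enter() stand-in. */
	#define VM_PROT_READ	0x01
	#define VM_PROT_WRITE	0x02

	static void
	sketch_pmap_enter(unsigned long va, unsigned long pa, int prot,
	    int access_type)
	{
		/* The real pmap would pre-set the mod/ref attributes based
		 * on access_type, so no emulation fault is needed later. */
		(void)va; (void)pa; (void)prot; (void)access_type;
	}

	static void
	pager_map_page(unsigned long va, unsigned long pa)
	{
		/* READ|WRITE access_type: the i/o may dirty the page, and it
		 * may run at interrupt time, where a mod/ref fault is fatal. */
		sketch_pmap_enter(va, pa, VM_PROT_READ | VM_PROT_WRITE,
		    VM_PROT_READ | VM_PROT_WRITE);
	}
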
cgd
5dd4815be9 uh, get the port name consistently correct (NetBSD/alpha, not NetBSD/Alpha) 1999-05-26 06:22:03 +00:00
nisimura
1922144399 - Remove now obsolete function declaration. 1999-05-26 04:27:27 +00:00
nisimura
d7a56fd7fa - Continue the branch merge, incrementally.
- Catch up with filename changes that had escaped being fixed.
1999-05-26 04:23:58 +00:00
ragge
12b6c6d04f DL-11 driver bus'ified. UNTESTED. 1999-05-26 02:01:49 +00:00
ragge
e805bfec25 DZ-11 routines bus'ified. Small fixes to uba routines. 1999-05-26 01:26:17 +00:00
thorpej
88214f79d2 Pass the appropriate access type for uvm_vslock() based on the FP operation:
fpread == VM_PROT_READ|VM_PROT_WRITE, fpwrite == VM_PROT_READ.
1999-05-26 01:09:02 +00:00
thorpej
701868a6c8 Pass the appropriate access_type to uvm_vslock() for the given physio
operation: B_READ == VM_PROT_READ|VM_PROT_WRITE, B_WRITE == VM_PROT_READ.
1999-05-26 01:08:03 +00:00
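
A hedged sketch of the mapping rule; the B_READ/B_WRITE values below are the traditional BSD buf flag values, assumed here:

	/* Sketch: assumed flag and protection values. */
	#define B_WRITE		0x00000000	/* pseudo flag: write to device */
	#define B_READ		0x00100000	/* read from device into memory */
	#define VM_PROT_READ	0x01
	#define VM_PROT_WRITE	0x02

	/*
	 * B_READ fills user memory from the device, so the user pages are
	 * written; B_WRITE only reads the user pages.
	 */
	static int
	physio_access_type(int bflags)
	{
		return (bflags & B_READ) ?
		    (VM_PROT_READ | VM_PROT_WRITE) : VM_PROT_READ;
	}
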
thorpej
497248ca55 XXX Pass VM_PROT_NONE to uvm_vslock() as access_type. Why are we even
vslocking here?!  copyout() on its own seems to suffice just about everywhere
else, and it's not like the process is going to exit; it's in a system
call!
1999-05-26 01:07:06 +00:00
thorpej
b2e9c635ec Pass an access_type to uvm_vslock(). 1999-05-26 01:05:24 +00:00
thorpej
40aacb6eb5 No longer need to pmap_modified_emulation() in cpu_swapin(), since
uvm_fault_wire() does the right thing with access_type.

XXX Was there a bug here?  pmap_modified_emulation() *wasn't* done in
XXX cpu_fork()!!
1999-05-26 00:40:20 +00:00
thorpej
32de988d29 No longer need to pmap_emulate_reference() in cpu_fork() or cpu_swapin(),
since uvm_fault_wire() does the right thing with access_type.
1999-05-26 00:37:40 +00:00
thorpej
7b4db806b6 In uvm_map_pageable(), pass VM_PROT_NONE as access type to uvm_fault_wire()
for now.  XXX This needs to be reexamined.
1999-05-26 00:36:53 +00:00
thorpej
9d0ea0969e - uvm_fork()/uvm_swapin(): pass VM_PROT_READ|VM_PROT_WRITE access_type
  to uvm_fault_wire(), to guarantee that the kernel stacks will not
  cause even a mod/ref emulation fault.
- uvm_vslock(): pass VM_PROT_NONE until this function is updated.
1999-05-26 00:33:52 +00:00
thorpej
195c1a2741 Pass an access_type to uvm_fault_wire(), which it forwards on to
uvm_fault().
1999-05-26 00:32:42 +00:00
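
A sketch of the interface change; the type and parameter names are simplified, not the real uvm_extern.h prototypes:

	/* Sketch of the signature change (simplified types). */
	struct sketch_map;

	/* Before: the wire request said nothing about how the memory would
	 * be accessed:
	 *   int uvm_fault_wire(struct sketch_map *, unsigned long, unsigned long);
	 *
	 * After: the caller states the access type, and uvm_fault_wire()
	 * forwards it to uvm_fault(), which can then resolve COW and
	 * mod/ref emulation up front. */
	int uvm_fault_wire(struct sketch_map *map, unsigned long start,
	    unsigned long end, int access_type);
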
thorpej
5b897f96c5 msgbuf doesn't need VM_PROT_EXEC. 1999-05-25 23:31:14 +00:00
thorpej
ed0aee1190 When mapping bus space, we can use pmap_kenter_pa(), since the pages
are never managed.
1999-05-25 23:30:27 +00:00
thorpej
774ceea703 msgbuf doesn't need VM_PROT_EXEC, biostramp does. 1999-05-25 23:19:00 +00:00
thorpej
5832084eaf bus_dmamem_map() maps DMA safe memory, which is usually one or more
managed pages, into KVA space.  Since the pages are managed, we should
use pmap_enter(), not pmap_kenter_pa().

Also, when entering the mappings, enter with an access_type of
VM_PROT_READ | VM_PROT_WRITE.  We do this for a couple of reasons:

	(1) On systems that have H/W mod/ref attributes, the hardware
	    may not be able to track mod/ref done by a bus master.

	(2) On systems that have to do mod/ref emulation, this prevents
	    a mod/ref page fault from potentially happening while in an
	    interrupt context, which can be problematic.

This latter change is fairly important if we ever want to be able to
transfer DMA-safe memory pages to anonymous memory objects; we will need
to know that the pages are modified, or else data could be lost!

Note that while the pages are unowned (i.e. "just DMA-safe memory pages"),
they won't consume any swap resources, as the mappings are wired, and
the pages aren't on the active or inactive queues.
1999-05-25 23:14:03 +00:00
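
A hedged sketch of the managed vs. unmanaged distinction the commit relies on; the stand-in calls below are simplified versions of pmap_enter() and pmap_kenter_pa():

	/* Sketch only: simplified stand-ins for the pmap calls. */
	#define VM_PROT_READ	0x01
	#define VM_PROT_WRITE	0x02

	static void sketch_enter(unsigned long va, unsigned long pa, int prot,
	    int access_type)			/* pmap_enter()-like */
	{ (void)va; (void)pa; (void)prot; (void)access_type; }

	static void sketch_kenter(unsigned long va, unsigned long pa, int prot)
	{ (void)va; (void)pa; (void)prot; }	/* pmap_kenter_pa()-like */

	static void
	map_dma_page(int managed, unsigned long va, unsigned long pa)
	{
		if (managed) {
			/* Managed page: track it in the pv lists, and pre-set
			 * mod/ref via access_type so neither a bus master nor
			 * an interrupt-time access needs mod/ref emulation. */
			sketch_enter(va, pa, VM_PROT_READ | VM_PROT_WRITE,
			    VM_PROT_READ | VM_PROT_WRITE);
		} else {
			/* Unmanaged page: a bare kernel mapping suffices. */
			sketch_kenter(va, pa, VM_PROT_READ | VM_PROT_WRITE);
		}
	}
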
thorpej
f197609d93 Add some DIAGNOSTIC checks for inconsistent use of pmap_enter/pmap_kremove. 1999-05-25 20:33:33 +00:00
thorpej
986c3eca39 Fix some major locking protocol issues related to pmap_kremove() having
to deal with PG_PVLIST mappings; it no longer has to.  Add some DIAGNOSTIC
checks for inconsistent use of pmap_enter/pmap_kremove.
1999-05-25 20:32:29 +00:00
thorpej
0ff8d3ac1a Define a new kernel object type, "intrsafe", which is used for objects
that can be used in an interrupt context.  Use pmap_kenter*() and
pmap_kremove() only for mappings owned by these objects.

Fixes some locking protocol issues related to MP support, and eliminates
all of the pmap_enter vs. pmap_kremove inconsistencies.
1999-05-25 20:30:08 +00:00
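
A sketch of the object flag and the pmap-call selection it implies; the flag values and field names are assumptions:

	/* Sketch: assumed flag values and field names. */
	#define SKETCH_OBJ_KERNEL	0x01	/* kernel object */
	#define SKETCH_OBJ_INTRSAFE	0x02	/* usable in interrupt context */

	struct sketch_uvm_object {
		int	flags;
	};

	/* Only mappings owned by intrsafe objects take the pmap_kenter*() /
	 * pmap_kremove() path; everything else goes through pmap_enter() /
	 * pmap_remove(), which keeps the two families consistent. */
	#define SKETCH_OBJ_IS_KERNEL(o)		(((o)->flags & SKETCH_OBJ_KERNEL) != 0)
	#define SKETCH_OBJ_IS_INTRSAFE(o)	(((o)->flags & SKETCH_OBJ_INTRSAFE) != 0)
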
nisimura
fec53c08d6 - Use dev_name2blk[] array prepared by config(8) instead of handcrafting
local data.
1999-05-25 09:32:27 +00:00
nisimura
058f5a1517 - Continue to import a development branch.
- Minor formatting fix.
1999-05-25 07:37:08 +00:00
nisimura
014ba724c0 - Rework the spl(9) implementation.  The _spl*() processor-mask manipulating
routines now reside in locore.S.  No functional difference is expected.
- Replace splx() abuse with _splset() to change the MIPS processor
interrupt mask bits.  The 'mips/trap.c' side will be fixed soon.
1999-05-25 04:17:57 +00:00
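
A hedged sketch of the difference between restoring a saved level with splx() and installing an absolute mask with a _splset()-style routine; the semantics shown here are assumed, not the locore.S implementation:

	/* Sketch: assumed semantics only. */
	static int sketch_mask;			/* current interrupt mask */

	/* splx() is meant to restore a level previously returned by a
	 * raise (splhigh(), spltty(), ...), not to install arbitrary bits. */
	static void
	sketch_splx(int saved)
	{
		sketch_mask = saved;
	}

	/* _splset() installs an absolute processor mask and returns the old
	 * one, which is what the MIPS interrupt code here actually wants. */
	static int
	sketch_splset(int mask)
	{
		int old = sketch_mask;

		sketch_mask = mask;
		return old;
	}
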
thorpej
789c9e7c48 Add a comment explaining why using pmap_kenter_pa() is safe here. 1999-05-25 01:34:13 +00:00
tron
bb5689beb3 Only attempt to remove the symbol table from DDB's list of symbol tables
if we really loaded one.
1999-05-25 00:16:08 +00:00
thorpej
85f8d1343c Macro'ize the test for "object is a kernel object". 1999-05-25 00:09:00 +00:00
thorpej
9b731fd45c Remove a comment in uvm_pager_dropcluster() about PMAP_NEW and mod/ref
attributes for the page; it no longer applies, since we don't use
pmap_kenter_pgs() anymore.
1999-05-24 23:36:23 +00:00
thorpej
7becac6b9a Don't use pmap_kenter_pgs() for entering pager_map mappings. The pages
are still owned by the object which is paging, and so the test for a kernel
object in uvm_unmap_remove() will cause pmap_remove() to be used instead
of pmap_kremove().

This was a MAJOR source of pmap_remove() vs pmap_kremove() inconsistency
(which caused the busted kernel pmap statistics, and a cause of much
locking hair on MP systems).
1999-05-24 23:30:44 +00:00
tron
a602deb000 Silently ignore attribute modifications on directories.  Fixes
PR kern/7630 by Markus Kurek.
1999-05-24 23:01:13 +00:00
thorpej
cba22525ce Fix some broken packet length checks. Really (no, I mean really) works now
after the ether_input() changes -- tested on my Quadra 650.
1999-05-24 21:53:42 +00:00
ragge
7fb0d17b38 First step towards MI Unibus/Q22 bus code. 1999-05-24 20:12:57 +00:00
tron
770a66b9a8 Fix kernel builds if ppp interface but no bpf filters are configured.
Patch supplied by Takahiro Kambe in PR kern/7639, also fixes PR kern/7632
by Bjoern Labitzke.
1999-05-24 20:12:10 +00:00
thorpej
5dec34efed The kernel pmap can be accessed (and locked!) while in an interrupt
context, so we must block interrupts which may cause memory allocation
before asserting the kernel pmap's lock.  Put this all in PMAP_LOCK()
and PMAP_UNLOCK() macros to make it easier.
1999-05-24 20:11:58 +00:00
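
A minimal sketch of what such macros could look like; the spl level, lock primitive and field names are assumptions, not the real pmap code:

	/* Sketch only: stand-in primitives, not the real spl/simplelock code. */
	struct sketch_pmap {
		int	is_kernel;	/* this is the kernel pmap */
		int	lock;		/* stands in for a simplelock */
	};

	static int  sketch_spl_raise(void)	{ return 0; }	/* e.g. splimp() */
	static void sketch_spl_restore(int s)	{ (void)s; }	/* e.g. splx(s) */
	static void sketch_lock(int *l)		{ *l = 1; }
	static void sketch_unlock(int *l)	{ *l = 0; }

	/* Interrupt handlers may allocate memory and hence touch the kernel
	 * pmap, so block them before taking the kernel pmap's lock. */
	#define PMAP_LOCK(pm, s)						\
	do {									\
		if ((pm)->is_kernel)						\
			(s) = sketch_spl_raise();				\
		sketch_lock(&(pm)->lock);					\
	} while (0)

	#define PMAP_UNLOCK(pm, s)						\
	do {									\
		sketch_unlock(&(pm)->lock);					\
		if ((pm)->is_kernel)						\
			sketch_spl_restore(s);					\
	} while (0)
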