Commit Graph

206 Commits

Author SHA1 Message Date
minoura
ff8fb3ef82 Remove extra ]. 1999-06-16 17:25:39 +00:00
thorpej
ee9703dea9 Add a macro to test if a map entry is wired. 1999-06-16 00:29:04 +00:00
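
A minimal sketch of such a macro, assuming the conventional definition (an entry counts as wired when its wired_count is non-zero; the actual name in uvm_map.h may differ):

    /* sketch: a map entry is wired iff its wire count is non-zero */
    #define VM_MAPENT_ISWIRED(entry)    ((entry)->wired_count != 0)
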
thorpej
c5a43ae10c Several changes, developed and tested concurrently:
* Provide POSIX 1003.1b mlockall(2) and munlockall(2) system calls.
  MCL_CURRENT is presently implemented.  MCL_FUTURE is not fully
  implemented.  Also, the same one-unlock-for-every-lock caveat
  currently applies here as it does to mlock(2).  This will be
  addressed in a future commit.
* Provide the mincore(2) system call, with the same semantics as
  Solaris.
* Clean up the error recovery in uvm_map_pageable().
* Fix a bug where a process would hang if attempting to mlock a
  zero-fill region where none of the pages in that region are resident.
  [ This fix has been submitted for inclusion in 1.4.1 ]
1999-06-15 23:27:47 +00:00
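
A hedged userland sketch of the new interfaces (POSIX mlockall/munlockall plus Solaris-style mincore; error handling omitted):

    #include <sys/mman.h>
    #include <unistd.h>

    int
    main(void)
    {
            size_t len = (size_t)sysconf(_SC_PAGESIZE);
            char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                MAP_ANON | MAP_PRIVATE, -1, 0);
            char vec[1];

            mlockall(MCL_CURRENT);  /* wire all currently-mapped pages */
            mincore(p, len, vec);   /* vec[0] & 1 => page resident */
            munlockall();           /* one-unlock-per-lock caveat applies */
            return 0;
    }
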
thorpej
45e5ec934e Use a more descriptive wait message for VM map locks. 1999-06-14 22:05:23 +00:00
thorpej
5de7bac9b1 Print the map's flags in "show map" from DDB. 1999-06-07 16:31:42 +00:00
thorpej
2c3dc83a64 Keep interrupt-safe maps on an additional queue. In uvm_fault(), if we're
looking up a kernel address, check to see if the address is on this
"interrupt-safe" list.  If so, return failure immediately.  This prevents
a locking screw-up if a page fault is taken on an interrupt-safe map in or
out of interrupt context.
1999-06-04 23:38:41 +00:00
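
A hedged sketch of the check described (list and field names hypothetical):

    /*
     * sketch: in uvm_fault(), refuse to handle a fault on any
     * interrupt-safe map; such a fault is a bug, and trying to take
     * the map's lock here could deadlock.
     */
    for (m = vmi_list.lh_first; m != NULL; m = m->vmi_list.le_next)
            if (va >= vm_map_min(m) && va < vm_map_max(m))
                    return (KERN_FAILURE);  /* fatal: no fault allowed */
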
thorpej
8c59c67288 Just say no to interrupt-safe maps. 1999-06-03 00:05:45 +00:00
thorpej
acf81da21e A page fault on a non-pageable map is always fatal. 1999-06-02 23:26:21 +00:00
thorpej
779ecdd773 Simplify the last change even more; we downgraded to a shared (read) lock, so
setting recursive has no effect!  The kernel lock manager doesn't allow
an exclusive recursion into a shared lock.  This situation must simply
be avoided.  The only place where this might be a problem is the (ab)use
of uvm_map_pageable() in the Utah-derived pmaps for m68k (they should
either toss the iffy scheme they use completely, or use something like
uvm_fault_wire()).

In addition, once we have looped over uvm_fault_wire(), only upgrade to
an exclusive (write) lock if we need to modify the map again (i.e.
wiring a page failed).
1999-06-02 22:40:51 +00:00
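
A hedged sketch of the resulting locking pattern; vm_map_downgrade() and vm_map_upgrade() are illustrative wrappers around the lockmgr(9) LK_DOWNGRADE/LK_UPGRADE operations:

    vm_map_lock(map);               /* exclusive (write) lock */
    /* ... mark the entries in the range as wired ... */
    vm_map_downgrade(map);          /* exclusive -> shared (read) */
    /* ... loop calling uvm_fault_wire() on each entry ... */
    if (failed) {
            /* upgrade only if the map must be modified again */
            vm_map_upgrade(map);    /* shared -> exclusive */
            /* undo: drop the wiring count on the remaining entries */
    }
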
thorpej
0723d57281 Clean up the locking mess in uvm_map_pageable() a little... Most importantly,
don't unlock a kernel map (!!!) and then relock it later; a recursive lock,
as is used in the user map case, is fine.  Also, don't change map entries
while only holding a read lock on the map.  Instead, if we fail to wire
a page, clear recursive locking, and upgrade back to a write lock before
dropping the wiring count on the remaining map entries.
1999-06-02 21:23:08 +00:00
mrg
2332079d3f unlock the map for unknown arguments to uvm_map_advise. from Soren S. Jorvang in PR kern/7681 1999-05-31 23:36:23 +00:00
thorpej
fb36fe649a A little spring cleaning in the unwire case of uvm_map_pageable(). 1999-05-28 22:54:12 +00:00
thorpej
8d8badbd8f Make uvm_fault_unwire() take a vm_map_t, rather than a pmap_t, for
consistency.  Use this opportunity for checking for intrsafe map use
in this routine (which is illegal).
1999-05-28 20:49:51 +00:00
thorpej
108b13d5a9 Make "intrsafe" maps locked only by exclusive spin locks, never sleep
locks (and thus, never shared locks).  Move the "set/clear recursive"
functions to uvm_map.c, which is the only place they're used (and
they should go away anyhow).  Delete some unused cruft.
1999-05-28 20:31:42 +00:00
thorpej
5920638afe Change the main comment block to indicate why PMAP_NEW (specifically,
pmap_kenter*()) is not required for {O,A}->K page loans.
1999-05-27 21:50:03 +00:00
thorpej
80de1e9903 Upon further investigation, in uvm_map_pageable(), entry->protection is the
right access_type to pass to uvm_fault_wire().  This way, if the entry has
VM_PROT_WRITE, and the entry is marked COW, the copy will happen immediately
in uvm_fault(), as if the access were performed.
1999-05-26 23:53:48 +00:00
thorpej
6b655611b1 Wired kernel mappings are wired; pass VM_PROT_READ|VM_PROT_WRITE for
access_type to pmap_enter() to ensure that when these mappings are accessed,
possibly in interrupt context, that they won't cause mod/ref emulation
page faults.
1999-05-26 19:27:49 +00:00
thorpej
2580d306ab Change the vm_map's "entries_pageable" member to a r/o flags member, which
has PAGEABLE and INTRSAFE flags.  PAGEABLE now really means "pageable",
not "allocate vm_map_entry's from non-static pool", so update all map
creations to reflect that.  INTRSAFE maps are maps that are used in
interrupt context (e.g. kmem_map, mb_map), and thus use the static
map entry pool (XXX as does kernel_map, for now).  This will eventually
change how these maps are locked, as well.
1999-05-26 19:16:28 +00:00
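
A hedged sketch of the new flags (values illustrative):

    /* sketch: vm_map.flags bits replacing entries_pageable */
    #define VM_MAP_PAGEABLE 0x01    /* ro: entries are pageable */
    #define VM_MAP_INTRSAFE 0x02    /* ro: used in interrupt context */
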
thorpej
00a1f75cf6 In uvm_pagermapin(), pass VM_PROT_READ|VM_PROT_WRITE as access_type, to
ensure we don't take mod/ref emulation faults in an interrupt context
(e.g. during the i/o operation).  This is safe because:
	- For a pageout operation, the page is already known to be
	  modified, and the pagedaemon will pmap_clear_modify() after
	  the pageout has completed.
	- For a pagein operation, pagers must already pmap_clear_modify()
	  after the pagein operation is complete, because the i/o may have
	  been done with e.g. programmed i/o.
XXX It would be nice to know the i/o direction so that we can call
XXX pmap_enter() with only the protection and access_type necessary.
1999-05-26 06:42:57 +00:00
thorpej
b2e9c635ec Pass an access_type to uvm_vslock(). 1999-05-26 01:05:24 +00:00
thorpej
7b4db806b6 In uvm_map_pageable(), pass VM_PROT_NONE as access type to uvm_fault_wire()
for now.  XXX This needs to be reexamined.
1999-05-26 00:36:53 +00:00
thorpej
9d0ea0969e - uvm_fork()/uvm_swapin(): pass VM_PROT_READ|VM_PROT_WRITE access_type
  to uvm_fault_wire(), to guarantee that the kernel stacks will not
  cause even a mod/ref emulation fault.
- uvm_vslock(): pass VM_PROT_NONE until this function is updated.
1999-05-26 00:33:52 +00:00
thorpej
195c1a2741 Pass an access_type to uvm_fault_wire(), which it forwards on to
uvm_fault().
1999-05-26 00:32:42 +00:00
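
The shape of the change, as a hedged sketch:

    /*
     * sketch: uvm_fault_wire() now takes the access type and hands
     * it to uvm_fault() for each page in the range being wired.
     */
    int     uvm_fault_wire(vm_map_t map, vaddr_t start, vaddr_t end,
                vm_prot_t access_type);
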
thorpej
0ff8d3ac1a Define a new kernel object type, "intrsafe", which is used for objects
that can be used in an interrupt context.  Use pmap_kenter*() and
pmap_kremove() only for mappings owned by these objects.

Fixes some locking protocol issues related to MP support, and eliminates
all of the pmap_enter vs. pmap_kremove inconsistencies.
1999-05-25 20:30:08 +00:00
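
A hedged sketch of the resulting rule when tearing down a mapping (macro name as used in the surrounding commits):

    /*
     * sketch: mappings owned by kernel/intrsafe objects were entered
     * with pmap_kenter*(), so they must leave via pmap_kremove();
     * everything else goes through pmap_remove().
     */
    if (UVM_OBJ_IS_KERN_OBJECT(entry->object.uvm_obj))
            pmap_kremove(start, end - start);
    else
            pmap_remove(map->pmap, start, end);
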
thorpej
789c9e7c48 Add a comment explaining why using pmap_kenter_pa() is safe here. 1999-05-25 01:34:13 +00:00
thorpej
85f8d1343c Macro'ize the test for "object is a kernel object". 1999-05-25 00:09:00 +00:00
thorpej
9b731fd45c Remove a comment in uvm_pager_dropcluster() about PMAP_NEW and mod/ref
attributes for the page; it no longer applies, since we don't use
pmap_kenter_pgs() anymore.
1999-05-24 23:36:23 +00:00
thorpej
7becac6b9a Don't use pmap_kenter_pgs() for entering pager_map mappings. The pages
are still owned by the object which is paging, and so the test for a kernel
object in uvm_unmap_remove() will cause pmap_remove() to be used instead
of pmap_kremove().

This was a MAJOR source of pmap_remove() vs pmap_kremove() inconsistency
(which caused the busted kernel pmap statistics, and a cause of much
locking hair on MP systems).
1999-05-24 23:30:44 +00:00
thorpej
6eb9ee7cd8 - Change uvm_{lock,unlock}_fpageq() to return/take the previous interrupt
level directly, instead of making the caller wrap the calls in
  splimp()/splx().
- Add a comment documenting that interrupts that cause memory allocation
  must be blocked while the free page queue is locked.

Since interrupts must be blocked while this lock is asserted, tying them
together like this helps to prevent mistakes.
1999-05-24 19:10:57 +00:00
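
A hedged sketch of the new calling convention:

    int s;

    s = uvm_lock_fpageq();  /* raises spl (splimp) + takes the lock */
    /* ... work on the free page queue ... */
    uvm_unlock_fpageq(s);   /* drops the lock + splx(s) */
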
mrg
f1f95c374b implement madvise() for MADV_{NORMAL,RANDOM,SEQUENTIAL}; others are not yet done. 1999-05-23 06:27:13 +00:00
thorpej
f311a1c308 Make a slight modification of pmap_growkernel() -- it now returns the
end of the mappable kernel virtual address space.  Previously, it would
get called more often than necessary, because the caller only knew what
was requested.

Also, export uvm_maxkaddr so that uvm_pageboot_alloc() can grow the
kernel pmap if necessary, as well.  Note that pmap_growkernel() must
now be able to handle being called before pmap_init().
1999-05-20 23:03:23 +00:00
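
A hedged sketch of the caller side after this change:

    /*
     * sketch: grow the kernel pmap only when a request runs past
     * the end of the currently-mappable kernel virtual address space.
     */
    if (new_kva_end > uvm_maxkaddr)
            uvm_maxkaddr = pmap_growkernel(new_kva_end);
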
thorpej
1d197b8e7b If we run out of virtual space in uvm_pageboot_alloc(), fail gracefully
rather than unpredictably.
1999-05-20 20:07:55 +00:00
chs
a5d3e8dae9 when wiring swap-backed pages, clear the PG_CLEAN flag before
releasing any swap resources.  if we don't do this, we can
end up with a clean, swap-backed page, which is illegal.
tracked down by Bill Sommerfeld, fixes PR 7578.
1999-05-19 06:14:15 +00:00
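
The ordering constraint, as a hedged sketch:

    /*
     * sketch: dirty the page *before* the swap slot is freed, so a
     * clean swap-backed page with no backing store can never exist.
     */
    pg->flags &= ~PG_CLEAN;
    uvm_swap_free(slot, 1);         /* now safe to drop the slot */
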
thorpej
c10a926030 Allow the caller to specify a stack for the child process. If NULL,
the child inherits the stack pointer from the parent (traditional
behavior).  Like the signal stack, the stack area is specified as
a low address and a size; machine-dependent code accounts for stack
direction.

This is required for clone(2).
1999-05-13 21:58:32 +00:00
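
A hedged sketch of how machine-dependent code might derive the initial stack pointer from the (low address, size) pair:

    /* sketch: MD code accounts for stack growth direction */
    #ifdef MACHINE_STACK_GROWS_UP
            sp = stack_base;                /* grows toward high addresses */
    #else
            sp = stack_base + stack_size;   /* usual case: grows down */
    #endif
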
thorpej
f5108f64e7 Add an optional pmap hook, pmap_fork(), to be called at the end of
uvmspace_fork().

pmap_fork() is used to "fork a pmap", that is copy data from one pmap
to the other that is NOT related to actual mappings in the pmap, but is
otherwise logically coupled to the address space.
1999-05-12 19:11:23 +00:00
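
The hook's shape, as a hedged sketch:

    /*
     * sketch: an optional MD hook, run at the end of uvmspace_fork(),
     * to copy pmap state that is not tied to individual mappings.
     */
    #ifdef PMAP_FORK
    void    pmap_fork(pmap_t src_pmap, pmap_t dst_pmap);
    #endif
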
mrg
fc7c17462c fix some formatting foo. 1999-05-03 09:08:28 +00:00
mrg
e378d35ade remove now-wrong comments. formatting nits. 1999-05-03 08:57:42 +00:00
mrg
c2f7cb3c4e remove now-wrong comment. formatting nit. 1999-05-03 08:53:24 +00:00
thorpej
2835fc6e46 Pull signal actions out of struct user, make them a separate proc
substructure, and allow them to be shared.

Required for clone(2).
1999-04-30 21:23:49 +00:00
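
A hedged sketch of the shared substructure (fields illustrative):

    /*
     * sketch: signal actions pulled out of struct user so that
     * clone(2)'d processes can share a single copy.
     */
    struct sigacts {
            struct sigaction ps_sigact[NSIG];  /* signal dispositions */
            int     ps_refcnt;                 /* processes sharing this */
    };
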
chs
69ead14e9b in uvm_map_extract(), handle the case where the map entry being extracted
is large enough to cause the end address of the new entry to overflow.
1999-04-19 14:43:46 +00:00
chs
f455dd6596 add a `flags' argument to uvm_pagealloc_strat().
define a flag UVM_PGA_USERESERVE to allow non-kernel object
allocations to use pages from the reserve.
use the new flag for allocations in pmap modules.
1999-04-11 04:04:04 +00:00
drochner
3d6e675ba8 sanity: use ';' to separate statements 1999-04-08 10:26:21 +00:00
chs
039c17eca9 remove some old #if 0'd-out debugging code. 1999-03-30 16:07:47 +00:00
mycroft
99b341de15 Adjust a comparison so that the pagedaemon doesn't get stuck ping-ponging with
a process trying to allocate memory.
1999-03-30 10:12:01 +00:00
mycroft
671c65c6da Duuuh. Back and front pages should have an access_type of 0, since we don't
know they're going to be used.  What was I thinking??
1999-03-29 05:43:31 +00:00
mycroft
0ce76ca08b Reduce the access_type for copy-on-write pages in the front and back regions. 1999-03-28 21:48:50 +00:00
mycroft
8ed77cabd0 Fix a case I missed in the previous. 1999-03-28 21:01:25 +00:00
mycroft
4831b815f5 Only turn off VM_PROT_WRITE for COW pages; not VM_PROT_EXECUTE. 1999-03-28 19:53:49 +00:00
mycroft
31a2536cd0 Add a new `access type' argument to pmap_enter(). This indicates what type of
memory access a mapping was caused by.  This is passed through from uvm_fault()
and udv_fault(), and in most other cases is 0.
The pmap module may use this to preset R/M information.  On MMUs which require
R/M emulation, the implementation may preset the bits and avoid taking another
fault.  On MMUs which keep R/M information in hardware, the implementation may
preset its cached bits to speed up the next call to pmap_is_modified() or
pmap_is_referenced().
1999-03-26 21:58:39 +00:00
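
The new interface, as a hedged sketch of the signature at this point in time:

    /*
     * sketch: access_type is the access that caused the mapping
     * (VM_PROT_*), or 0 when no real access has happened yet; the
     * pmap may use it to preset referenced/modified state.
     */
    void    pmap_enter(pmap_t pmap, vaddr_t va, paddr_t pa,
                vm_prot_t prot, boolean_t wired, vm_prot_t access_type);
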
chs
d97d75d81b add uvmexp.swpgonly and use it to detect out-of-swap conditions. 1999-03-26 17:34:15 +00:00
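
A hedged sketch of the out-of-swap test this statistic enables:

    /*
     * sketch: when every swap page in use holds the only copy of its
     * data, no swap slot can be freed; treat that as out of swap.
     */
    if (uvmexp.swpgonly == uvmexp.swpages) {
            /* out of swap: reclaiming swap slots cannot help */
    }
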