Commit Graph

252 Commits

Author SHA1 Message Date
thorpej
5de7bac9b1 Print the maps flags in "show map" from DDB. 1999-06-07 16:31:42 +00:00
thorpej
2c3dc83a64 Keep interrupt-safe maps on an additional queue. In uvm_fault(), if we're
looking up a kernel address, check to see if the address is on this
"interrupt-safe" list.  If so, return failure immediately.  This prevents
a locking screw-up if a page fault is taken on an interrupt-safe map in or
out of interrupt context.
1999-06-04 23:38:41 +00:00
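
A minimal sketch of the check this commit describes, assuming a simple list of interrupt-safe maps; the vmi_* identifiers are illustrative, not the actual NetBSD names:

    /* Walk the interrupt-safe map list; if a faulting kernel address
     * falls inside one, uvm_fault() fails immediately instead of
     * risking a lock it may already hold in interrupt context. */
    struct vm_map *
    uvm_intrsafe_lookup(vaddr_t va)
    {
        struct vm_map_intrsafe *vmi;    /* assumed list element type */

        for (vmi = LIST_FIRST(&vmi_list); vmi != NULL;
            vmi = LIST_NEXT(vmi, vmi_q)) {
            if (va >= vm_map_min(vmi->vmi_map) &&
                va < vm_map_max(vmi->vmi_map))
                return (vmi->vmi_map);  /* caller returns failure */
        }
        return (NULL);
    }
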
thorpej
8c59c67288 Just say no to interrupt-safe maps. 1999-06-03 00:05:45 +00:00
thorpej
acf81da21e A page fault on a non-pageable map is always fatal. 1999-06-02 23:26:21 +00:00
thorpej
779ecdd773 Simplify the last change even more; we downgraded to a shared (read) lock, so
setting recursive has no effect!  The kernel lock manager doesn't allow
an exclusive recursion into a shared lock.  This situation must simply
be avoided.  The only place where this might be a problem is the (ab)use
of uvm_map_pageable() in the Utah-derived pmaps for m68k (they should
either toss the iffy scheme they use completely, or use something like
uvm_fault_wire()).

In addition, once we have looped over uvm_fault_wire(), only upgrade to
an exclusive (write) lock if we need to modify the map again (i.e.
wiring a page failed).
1999-06-02 22:40:51 +00:00
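
A hedged sketch of the resulting locking discipline in uvm_map_pageable(); the downgrade/upgrade wrappers are assumed to map onto the kernel lock manager:

    vm_map_lock(map);             /* exclusive (write) lock */
    /* pass 1: bump wiring counts on the affected entries */
    vm_map_downgrade(map);        /* shared (read) lock; no recursion flag */
    /* pass 2: loop over uvm_fault_wire() under the read lock */
    if (failed) {
        vm_map_upgrade(map);      /* exclusive again: map must be modified */
        /* drop wiring counts on the remaining entries */
        vm_map_unlock(map);
    } else
        vm_map_unlock_read(map);  /* assumed read-unlock wrapper */
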
thorpej
0723d57281 Clean up the locking mess in uvm_map_pageable() a little... Most importantly,
don't unlock a kernel map (!!!) and then relock it later; a recursive lock,
as is used in the user map case, is fine.  Also, don't change map entries
while only holding a read lock on the map.  Instead, if we fail to wire
a page, clear recursive locking, and upgrade back to a write lock before
dropping the wiring count on the remaining map entries.
1999-06-02 21:23:08 +00:00
mrg
2332079d3f unlock the map for unknown arguments to uvm_map_advise. from Soren S. Jorvang in PR kern/7681 1999-05-31 23:36:23 +00:00
thorpej
fb36fe649a A little spring cleaning in the unwire case of uvm_map_pageable(). 1999-05-28 22:54:12 +00:00
thorpej
8d8badbd8f Make uvm_fault_unwire() take a vm_map_t, rather than a pmap_t, for
consistency.  Use this opportunity to check for intrsafe map use
in this routine (which is illegal).
1999-05-28 20:49:51 +00:00
thorpej
108b13d5a9 Make "intrsafe" maps locked only by exclusive spin locks, never sleep
locks (and thus, never shared locks).  Move the "set/clear recursive"
functions to uvm_map.c, which is the only place they're used (and
they should go away anyhow).  Delete some unused cruft.
1999-05-28 20:31:42 +00:00
thorpej
5920638afe Change the main comment block to indicate why PMAP_NEW (specifically,
pmap_kenter*()) is not required for {O,A}->K page loans.
1999-05-27 21:50:03 +00:00
thorpej
80de1e9903 Upon further investigation, in uvm_map_pageable(), entry->protection is the
right access_type to pass to uvm_fault_wire().  This way, if the entry has
VM_PROT_WRITE, and the entry is marked COW, the copy will happen immediately
in uvm_fault(), as if the access were performed.
1999-05-26 23:53:48 +00:00
thorpej
6b655611b1 Wired kernel mappings are wired; pass VM_PROT_READ|VM_PROT_WRITE for
access_type to pmap_enter() to ensure that when these mappings are accessed,
possibly in interrupt context, that they won't cause mod/ref emulation
page faults.
1999-05-26 19:27:49 +00:00
thorpej
2580d306ab Change the vm_map's "entries_pageable" member to a r/o flags member, which
has PAGEABLE and INTRSAFE flags.  PAGEABLE now really means "pageable",
not "allocate vm_map_entry's from non-static pool", so update all map
creations to reflect that.  INTRSAFE maps are maps that are used in
interrupt context (e.g. kmem_map, mb_map), and thus use the static
map entry pool (XXX as does kernel_map, for now).  This will eventually
change how these maps are locked, as well.
1999-05-26 19:16:28 +00:00
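
A minimal sketch of the new read-only flags word (flag values illustrative; spellings follow the commit text):

    #define VM_MAP_PAGEABLE  0x01   /* entries may be paged out */
    #define VM_MAP_INTRSAFE  0x02   /* map is used in interrupt context */

    /* old test:  if (map->entries_pageable) ...
     * new test:  if (map->flags & VM_MAP_PAGEABLE) ... */
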
thorpej
00a1f75cf6 In uvm_pagermapin(), pass VM_PROT_READ|VM_PROT_WRITE as access_type, to
ensure we don't take mod/ref emulation faults in an interrupt context
(e.g. during the i/o operation).  This is safe because:
	- For a pageout operation, the page is already known to be
	  modified, and the pagedaemon will pmap_clear_modify() after
	  the pageout has completed.
	- For a pagein operation, pagers must already pmap_clear_modify()
	  after the pagein operation is complete, because the i/o may have
	  been done with e.g. programmed i/o.
XXX It would be nice to know the i/o direction so that we can call
XXX pmap_enter() with only the protection and access_type necessary.
1999-05-26 06:42:57 +00:00
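
A hedged sketch of the pmap_enter() call this implies, using the six-argument signature of this era; cva and pp are assumed locals for the pager_map VA and the page being mapped:

    pmap_enter(vm_map_pmap(pager_map), cva, VM_PAGE_TO_PHYS(pp),
        VM_PROT_READ | VM_PROT_WRITE,   /* protection */
        TRUE,                           /* wired */
        VM_PROT_READ | VM_PROT_WRITE);  /* access_type: preset mod+ref */
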
thorpej
b2e9c635ec Pass an access_type to uvm_vslock(). 1999-05-26 01:05:24 +00:00
thorpej
7b4db806b6 In uvm_map_pageable(), pass VM_PROT_NONE as access type to uvm_fault_wire()
for now.  XXX This needs to be reexamined.
1999-05-26 00:36:53 +00:00
thorpej
9d0ea0969e - uvm_fork()/uvm_swapin(): pass VM_PROT_READ|VM_PROT_WRITE access_type
to uvm_fault_wire(), to guarantee that the kernel stacks will not
  cause even a mod/ref emulation fault.
- uvm_vslock(): pass VM_PROT_NONE until this function is updated.
1999-05-26 00:33:52 +00:00
thorpej
195c1a2741 Pass an access_type to uvm_fault_wire(), which it forwards on to
uvm_fault().
1999-05-26 00:32:42 +00:00
thorpej
0ff8d3ac1a Define a new kernel object type, "intrsafe", which is used for objects
which can be used in an interrupt context.  Use pmap_kenter*() and
pmap_kremove() only for mappings owned by these objects.

Fixes some locking protocol issues related to MP support, and eliminates
all of the pmap_enter vs. pmap_kremove inconsistencies.
1999-05-25 20:30:08 +00:00
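
A minimal sketch of the resulting dispatch rule; the UVM_OBJ_IS_INTRSAFE_OBJECT() spelling is an assumption in the style of the kernel-object macro mentioned two commits below:

    if (UVM_OBJ_IS_INTRSAFE_OBJECT(uobj)) {
        /* intrsafe object: raw kernel mapping, no PV tracking */
        pmap_kenter_pa(va, pa, VM_PROT_READ | VM_PROT_WRITE);
    } else {
        /* ordinary object: full pmap_enter()/pmap_remove() protocol */
        pmap_enter(pmap_kernel(), va, pa, prot, wired, access_type);
    }
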
thorpej
789c9e7c48 Add a comment explaining why using pmap_kenter_pa() is safe here. 1999-05-25 01:34:13 +00:00
thorpej
85f8d1343c Macro'ize the test for "object is a kernel object". 1999-05-25 00:09:00 +00:00
thorpej
9b731fd45c Remove a comment in uvm_pager_dropcluster() about PMAP_NEW and mod/ref
attributes for the page; it no longer applies, since we don't use
pmap_kenter_pgs() anymore.
1999-05-24 23:36:23 +00:00
thorpej
7becac6b9a Don't use pmap_kenter_pgs() for entering pager_map mappings. The pages
are still owned by the object which is paging, and so the test for a kernel
object in uvm_unmap_remove() will cause pmap_remove() to be used instead
of pmap_kremove().

This was a MAJOR source of pmap_remove() vs pmap_kremove() inconsistency
(which caused the busted kernel pmap statistics, and a cause of much
locking hair on MP systems).
1999-05-24 23:30:44 +00:00
thorpej
6eb9ee7cd8 - Change uvm_{lock,unlock}_fpageq() to return/take the previous interrupt
level directly, instead of making the caller wrap the calls in
  splimp()/splx().
- Add a comment documenting that interrupts that cause memory allocation
  must be blocked while the free page queue is locked.

Since interrupts must be blocked while this lock is asserted, tying them
together like this helps to prevent mistakes.
1999-05-24 19:10:57 +00:00
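
A hedged sketch of the resulting pair; uvm.fpageqlock stands for the free-page-queue lock:

    int
    uvm_lock_fpageq(void)
    {
        int s;

        s = splimp();   /* block interrupts that can allocate memory */
        simple_lock(&uvm.fpageqlock);
        return (s);
    }

    void
    uvm_unlock_fpageq(int s)
    {
        simple_unlock(&uvm.fpageqlock);
        splx(s);        /* restore the caller's interrupt level */
    }
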
mrg
f1f95c374b implement madvise() for MADV_{NORMAL,RANDOM,SEQUENTIAL}, others are not yet done. 1999-05-23 06:27:13 +00:00
thorpej
f311a1c308 Make a slight modification of pmap_growkernel() -- it now returns the
end of the mappable kernel virtual address space.  Previously, it would
get called more often than necessary, because the caller only knew what
was requested.

Also, export uvm_maxkaddr so that uvm_pageboot_alloc() can grow the
kernel pmap if necessary, as well.  Note that pmap_growkernel() must
now be able to handle being called before pmap_init().
1999-05-20 23:03:23 +00:00
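
A minimal sketch of the caller-side contract this creates:

    /* Grow the kernel pmap only when a request passes the cached end of
     * mappable KVA, and remember how far pmap_growkernel() actually got. */
    if (addr + size > uvm_maxkaddr)
        uvm_maxkaddr = pmap_growkernel(addr + size);
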
thorpej
1d197b8e7b If we run out of virtual space in uvm_pageboot_alloc(), fail gracefully
rather than unpredictably.
1999-05-20 20:07:55 +00:00
chs
a5d3e8dae9 when wiring swap-backed pages, clear the PG_CLEAN flag before
releasing any swap resources.  if we don't do this, we can
end up with a clean, swap-backed page, which is illegal.
tracked down by Bill Sommerfeld, fixes PR 7578.
1999-05-19 06:14:15 +00:00
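
A minimal sketch of the ordering the fix enforces:

    /* A swap-backed page must never be clean once its swap slot is
     * gone, so dirty the page before releasing any swap resources. */
    pg->flags &= ~PG_CLEAN;
    /* ... now it is safe to free the swap slot backing this page ... */
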
thorpej
c10a926030 Allow the caller to specify a stack for the child process. If NULL,
the child inherits the stack pointer from the parent (traditional
behavior).  Like the signal stack, the stack area is specified as
a low address and a size; machine-dependent code accounts for stack
direction.

This is required for clone(2).
1999-05-13 21:58:32 +00:00
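
A hedged sketch of the resulting interface (argument names assumed); passing NULL keeps the traditional behavior:

    /* stack == NULL: child inherits the parent's stack pointer.
     * Otherwise 'stack' is the LOW address of a 'stacksize'-byte region,
     * and machine-dependent code derives the initial SP from the
     * stack-growth direction, as with the signal stack. */
    void uvm_fork(struct proc *p1, struct proc *p2, boolean_t shared,
        void *stack, size_t stacksize);
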
thorpej
f5108f64e7 Add an optional pmap hook, pmap_fork(), to be called at the end of
uvmspace_fork().

pmap_fork() is used to "fork a pmap", that is copy data from one pmap
to the other that is NOT related to actual mappings in the pmap, but is
otherwise logically coupled to the address space.
1999-05-12 19:11:23 +00:00
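
A minimal sketch of how such an optional machine-dependent hook is typically guarded; the PMAP_FORK cpp symbol is an assumption consistent with "optional pmap hook":

    /* at the end of uvmspace_fork(), after the maps have been copied */
    #ifdef PMAP_FORK
        /* copy pmap state tied to the address space as a whole,
         * not to any individual mapping */
        pmap_fork(vm1->vm_map.pmap, vm2->vm_map.pmap);
    #endif
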
mrg
fc7c17462c fix some formatting foo. 1999-05-03 09:08:28 +00:00
mrg
e378d35ade remove now-wrong comments. formatting nits. 1999-05-03 08:57:42 +00:00
mrg
c2f7cb3c4e remove now-wrong comment. formatting nit. 1999-05-03 08:53:24 +00:00
thorpej
2835fc6e46 Pull signal actions out of struct user, make them a separate proc
substructure, and allow them to be shared.

Required for clone(2).
1999-04-30 21:23:49 +00:00
chs
69ead14e9b in uvm_map_extract(), handle the case where the map entry being extracted
is large enough to cause the end address of the new entry to overflow.
1999-04-19 14:43:46 +00:00
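
The bug class in minimal form (the recovery shown is illustrative, not necessarily the committed fix):

    vaddr_t end = start + size;   /* unsigned arithmetic can wrap */

    if (end < start)              /* new entry runs past the top of VA */
        return (EINVAL);          /* illustrative handling */
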
chs
f455dd6596 add a `flags' argument to uvm_pagealloc_strat().
define a flag UVM_PGA_USERESERVE to allow non-kernel object
allocations to use pages from the reserve.
use the new flag for allocations in pmap modules.
1999-04-11 04:04:04 +00:00
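
A hedged sketch of a pmap-module allocation using the new flag (the trailing strategy/free-list arguments are assumptions):

    struct vm_page *pg;

    pg = uvm_pagealloc_strat(NULL, 0, NULL,
        UVM_PGA_USERESERVE,         /* may dip into the page reserve */
        UVM_PGA_STRAT_NORMAL, 0);   /* assumed default strategy args */
    if (pg == NULL)
        panic("pmap: no pages, even from the reserve");
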
drochner
3d6e675ba8 sanity: use ';' to separate statements 1999-04-08 10:26:21 +00:00
chs
039c17eca9 remove some old #if 0'd-out debugging code. 1999-03-30 16:07:47 +00:00
mycroft
99b341de15 Adjust a comparison so that the pagedaemon doesn't get stuck ping-ponging with
a process trying to allocate memory.
1999-03-30 10:12:01 +00:00
mycroft
671c65c6da Duuuh. Back and front pages should have an access_type of 0, since we don't
know they're going to be used.  What was I thinking??
1999-03-29 05:43:31 +00:00
mycroft
0ce76ca08b Reduce the access_type for copy-on-write pages in the front and back regions. 1999-03-28 21:48:50 +00:00
mycroft
8ed77cabd0 Fix a case I missed in the previous. 1999-03-28 21:01:25 +00:00
mycroft
4831b815f5 Only turn off VM_PROT_WRITE for COW pages; not VM_PROT_EXECUTE. 1999-03-28 19:53:49 +00:00
mycroft
31a2536cd0 Add a new `access type' argument to pmap_enter(). This indicates what type of
memory access a mapping was caused by.  This is passed through from uvm_fault()
and udv_fault(), and in most other cases is 0.
The pmap module may use this to preset R/M information.  On MMUs which require
R/M emulation, the implementation may preset the bits and avoid taking another
fault.  On MMUs which keep R/M information in hardware, the implementation may
preset its cached bits to speed up the next call to pmap_is_modified() or
pmap_is_referenced().
1999-03-26 21:58:39 +00:00
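
A hedged sketch of the extended signature and the presetting idea; the PMAP_ATTR_* names stand in for whatever R/M store a port keeps:

    void
    pmap_enter(pmap_t pmap, vaddr_t va, paddr_t pa, vm_prot_t prot,
        boolean_t wired, vm_prot_t access_type)
    {
        int attrs = 0;

        if (access_type & VM_PROT_WRITE)
            attrs |= PMAP_ATTR_MOD | PMAP_ATTR_REF; /* will be dirtied */
        else if (access_type != VM_PROT_NONE)
            attrs |= PMAP_ATTR_REF;                 /* will be referenced */
        /* ... install the PTE and merge attrs, so neither an R/M
         * emulation fault nor a hardware-bit scan is needed later ... */
    }
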
chs
d97d75d81b add uvmexp.swpgonly and use it to detect out-of-swap conditions. 1999-03-26 17:34:15 +00:00
chs
92045bbba9 add uvmexp.swpgonly and use it to detect out-of-swap conditions.
numerous pagedaemon improvements were needed to make this useful:
 - don't bother waking up procs waiting for memory if there's none to be had.
 - start 4 times as many pageouts as we need free pages.
   this should reduce latency in low-memory situations.
 - in inactive scanning, if we find dirty swap-backed pages when swap space
   is full of non-resident pages, reactivate some number of these to flush
   less active pages to the inactive queue so we can consider paging them out.
   this replaces the previous scheme of inactivating pages beyond the
   inactive target when we failed to free anything during inactive scanning.
 - during both active and inactive scanning, free any swap resources from
   dirty swap-backed pages if swap space is full.  this allows other pages
   to be paged out into that swap space.
1999-03-26 17:33:30 +00:00
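
A minimal sketch of the detection this counter enables (the exact comparison in the pagedaemon is an assumption here):

    /* uvmexp.swpgonly counts swap pages whose only copy is on swap.
     * When all in-use swap is "swap only", paging out more dirty
     * swap-backed pages cannot free anything: swap is effectively full. */
    if (uvmexp.swpgonly == uvmexp.swpginuse) {
        /* out of swap: free swap slots of dirty pages, reactivate, ... */
    }
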
mrg
a0139bc39d remove now >1 year old pre-release message. 1999-03-25 18:48:49 +00:00
sommerfe
f1a508e354 Prevent deadlock cited in PR4629 from crashing the system. (copyout
and system call now just return EFAULT).  A complete fix will
presumably have to wait for UBC and/or for vnode locking protocols to
be revamped to allow use of shared locks.
1999-03-25 00:20:35 +00:00
cgd
9639d2bb98 modify udv_attach() and its caller (uvm_mmap()) so that it's passed the
offset and size of the requested region to be mapped, so that
udv_attach() can use the device d_mmap() entry to check mappability
of the requested region.
1999-03-24 03:52:41 +00:00
cgd
37c88c58da after discussion with chuck, nuke pgo_attach from uvm_pagerops 1999-03-24 03:45:27 +00:00
chs
a65cf876d6 VHOLD() must be called at splbio() since HOLDRELE() is called
from the iodone handler.
1999-03-18 01:45:29 +00:00
chs
e2d0bfbb09 remove a debugging printf. 1999-03-15 07:55:19 +00:00
kleink
b0fe22c29d Have unimplemented/unsupported system calls (madvise(), mincore(), sbrk(),
sstk()) fail with ENOSYS.
1999-03-09 12:18:22 +00:00
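
A minimal sketch of such a stub, in the three-argument syscall convention of this era:

    int
    sys_sbrk(struct proc *p, void *v, register_t *retval)
    {
        return (ENOSYS);    /* unimplemented: fail honestly */
    }
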
chs
0c38ab98fa fix printf arg types. 1999-03-04 06:48:54 +00:00
chs
381e042ff6 fix printf format types. 1999-03-04 06:48:15 +00:00
mrg
3743c9e91d handle SWAP_DUMPDEV 1999-02-23 15:58:28 +00:00
mrg
08fd4f3f47 80 cols. 1999-01-31 09:27:18 +00:00
bouyer
b87a535a9f A small typo fix, + enclose "used_vnode_size = %qu" debug printf inside
#ifdef DEBUG/#endif
1999-01-29 12:56:17 +00:00
chuck
486cfd0e5a comment cleanup, shift around the inline stuff a bit,
rename VM_AMAP_PPREF (to UVM_AMAP_PPREF).
1999-01-28 14:46:27 +00:00
chuck
44f5fc2839 cleanup/reorg:
- break anon related functions out of uvm_amap.c and put them in their own
  file (uvm_anon.c).  includes breaking up uvm_anon_init into an amap
  init function and an anon init function
- ensure that only functions within the amap module access amap structure
  fields (add macros to amap api as needed)
1999-01-24 23:53:14 +00:00
chs
0c2374e586 fix a precedence problem in uvm_mk_pcluster() which prevented
clustering of vnode pageouts.  this probably makes no difference
since most apps don't write via the pagecache anyway... yet.
1999-01-22 08:00:35 +00:00
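
The pitfall in generic form (PG_CLEANCHK is only an example flag): in C, == binds tighter than &, so the unparenthesized test compares the flag to zero first:

    if (pg->flags & PG_CLEANCHK == 0)   /* WRONG: parsed as
                                           pg->flags & (PG_CLEANCHK == 0),
                                           i.e. pg->flags & 0, never true */
        ;
    if ((pg->flags & PG_CLEANCHK) == 0) /* RIGHT: tests the flag bit */
        ;
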
marc
e21b4568e2 When a reference is made to a hole in a swap file, panic. The optimal
thing would be to allocate the block, but I don't know how to do this.
The panic is preferable to the random memory corruption the old code
was causing.
1998-12-26 06:25:59 +00:00
chuck
cc2f45083b update outdated an_swslot comments 1998-11-20 19:37:06 +00:00
mrg
d39ed4e552 check the return value of d_mmap before pmap_phys_address() gets hold of it. 1998-11-19 05:23:26 +00:00
chuck
281eb8b87a remove bogus permission check in uvm_map_clean(). fixes mmap/msync
problem discussed/reported by jonathan and Andreas Wrede <andreas@planix.com>.
1998-11-15 04:38:19 +00:00
mycroft
7c037b7009 Clear B_NOCACHE when we're done with the buffer -- although this is probably
pointless.
1998-11-08 19:41:49 +00:00
mycroft
6422baa1c0 Set the B_NOCACHE bit so that NFSv3 will not try to do async writes. 1998-11-08 19:37:12 +00:00
mrg
7c0d69c3ab minor KNF nits 1998-11-07 05:50:19 +00:00
chs
28411139b3 be consistent with locking of amaps and anons when freeing them. 1998-11-04 07:07:22 +00:00
chs
e4c4ea06b4 remove outdated comment. 1998-11-04 07:06:05 +00:00
chs
23ed4b5656 we must unlock a vp's object's lock before calling vrele(). 1998-11-04 06:21:40 +00:00
mrg
bba8470ccb KNF a missing bit. remove register. 1998-10-24 13:32:34 +00:00
tron
c71ccab136 Defopt SYSVMSG, SYSVSEM and SYSVSHM. 1998-10-19 22:21:19 +00:00
chs
549cd579e5 shift by PAGE_SHIFT instead of multiplying or dividing by PAGE_SIZE. 1998-10-18 23:49:59 +00:00
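
The conversion in minimal form; shifting avoids a runtime multiply/divide on ports where PAGE_SIZE is not a compile-time constant:

    npages = size >> PAGE_SHIFT;    /* was: size / PAGE_SIZE   */
    bytes  = npages << PAGE_SHIFT;  /* was: npages * PAGE_SIZE */
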
tv
000978aaca Check for gcc the Right way when quashing -Wuninitialized goop. 1998-10-16 19:34:57 +00:00
chuck
025ae6bd64 remove unused share map code from UVM:
- update calls to uvm_unmap_remove/uvm_unmap (mainonly boolean arg
	has been removed)
 - replace UVM_ET_ISMAP checks with UVM_ET_ISSUBMAP checks
1998-10-11 23:18:20 +00:00
chuck
03939069dc remove unused share map code from UVM:
- update uvm_faultinfo's rvaddr to orig_rvaddr to match changes from
	uvm_fault.h
1998-10-11 23:16:20 +00:00
chuck
2d4c15ebc9 remove unused share map code from UVM:
- replace map checks with submap checks
 - get rid of unused 'mainonly' arg in uvm_unmap/uvm_unmap_remove, simplify
	code.   update all calls to reflect this.
 - don't worry about unmapping or changing the protection of shared share
	map mappings (is_main_map no longer used).
 - remove unused uvm_map_sharemapcopy() function from fork code.
1998-10-11 23:14:47 +00:00
chuck
1b59a238c4 remove unused share map code from UVM:
- simplify uvm_faultinfo in uvm_fault.h (parent map tracking no longer needed)
 - adjust locking and lookup functions in uvm_fault_i.h to reflect the above
 - replace ufi.rvaddr with ufi.orig_rvaddr in uvm_fault.c since rvaddr is
	no longer needed.
 - no need to worry about share map translations in uvm_fault().  simplify.
1998-10-11 23:07:42 +00:00
chuck
8ffef382dd remove unused share map code from UVM:
- udv_fault() no longer has to worry about share map address translations
	on device faults.  simplify code.
1998-10-11 23:02:31 +00:00
chuck
a4d3b16d22 remove unused share map code from UVM:
dump UVM_ET_MAP/UVM_ET_ISMAP.   if you need to detect a submap use
  UVM_ET_SUBMAP/UVM_ET_ISSUBMAP.
1998-10-11 22:59:53 +00:00
chuck
495b6aafdc fix ppref botch. establish ppref at split time before we add the duplicate
reference.
1998-10-08 19:47:50 +00:00
mrg
fdc5499c5f back out previous. 1998-09-30 15:44:10 +00:00
tv
8219f068e2 Declare silent success on madvise(). As an advisory call, it is harmless
to pretend success even though it's not supported, and some emulations
rely on its success.
1998-09-30 12:07:51 +00:00
thorpej
feb1d22dcc NCPU > 1 -> MULTIPROCESSOR 1998-09-24 23:00:43 +00:00
thorpej
1e2aeb4a35 Add a comment documenting the last change. 1998-09-18 19:28:22 +00:00
thorpej
5dd4b45577 Don't use the nointr pool page allocator for the uao_swhash_elt pool. We
need to ensure that these come from a non-pageable kernel map, otherwise
we can run into a deadlock condition (as noticed by Chuck Silvers).
1998-09-18 19:27:20 +00:00
thorpej
28904fca48 Implement uvm_exit(), which frees VM resources when a process finishes
exiting.
1998-09-08 23:44:21 +00:00
pk
d2d3f83fd7 Panic instead of failing the syscall on an impossible condition (from Robert Elz).
Plug possible memory leakage with the recently added device path stuff.
1998-09-06 23:09:39 +00:00
thorpej
38e7a08bed Allocate vm_anon arrays from kernel_map, not via MALLOC(). Helps relieve
much of UVM's kmem_map usage.
1998-08-31 02:43:14 +00:00
thorpej
d865961d77 Back out previous; I should have instrumented the benefit of this one
first.
1998-08-31 01:54:14 +00:00
thorpej
7338d4e403 Use the pool allocator and the "nointr" pool page allocator for vm_map's. 1998-08-31 01:50:08 +00:00
thorpej
be8d09cda3 Use the pool allocator and the "nointr" pool page allocator for dynamically
allocated vm_map_entry's.
1998-08-31 01:10:15 +00:00
thorpej
99626224a7 Use the pool allocator and the "nointr" pool page allocator for vmspace
structures.
1998-08-31 00:20:26 +00:00
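
A hedged sketch of the conversion pattern shared by the three pool commits above, using the long-form pool_init() of this era (pool and wchan names are assumptions):

    struct pool uvm_map_entry_pool;

    /* one-time setup: back the pool with "nointr" (non-pageable,
     * never-from-interrupt) kernel pages */
    pool_init(&uvm_map_entry_pool, sizeof(struct vm_map_entry), 0, 0, 0,
        "vmmpepl", 0, pool_page_alloc_nointr, pool_page_free_nointr,
        M_VMMAP);

    /* replaces MALLOC()/FREE() pairs at the allocation sites */
    me = pool_get(&uvm_map_entry_pool, PR_WAITOK);
    /* ... use the entry ... */
    pool_put(&uvm_map_entry_pool, me);
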
thorpej
694e9583aa Make sure the aobj_pager gets initialized! 1998-08-31 00:03:02 +00:00
thorpej
5a4981d9b8 Use the pool allocator w/ the "nointr" pool page allocator for uvm_aobj
and uao_swhash_elt structures.  Also, fix a bug in uao_set_swslot() where
if setting the swslot to 0 (freeing swap resources), and no swslot was
currently allocated, a new entry would be allocated anyhow (revealed during
pool'ification).
1998-08-31 00:01:59 +00:00
enami
71ba20edbb Define `len' as size_t rather than int so that the correct type is passed
as the fourth argument of copystr.
1998-08-30 03:08:43 +00:00
mrg
edda33e00c move <vm/vm_swap.h> to <sys/swap.h>. <vm/vm_swap.h> still works for now (goes away later) 1998-08-29 17:01:14 +00:00
mrg
b5f69ff667 add a `char se_path[PATH_MAX]' member to struct swapent, into which
the pathname of the swap device is saved.  add a char *swd_path
member to struct swapdev, that contains a copy of the pathname
(using malloc(9)).  rename swapctl(2)'s SWAP_STATS to SWAP_OSTATS,
and add a new SWAP_STATS command (number).  make swapctl(SWAP_STATS,
...) [new version] copy the path out.  if COMPAT_13, also include
support for SWAP_OSTATS.  also fix a minor bug in swapctl(2).

the point of this is that swapfiles are now shown in `swapctl -l'.
1998-08-29 13:27:50 +00:00
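
A hedged sketch of the resulting userland-visible structure; the field list besides se_path is assumed from contemporary swapctl(2) usage:

    struct swapent {
        dev_t   se_dev;             /* device id */
        int     se_flags;           /* SWF_* status flags */
        int     se_nblks;           /* total blocks */
        int     se_inuse;           /* blocks in use */
        int     se_priority;        /* swap priority */
        char    se_path[PATH_MAX];  /* pathname of device/file */
    };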