Commit Graph

351 Commits

Author SHA1 Message Date
thorpej
2c3dc83a64 Keep interrupt-safe maps on an additional queue. In uvm_fault(), if we're
looking up a kernel address, check to see if the address is on this
"interrupt-safe" list.  If so, return failure immediately.  This prevents
a locking screw-up if a page fault is taken on an interrupt-safe map in or
out of interrupt context.
1999-06-04 23:38:41 +00:00
thorpej
8c59c67288 Just say no to interrupt-safe maps. 1999-06-03 00:05:45 +00:00
thorpej
acf81da21e A page fault on a non-pageable map is always fatal. 1999-06-02 23:26:21 +00:00
thorpej
779ecdd773 Simplify the last change even more; we downgraded to a shared (read) lock, so
setting recursive has no effect!  The kernel lock manager doesn't allow
an exclusive recursion into a shared lock.  This situation must simply
be avoided.  The only place where this might be a problem is the (ab)use
of uvm_map_pageable() in the Utah-derived pmaps for m68k (they should
either toss the iffy scheme they use completely, or use something like
uvm_fault_wire()).

In addition, once we have looped over uvm_fault_wire(), only upgrade to
an exclusive (write) lock if we need to modify the map again (i.e.
wiring a page failed).
1999-06-02 22:40:51 +00:00
thorpej
0723d57281 Clean up the locking mess in uvm_map_pageable() a little... Most importantly,
don't unlock a kernel map (!!!) and then relock it later; a recursive lock,
as is used in the user map case, is fine.  Also, don't change map entries
while only holding a read lock on the map.  Instead, if we fail to wire
a page, clear recursive locking, and upgrade back to a write lock before
dropping the wiring count on the remaining map entries.
1999-06-02 21:23:08 +00:00
mrg
2332079d3f unlock the map for unknown arguments to uvm_map_advise. from Soren S. Jorvang in PR kern/7681 1999-05-31 23:36:23 +00:00
thorpej
fb36fe649a A little spring cleaning in the unwire case of uvm_map_pageable(). 1999-05-28 22:54:12 +00:00
thorpej
8d8badbd8f Make uvm_fault_unwire() take a vm_map_t, rather than a pmap_t, for
consistency.  Use this opportunity for checking for intrsafe map use
in this routine (which is illegal).
1999-05-28 20:49:51 +00:00
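A hedged sketch of the interface change in the entry above; the prototypes are reconstructed from the description, not quoted from the source:

	/* Before: callers passed the pmap directly. */
	/* void uvm_fault_unwire(pmap_t pmap, vaddr_t start, vaddr_t end); */

	/* After: takes the map, matching uvm_fault_wire(), which also lets
	 * the routine reject interrupt-safe (intrsafe) maps outright. */
	void uvm_fault_unwire(vm_map_t map, vaddr_t start, vaddr_t end);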
thorpej
108b13d5a9 Make "intrsafe" maps locked only by exclusive spin locks, never sleep
locks (and thus, never shared locks).  Move the "set/clear recursive"
functions to uvm_map.c, which is the only place they're used (and
they should go away anyhow).  Delete some unused cruft.
1999-05-28 20:31:42 +00:00
thorpej
5920638afe Change the main comment block to indicate why PMAP_NEW (specifically,
pmap_kenter*()) is not required for {O,A}->K page loans.
1999-05-27 21:50:03 +00:00
thorpej
80de1e9903 Upon further investigation, in uvm_map_pageable(), entry->protection is the
right access_type to pass to uvm_fault_wire().  This way, if the entry has
VM_PROT_WRITE, and the entry is marked COW, the copy will happen immediately
in uvm_fault(), as if the access were performed.
1999-05-26 23:53:48 +00:00
thorpej
6b655611b1 Wired kernel mappings are wired; pass VM_PROT_READ|VM_PROT_WRITE for
access_type to pmap_enter() to ensure that when these mappings are accessed,
possibly in interrupt context, that they won't cause mod/ref emulation
page faults.
1999-05-26 19:27:49 +00:00
thorpej
2580d306ab Change the vm_map's "entries_pageable" member to a r/o flags member, which
has PAGEABLE and INTRSAFE flags.  PAGEABLE now really means "pageable",
not "allocate vm_map_entry's from non-static pool", so update all map
creations to reflect that.  INTRSAFE maps are maps that are used in
interrupt context (e.g. kmem_map, mb_map), and thus use the static
map entry pool (XXX as does kernel_map, for now).  This will eventually
change how these maps are locked, as well.
1999-05-26 19:16:28 +00:00
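A minimal sketch of the flags-based test this change implies; the macro spellings and values are assumptions beyond the PAGEABLE/INTRSAFE names given above:

	#define VM_MAP_PAGEABLE   0x01	/* entries pageable, dynamic entry pool */
	#define VM_MAP_INTRSAFE   0x02	/* map may be used in interrupt context */

	if ((map->flags & VM_MAP_PAGEABLE) == 0) {
		/* non-pageable: a fault here is always fatal
		 * (see the 1999-06-02 entry above) */
	}
	if (map->flags & VM_MAP_INTRSAFE) {
		/* must use static map entries and spin locks only */
	}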
thorpej
00a1f75cf6 In uvm_pagermapin(), pass VM_PROT_READ|VM_PROT_WRITE as access_type, to
ensure we don't take mod/ref emulation faults in an interrupt context
(e.g. during the i/o operation).  This is safe because:
	- For a pageout operation, the page is already known to be
	  modified, and the pagedaemon will pmap_clear_modify() after
	  the pageout has completed.
	- For a pagein operation, pagers must already pmap_clear_modify()
	  after the pagein operation is complete, because the i/o may have
	  been done with e.g. programmed i/o.
XXX It would be nice to know the i/o direction so that we can call
XXX pmap_enter() with only the protection and access_type necessary.
1999-05-26 06:42:57 +00:00
thorpej
b2e9c635ec Pass an access_type to uvm_vslock(). 1999-05-26 01:05:24 +00:00
thorpej
7b4db806b6 In uvm_map_pageable(), pass VM_PROT_NONE as access type to uvm_fault_wire()
for now.  XXX This needs to be reexamined.
1999-05-26 00:36:53 +00:00
thorpej
9d0ea0969e - uvm_fork()/uvm_swapin(): pass VM_PROT_READ|VM_PROT_WRITE access_type
to uvm_fault_wire(), to guarantee that the kernel stacks will not
  cause even a mod/ref emulation fault.
- uvm_vslock(): pass VM_PROT_NONE until this function is updated.
1999-05-26 00:33:52 +00:00
thorpej
195c1a2741 Pass an access_type to uvm_fault_wire(), which it forwards on to
uvm_fault().
1999-05-26 00:32:42 +00:00
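A sketch of how the extended interface might be used, assuming the prototype below (reconstructed, not quoted from the log):

	int uvm_fault_wire(vm_map_t map, vaddr_t start, vaddr_t end,
	    vm_prot_t access_type);

	/* e.g. wiring a kernel stack (see the uvm_fork()/uvm_swapin() entry
	 * above): ask for read+write up front so no mod/ref emulation
	 * fault can occur later. */
	error = uvm_fault_wire(kernel_map, uaddr, uaddr + USPACE,
	    VM_PROT_READ | VM_PROT_WRITE);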
thorpej
0ff8d3ac1a Define a new kernel object type, "intrsafe", which are used for objects
which can be used in an interrupt context.  Use pmap_kenter*() and
pmap_kremove() only for mappings owned by these objects.

Fixes some locking protocol issues related to MP support, and eliminates
all of the pmap_enter vs. pmap_kremove inconsistencies.
1999-05-25 20:30:08 +00:00
thorpej
789c9e7c48 Add a comment explaining why using pmap_kenter_pa() is safe here. 1999-05-25 01:34:13 +00:00
thorpej
85f8d1343c Macro'ize the test for "object is a kernel object". 1999-05-25 00:09:00 +00:00
thorpej
9b731fd45c Remove a comment in uvm_pager_dropcluster() about PMAP_NEW and mod/ref
attributes for the page; it no longer applies, since we don't use
pmap_kenter_pgs() anymore.
1999-05-24 23:36:23 +00:00
thorpej
7becac6b9a Don't use pmap_kenter_pgs() for entering pager_map mappings. The pages
are still owned by the object which is paging, and so the test for a kernel
object in uvm_unmap_remove() will cause pmap_remove() to be used instead
of pmap_kremove().

This was a MAJOR source of pmap_remove() vs pmap_kremove() inconsistency
(which caused the busted kernel pmap statistics, and a cause of much
locking hair on MP systems).
1999-05-24 23:30:44 +00:00
thorpej
6eb9ee7cd8 - Change uvm_{lock,unlock}_fpageq() to return/take the previous interrupt
level directly, instead of making the caller wrap the calls in
  splimp()/splx().
- Add a comment documenting that interrupts that cause memory allocation
  must be blocked while the free page queue is locked.

Since interrupts must be blocked while this lock is asserted, tying them
together like this helps to prevent mistakes.
1999-05-24 19:10:57 +00:00
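The calling convention described above, sketched for illustration (variable names are illustrative):

	int s;

	/* Old style: the caller wrapped the lock in splimp()/splx() itself.
	 * New style: the lock functions return/take the previous level, so
	 * the spl and the lock can never be separated by mistake. */
	s = uvm_lock_fpageq();
	/* ... work on the free page queues; interrupts that allocate
	 *     memory are blocked while this lock is held ... */
	uvm_unlock_fpageq(s);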
mrg
f1f95c374b implement madvice() for MADV_{NORMAL,RANDOM,SEQUENTIAL}, others are not yet done. 1999-05-23 06:27:13 +00:00
thorpej
f311a1c308 Make a slight modification of pmap_growkernel() -- it now returns the
end of the mappable kernel virtual address space.  Previously, it would
get called more often than necessary, because the caller only knew what
was requested.

Also, export uvm_maxkaddr so that uvm_pageboot_alloc() can grow the
kernel pmap if necessary, as well.  Note that pmap_growkernel() must
now be able to handle being called before pmap_init().
1999-05-20 23:03:23 +00:00
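A hedged sketch of the revised contract; the exact prototype is an assumption based on the description:

	/* pmap_growkernel() now reports how far kernel VA actually extends,
	 * so callers only re-invoke it when a request passes that mark. */
	vaddr_t pmap_growkernel(vaddr_t maxkvaddr);

	if (uvm_maxkaddr < (va + size))
		uvm_maxkaddr = pmap_growkernel(va + size);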
thorpej
1d197b8e7b If we run out of virtual space in uvm_pageboot_alloc(), fail gracefully
rather than unpredictably.
1999-05-20 20:07:55 +00:00
chs
a5d3e8dae9 when wiring swap-backed pages, clear the PG_CLEAN flag before
releasing any swap resources.  if we don't do this, we can
end up with a clean, swap-backed page, which is illegal.
tracked down by Bill Sommerfeld, fixes PR 7578.
1999-05-19 06:14:15 +00:00
thorpej
c10a926030 Allow the caller to specify a stack for the child process. If NULL,
the child inherits the stack pointer from the parent (traditional
behavior).  Like the signal stack, the stack area is specified as
a low address and a size; machine-dependent code accounts for stack
direction.

This is required for clone(2).
1999-05-13 21:58:32 +00:00
thorpej
f5108f64e7 Add an optional pmap hook, pmap_fork(), to be called at the end of
uvmspace_fork().

pmap_fork() is used to "fork a pmap", that is copy data from one pmap
to the other that is NOT related to actual mappings in the pmap, but is
otherwise logically coupled to the address space.
1999-05-12 19:11:23 +00:00
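A sketch of how such an optional hook is typically guarded; the guard macro name is an assumption:

	/* at the end of uvmspace_fork(), only if the pmap opts in */
	#ifdef PMAP_FORK
		pmap_fork(vm1->vm_map.pmap, vm2->vm_map.pmap);
	#endif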
mrg
fc7c17462c fix some formatting foo. 1999-05-03 09:08:28 +00:00
mrg
e378d35ade remove now-wrong comments. formatting nits. 1999-05-03 08:57:42 +00:00
mrg
c2f7cb3c4e remove now-wrong comment. formatting nit. 1999-05-03 08:53:24 +00:00
thorpej
2835fc6e46 Pull signal actions out of struct user, make them a separate proc
substructure, and allow them to be shared.

Required for clone(2).
1999-04-30 21:23:49 +00:00
chs
69ead14e9b in uvm_map_extract(), handle the case where the map entry being extracted
is large enough to cause the end address of the new entry to overflow.
1999-04-19 14:43:46 +00:00
chs
f455dd6596 add a `flags' argument to uvm_pagealloc_strat().
define a flag UVM_PGA_USERESERVE to allow non-kernel object
allocations to use pages from the reserve.
use the new flag for allocations in pmap modules.
1999-04-11 04:04:04 +00:00
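A sketch of the new flag in use; the argument order is assumed from this entry and the free-list entry further down (1998-07-08):

	/* A pmap module allocating a page table page: allow it to dip into
	 * the reserved pages so the allocation cannot fail merely because
	 * only the reserve is left. */
	pg = uvm_pagealloc_strat(NULL, 0, NULL, UVM_PGA_USERESERVE,
	    UVM_PGA_STRAT_NORMAL, VM_FREELIST_DEFAULT);
	if (pg == NULL)
		panic("no memory for page table page");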
drochner
3d6e675ba8 sanity: use ';' to separate statements 1999-04-08 10:26:21 +00:00
chs
039c17eca9 remove some old #if 0'd-out debugging code. 1999-03-30 16:07:47 +00:00
mycroft
99b341de15 Adjust a comparison so that the pagedaemon doesn't get stuck ping-ponging with
a process trying to allocate memory.
1999-03-30 10:12:01 +00:00
mycroft
671c65c6da Duuuh. Back and front pages should have an access_type of 0, since we don't
know they're going to be used.  What was I thinking??
1999-03-29 05:43:31 +00:00
mycroft
0ce76ca08b Reduce the access_type for copy-on-write pages in the front and back regions. 1999-03-28 21:48:50 +00:00
mycroft
8ed77cabd0 Fix a case I missed in the previous. 1999-03-28 21:01:25 +00:00
mycroft
4831b815f5 Only turn off VM_PROT_WRITE for COW pages; not VM_PROT_EXECUTE. 1999-03-28 19:53:49 +00:00
mycroft
31a2536cd0 Add a new `access type' argument to pmap_enter(). This indicates what type of
memory access a mapping was caused by.  This is passed through from uvm_fault()
and udv_fault(), and in most other cases is 0.
The pmap module may use this to preset R/M information.  On MMUs which require
R/M emulation, the implementation may preset the bits and avoid taking another
fault.  On MMUs which keep R/M information in hardware, the implementation may
preset its cached bits to speed up the next call to pmap_is_modified() or
pmap_is_referenced().
1999-03-26 21:58:39 +00:00
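A hedged sketch of the widened call; the exact parameter list at this point in time is an assumption:

	/* access_type says what kind of access caused the mapping.
	 * R/M-emulating MMUs can preset the bits and skip a second fault;
	 * hardware-R/M MMUs can preset their cached bits. */
	pmap_enter(map->pmap, va, VM_PAGE_TO_PHYS(pg),
	    VM_PROT_READ | VM_PROT_WRITE, TRUE /* wired */,
	    VM_PROT_WRITE /* access_type */);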
chs
d97d75d81b add uvmexp.swpgonly and use it to detect out-of-swap conditions. 1999-03-26 17:34:15 +00:00
chs
92045bbba9 add uvmexp.swpgonly and use it to detect out-of-swap conditions.
numerous pagedaemon improvements were needed to make this useful:
 - don't bother waking up procs waiting for memory if there's none to be had.
 - start 4 times as many pageouts as we need free pages.
   this should reduce latency in low-memory situations.
 - in inactive scanning, if we find dirty swap-backed pages when swap space
   is full of non-resident pages, reactivate some number of these to flush
   less active pages to the inactive queue so we can consider paging them out.
   this replaces the previous scheme of inactivating pages beyond the
   inactive target when we failed to free anything during inactive scanning.
 - during both active and inactive scanning, free any swap resources from
   dirty swap-backed pages if swap space is full.  this allows other pages to
   be paged out into that swap space.
1999-03-26 17:33:30 +00:00
mrg
a0139bc39d remove now >1 year old pre-release message. 1999-03-25 18:48:49 +00:00
sommerfe
f1a508e354 Prevent deadlock cited in PR4629 from crashing the system. (copyout
and system call now just return EFAULT).  A complete fix will
presumably have to wait for UBC and/or for vnode locking protocols to
be revamped to allow use of shared locks.
1999-03-25 00:20:35 +00:00
cgd
9639d2bb98 modify udv_attach() and its caller (uvm_mmap()) so that it's passed the
offset and size of the requested region to be mapped, so that
udv_attach() can use the device d_mmap() entry to check mappability
of the requested region.
1999-03-24 03:52:41 +00:00
cgd
37c88c58da after discussion with chuck, nuke pgo_attach from uvm_pagerops 1999-03-24 03:45:27 +00:00
chs
a65cf876d6 VHOLD() must be called at splbio() since HOLDRELE() is called
from the iodone handler.
1999-03-18 01:45:29 +00:00
chs
e2d0bfbb09 remove a debugging printf. 1999-03-15 07:55:19 +00:00
kleink
b0fe22c29d Have unimplemented/unsupported system calls (madvise(), mincore(), sbrk(),
sstk()) fail with ENOSYS.
1999-03-09 12:18:22 +00:00
chs
0c38ab98fa fix printf arg types. 1999-03-04 06:48:54 +00:00
chs
381e042ff6 fix printf format types. 1999-03-04 06:48:15 +00:00
mrg
3743c9e91d handle SWAP_DUMPDEV 1999-02-23 15:58:28 +00:00
mrg
08fd4f3f47 80 cols. 1999-01-31 09:27:18 +00:00
bouyer
b87a535a9f A small typo fix, + enclose "used_vnode_size = %qu" debug printf inside
#ifdef DEBUG/#endif
1999-01-29 12:56:17 +00:00
chuck
486cfd0e5a comment cleanup, shift around the inline stuff a bit,
rename VM_AMAP_PPREF (to UVM_AMAP_PPREF).
1999-01-28 14:46:27 +00:00
chuck
44f5fc2839 cleanup/reorg:
- break anon related functions out of uvm_amap.c and put them in their own
  file (uvm_anon.c).  includes breaking up uvm_anon_init into an amap and
  an anon init function
- ensure that only functions within the amap module access amap structure
  fields (add macros to amap api as needed)
1999-01-24 23:53:14 +00:00
chs
0c2374e586 fix a precedence problem in uvm_mk_pcluster() which prevented
clustering of vnode pageouts.  this probably makes no difference
since most apps don't write via the pagecache anyway... yet.
1999-01-22 08:00:35 +00:00
marc
e21b4568e2 When a reference is made to a hole in a swap file, panic. The optimal
thing would be to allocate the block, but I don't know how to do this.
The panic is preferable to the random memory corruption the old code
was causing.
1998-12-26 06:25:59 +00:00
chuck
cc2f45083b update outdated an_swslot comments 1998-11-20 19:37:06 +00:00
mrg
d39ed4e552 check the return value of d_mmap before pmap_phys_address() gets hold of it. 1998-11-19 05:23:26 +00:00
chuck
281eb8b87a remove bogus permission check in uvm_map_clean(). fixes mmap/msync
problem discussed/reported by jonathan and Andreas Wrede <andreas@planix.com>.
1998-11-15 04:38:19 +00:00
mycroft
7c037b7009 Clear B_NOCACHE when we're done with the buffer -- although this is probably
pointless.
1998-11-08 19:41:49 +00:00
mycroft
6422baa1c0 Set the B_NOCACHE bit so that NFSv3 will not try to do async writes. 1998-11-08 19:37:12 +00:00
mrg
7c0d69c3ab minor KNF nits 1998-11-07 05:50:19 +00:00
chs
28411139b3 be consistent with locking of amaps and anons when freeing them. 1998-11-04 07:07:22 +00:00
chs
e4c4ea06b4 remove outdated comment. 1998-11-04 07:06:05 +00:00
chs
23ed4b5656 we must unlock a vp's object's lock before calling vrele(). 1998-11-04 06:21:40 +00:00
mrg
bba8470ccb KNF a missing bit. remove register. 1998-10-24 13:32:34 +00:00
tron
c71ccab136 Defopt SYSVMSG, SYSVSEM and SYSVSHM. 1998-10-19 22:21:19 +00:00
chs
549cd579e5 shift by PAGE_SHIFT instead of multiplying or dividing by PAGE_SIZE. 1998-10-18 23:49:59 +00:00
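The idiom in question, for illustration:

	npages = size >> PAGE_SHIFT;	/* was: size / PAGE_SIZE */
	bytes  = npages << PAGE_SHIFT;	/* was: npages * PAGE_SIZE */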
tv
000978aaca Check for gcc the Right way when quashing -Wuninitialized goop. 1998-10-16 19:34:57 +00:00
chuck
025ae6bd64 remove unused share map code from UVM:
- update calls to uvm_unmap_remove/uvm_unmap (mainonly boolean arg
	has been removed)
 - replace UVM_ET_ISMAP checks with UVM_ET_ISSUBMAP checks
1998-10-11 23:18:20 +00:00
chuck
03939069dc remove unused share map code from UVM:
- update uvm_faultinfo's rvaddr to orig_rvaddr to match changes from
	uvm_fault.h
1998-10-11 23:16:20 +00:00
chuck
2d4c15ebc9 remove unused share map code from UVM:
- replace map checks with submap checks
 - get rid of unused 'mainonly' arg in uvm_unmap/uvm_unmap_remove, simplify
	code.   update all calls to reflect this.
 - don't worry about unmapping or changing the protection of shared share
	map mappings (is_main_map no longer used).
 - remove unused uvm_map_sharemapcopy() function from fork code.
1998-10-11 23:14:47 +00:00
chuck
1b59a238c4 remove unused share map code from UVM:
- simplify uvm_faultinfo in uvm_fault.h (parent map tracking no longer needed)
 - adjust locking and lookup functions in uvm_fault_i.h to reflect the above
 - replace ufi.rvaddr with ufi.orig_rvaddr in uvm_fault.c since rvaddr is
	no longer needed.
 - no need to worry about share map translations in uvm_fault().  simplify.
1998-10-11 23:07:42 +00:00
chuck
8ffef382dd remove unused share map code from UVM:
- udv_fault() no longer has to worry about share map address translations
	on device faults.  simplify code.
1998-10-11 23:02:31 +00:00
chuck
a4d3b16d22 remove unused share map code from UVM:
dump UVM_ET_MAP/UVM_ET_ISMAP.   if you need to detect a submap use
  UVM_ET_SUBMAP/UVM_ET_ISSUBMAP.
1998-10-11 22:59:53 +00:00
chuck
495b6aafdc fix ppref botch. establish ppref at split time before we add the duplicate
reference.
1998-10-08 19:47:50 +00:00
mrg
fdc5499c5f back out previous. 1998-09-30 15:44:10 +00:00
tv
8219f068e2 Declare silent success on madvise(). As an advisory call, it is harmless
to pretend success even though it's not supported, and some emulations
rely on its success.
1998-09-30 12:07:51 +00:00
thorpej
feb1d22dcc NCPU > 1 -> MULTIPROCESSOR 1998-09-24 23:00:43 +00:00
thorpej
1e2aeb4a35 Add a comment documenting the last change. 1998-09-18 19:28:22 +00:00
thorpej
5dd4b45577 Don't use the nointr pool page allocator for the uao_swhash_elt pool. We
need to ensure that these come from a non-pageable kernel map, otherwise
we can run into a deadlock condition (as noticed by Chuck Silvers).
1998-09-18 19:27:20 +00:00
thorpej
28904fca48 Implement uvm_exit(), which frees VM resources when a process finishes
exiting.
1998-09-08 23:44:21 +00:00
pk
d2d3f83fd7 Panic instead of failing the syscall on an impossible condition (from Robert Elz).
Plug possible memory leakage with the recently added device path stuff.
1998-09-06 23:09:39 +00:00
thorpej
38e7a08bed Allocate vm_anon arrays from kernel_map, not via MALLOC(). Helps relieve
much of UVM's kmem_map usage.
1998-08-31 02:43:14 +00:00
thorpej
d865961d77 Back out previous; I should have instrumented the benefit of this one
first.
1998-08-31 01:54:14 +00:00
thorpej
7338d4e403 Use the pool allocator and the "nointr" pool page allocator for vm_map's. 1998-08-31 01:50:08 +00:00
thorpej
be8d09cda3 Use the pool allocator and the "nointr" pool page allocator for dynamically
allocated vm_map_entry's.
1998-08-31 01:10:15 +00:00
thorpej
99626224a7 Use the pool allocator and the "nointr" pool page allocator for vmspace
structures.
1998-08-31 00:20:26 +00:00
thorpej
694e9583aa Make sure the aobj_pager gets initialized! 1998-08-31 00:03:02 +00:00
thorpej
5a4981d9b8 Use the pool allocator w/ the "nointr" pool page allocator for uvm_aobj
and uao_swhash_elt structures.  Also, fix a bug in uao_set_swslot() where
if setting the swslot to 0 (freeing swap resources), and no swslot was
currently allocated, a new entry would be allocated anyhow (revealed during
pool'ification).
1998-08-31 00:01:59 +00:00
enami
71ba20edbb Define `len' as size_t rather than int so that the correct type is passed
as the fourth argument of copystr.
1998-08-30 03:08:43 +00:00
mrg
edda33e00c move <vm/vm_swap.h> to <sys/swap.h>. <vm/vm_swap.h> still works for now (goes away later) 1998-08-29 17:01:14 +00:00
mrg
b5f69ff667 add a `char se_path[PATH_MAX]' member to struct swapent, that
the pathname of the swap device is saved into.  add a char *swd_path
member to struct swapdev, that contains a copy of the pathname
(using malloc(9)).  rename swapctl(2)'s SWAP_STATS to SWAP_OSTATS,
and add a new SWAP_STATS command (number).  make swapctl(SWAP_STATS,
...) [new version] copy the path out.  if COMPAT_13, also include
support for SWAP_OSTATS.  also fix a minor bug in swapctl(2).

the point of this is that swapfiles are now shown in `swapctl -l'.
1998-08-29 13:27:50 +00:00
thorpej
e554af53c2 Use the pool allocator (and the "nointr" pool page allocator) for
vm_amap structures.
1998-08-29 01:05:28 +00:00
thorpej
7cad30cd22 Add a couple of comments about how the pool page allocator functions
can be called with a map that doesn't require spl protection.
1998-08-28 21:16:23 +00:00
thorpej
77d0a69569 Add a waitok boolean argument to the VM system's pool page allocator backend. 1998-08-28 20:05:48 +00:00
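A sketch of what the extended backend might look like; the function name and argument order are assumptions:

	/* The pool layer now says whether the backend may sleep for memory;
	 * a caller passing FALSE must tolerate a NULL return. */
	void *uvm_km_alloc_poolpage1(vm_map_t map, struct uvm_object *obj,
	    boolean_t waitok);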
drochner
9b25897ec0 minor consistency nit: the page index into an anon object is always
assigned from integer types, and it is compared to integers. So
let it be an integer instead of vsize_t.
1998-08-13 17:32:46 +00:00
eeh
a2dd74ed79 Merge paddr_t changes into the main branch. 1998-08-13 02:10:37 +00:00
perry
2c8717021d bzero->memset, bcopy->memcpy, bcmp->memcmp 1998-08-09 22:36:37 +00:00
thorpej
45d17e02f7 We need to be able to specify a uvm_object to the pool page allocator, too. 1998-08-01 01:39:03 +00:00
thorpej
55bf1fd9ad Allow an alternate splimp-protected map to be specified in the pool page
allocator routines.
1998-07-31 20:46:36 +00:00
thorpej
e95c22ee96 Don't cast the null residual pointer passed to vn_rdwr(). 1998-07-28 18:17:34 +00:00
thorpej
8325d058bf Implement uvm_km_{alloc,free}_poolpage(). These functions use pmap hooks to
map/unmap pool pages if provided by the pmap layer.
1998-07-24 20:28:48 +00:00
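A hedged sketch of the hook arrangement described above; the hook macro name and fallback path are assumptions:

	#ifdef PMAP_MAP_POOLPAGE
		/* pmap provides a direct mapping: map an already-allocated
		 * physical page without touching the kernel maps at all */
		va = PMAP_MAP_POOLPAGE(VM_PAGE_TO_PHYS(pg));
	#else
		/* fall back to a normal kernel-memory allocation */
		va = uvm_km_kmemalloc(kmem_map, NULL, PAGE_SIZE, 0);
	#endif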
thorpej
7d8833a179 Put back swap_data_lock, which was apparently deleted accidentally during
the last round of changes.  (I noticed it because I run my kernels w/
LOCKDEBUG.)
1998-07-24 18:46:02 +00:00
pk
363f729ada Use memory pools to allocate swap buffers. Allocations are all dynamic;
in particular `nswbuf' is gone, as is the private "struct buf" list that
was previously maintained in here.
1998-07-23 20:51:09 +00:00
pk
20f05a4bb5 Include pool_drain() in page scans. 1998-07-23 20:36:09 +00:00
pk
f640e832b1 Make sure to release buffers only once. 1998-07-08 18:41:24 +00:00
thorpej
7fd701e0fa Add support for multiple memory free lists. There is at least one
default free list, and 0..N additional free lists, in order of descending
priority.

A new page allocation function, uvm_pagealloc_strat(), has been added,
providing three page allocation strategies:

	- normal: high -> low priority free list walk, taking the
	  page off the first free list that has one.

	- only: attempt to allocate a page only from the specified free
	  list, failing if that free list has none available.

	- fallback: if `only' fails, fall back on `normal'.

uvm_pagealloc(...) is provided for normal use (and is a synonym for
uvm_pagealloc_strat(..., UVM_PGA_STRAT_NORMAL, 0); the free list argument
is ignored for the `normal' case).

uvm_page_physload() now specifies which free list the pages will be
loaded onto.  This means that some platforms which have multiple physical
memory segments may define additional vm_physsegs if they wish to break
individual physical segments into differing priorities.

Machine-dependent code must define _at least_ the following constants
in <machine/vmparam.h>:

	VM_NFREELIST: the number of free lists the system will have

	VM_FREELIST_DEFAULT: the default freelist (should always be 0,
	but is defined in machdep code so that it's with all of the
	other free list-related constants).

Additional free list names may be defined by machine-dependent code, but
they will only be used by machine-dependent code (e.g. for loading the
vm_physsegs).
1998-07-08 04:28:27 +00:00
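A hedged illustration of the machine-dependent side plus an "only"-strategy allocation; the extra free-list name and the values are examples, and the call uses the flags-extended signature from the 1999-04-11 entry above:

	/* <machine/vmparam.h> (illustrative values) */
	#define VM_NFREELIST		2
	#define VM_FREELIST_DEFAULT	0	/* must be 0 */
	#define VM_FREELIST_ISADMA	1	/* MD-private example */

	/* take a page from the ISA DMA list or fail -- no fallback */
	pg = uvm_pagealloc_strat(NULL, 0, NULL, 0,
	    UVM_PGA_STRAT_ONLY, VM_FREELIST_ISADMA);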
thorpej
8d986de632 Add support for mmap'ing disk block devices. 1998-07-07 23:22:13 +00:00
jonathan
466e784ee1 defopt DDB. 1998-07-04 22:18:13 +00:00
pk
3d29b1e56c Shield `#include opt_*.h'. 1998-07-04 08:44:04 +00:00
sommerfe
7ba7fbbb23 Always include fifos; "not an option any more". 1998-06-24 20:58:44 +00:00
sommerfe
becaafeea0 defopt for options FIFO 1998-06-22 22:00:59 +00:00
mrg
f63fe467c9 Add new history grovelling function uvm_hist() that takes a bitmask of
histories to merge in chronological order.  currently, MAPHIST and
PDHIST are defined as 1 and 2 respectively.  passing a bitmask of 0
to uvm_hist() will dump all maps.
1998-06-20 13:19:00 +00:00
mrg
f51ed1e3f8 add a "<-done!" log 1998-06-20 13:16:29 +00:00
ross
ebcb47db53 Correct an expression that tried to compute the swap size in bytes using
an int object; this sometimes prevented swap_on() of a dev/file > 2^31 bytes.
1998-06-17 07:38:28 +00:00
cgd
651b44e211 Rework the way kernel include files are installed. In the new method,
as with user-land programs, include files are installed by each directory
in the tree that has includes to install.  (This allows more flexibility
as to what gets installed, makes 'partial installs' easier, and gives us
more options as to which machines' includes get installed at any given
time.)  The old SYS_INCLUDES={symlinks,copies} behaviours are _both_
still supported, though at least one bug in the 'symlinks' case is
fixed by this change.  Include files can't be built before installation,
so directories that have includes as targets (e.g. dev/pci) have to move
those targets into a different Makefile.
1998-06-12 23:22:30 +00:00
chs
a5550009e6 correct counting for uvmexp.wired:
only pages explicitly wired by a user process should be counted.
1998-06-09 05:18:52 +00:00
mark
7689b22688 Use the sparc's GCC lossage fix for the arm32 port as well. Problem appears
to be a compiler bug resulting in a 'variable possibly used uninitialised'
warning when optimisation is used.
1998-06-02 20:51:24 +00:00
kleink
bb7e6a0bdd Per XSH98, const'ify the `addr' arguments to mlock() and munlock(). 1998-05-30 22:21:03 +00:00
chuck
07c8bdc65f unstatic uvm_page_physload so pmap modules can use it too.
as requested by Eduardo E. Horvath
1998-05-28 15:31:31 +00:00
chuck
08a4f7fa4c fix bug in uvm_map_extract, remove case. make sure we update the loop
variable before removing the entry from the map.
[bug was not causing problems because the remove case isn't currently
 being used ...]
1998-05-22 02:01:54 +00:00
thorpej
ad7a87400a defopt LOCKDEBUG 1998-05-20 01:32:29 +00:00
pk
df238837b0 No dummy locks if LOCKDEBUG. 1998-05-18 15:00:50 +00:00
chuck
d6fddd553f detect ending VA wrap-around in the chunking code of amap_copy.
fixes problem reported by Ken Nakata <kenn@synap.ne.jp> on the mac68k
where the stack amap chunking caused entry->end to wrap around to zero,
thus corrupting the map entry list and causing kmem_map to fill.
1998-05-14 13:51:28 +00:00
mrg
6b11eea5b2 reject attempts to map an immutable or append-only file, shared with
write protection.  this stops data corruption where it was possible
to change the in-memory copy of an append-only file (but not the on-disk
copy).  this is documented in NetBSD security advisory 1998-003.  thanks
to darrenr, lukem, cgd, mycroft and mrg for this.
1998-05-10 12:35:58 +00:00
kleink
6fa43ba824 Minor KNF. 1998-05-09 15:05:50 +00:00
kleink
afeaa5bb57 Use size_t to pass the length of the memory region to operate on to chgkprot(),
kernacc(), useracc(), vslock() and vsunlock(); (unsigned) ints are not
adequate on all platforms.
1998-05-09 15:04:39 +00:00
kleink
d9066c40e9 Make uvm_vsunlock() actually use the proc * passed to it; per discussion
with Jason Thorpe.
1998-05-08 17:41:41 +00:00
kleink
182e12f413 Remove inclusions of syscall (and syscall argument) related header files;
we don't need them here.
1998-05-05 20:51:04 +00:00
mrg
dae55b7b7b fix a problem with swapping to files where a new variable introduced was not
later incremented correctly, causing the wrong data to be paged out, which
then caused general lossage later when the data was paged in and the process
tried to use it.  found by pk.
1998-05-01 01:40:02 +00:00
thorpej
73863dd3c9 Pass vslock() and vsunlock() a proc *, rather than implicitly operating
on curproc.
1998-04-30 06:28:57 +00:00
matthias
fceafcf990 port-pc532 now has pmap_new just like port-i386. 1998-04-25 19:58:58 +00:00
thorpej
339c715a9e Fix small whitespace botch. 1998-04-16 03:54:35 +00:00
thorpej
fe97b1da8e Oops, fix a typo. 1998-04-09 00:24:05 +00:00
thorpej
2018d40811 Allocate kernel virtual address space for the U-area before allocating
the new proc structure when performing a fork.  This makes it much
easier to abort a fork operation and return an error if we run out
of KVA space.

The U-area pages are still wired down in {,u}vm_fork(), as before.
1998-04-09 00:23:38 +00:00
tv
39b4c2fece mmap() default MAP_SHARED/MAP_PRIVATE is `DEBUG', not `DIAGNOSTIC' 1998-04-01 21:43:52 +00:00
chuck
9eb2927bec free the correct page in the incomplete section of MNN, as pointed
out by Soren S. Jorvang.
1998-03-31 03:04:59 +00:00
chuck
fe4846acdc have ddb show map print resident page count 1998-03-30 17:34:58 +00:00
mycroft
0652b9af01 Mark scheduler() and uvm_scheduler() as never returning. 1998-03-30 06:24:42 +00:00
kleink
6618749e5a Per XPG, if the file descriptor argument to mmap() refers to a file whose
type is not supported (neither VREG nor VCHR, or not a vnode at all), fail
with ENODEV instead of EINVAL.
1998-03-28 16:58:04 +00:00
thorpej
b65c510879 Split uvmspace_alloc() into uvmspace_alloc() and uvmspace_init(). The latter
can be used for initializing a pre-allocated vmspace.
1998-03-27 01:47:06 +00:00
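A sketch of the split, with assumed prototypes:

	/* allocate + initialize in one step, as before */
	struct vmspace *uvmspace_alloc(vaddr_t min, vaddr_t max,
	    boolean_t pageable);

	/* initialize storage the caller already owns
	 * (e.g. a pre-allocated vmspace) */
	void uvmspace_init(struct vmspace *vm, struct pmap *pmap,
	    vaddr_t min, vaddr_t max, boolean_t pageable);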
chuck
db81cc9c76 update per-process rusage fault counters (ru_majflt/ru_minflt) under UVM 1998-03-26 21:50:14 +00:00
chuck
e6da5a01e4 remove tmpwire arg from uvm_pagewire() -- it isn't needed anymore.
noted by chuck s.
1998-03-22 21:29:30 +00:00
chuck
617118bccc fix released pg bugs detected by Chuck S.:
- release the correct page (ppsp[lcv], not pg)
 - don't access the page's fields after we have released it
 - in the uvm_object case: move on to the next page if we've released it
[should have been merged in on 1998/02/12, but we somehow missed it]
1998-03-22 16:10:29 +00:00
chuck
dd370f11a3 rework the copy inheritance case of fork. the old way did not handle
the very rare case of shared mappings that have amap's attached in a
reasonable way -- this is not currently causing any problems, but i
fixed it anyway.  update the comment in this section of code and also
be smarter about avoiding needless calls to pmap_protect().
1998-03-19 19:26:50 +00:00
thorpej
dbc7bbee68 Make the previous change `atomic'. 1998-03-19 06:37:26 +00:00
thorpej
daade671ae When unsharing or execing, deactivate the old vmspace before reassigning
and activating the new one.  Pointed out by Chris Demetriou.
1998-03-19 04:19:21 +00:00
mrg
be92b169f8 oops, missed a bit of KNF here. 1998-03-17 07:50:08 +00:00
chuck
927ec8b012 bug fix: when doing uvm_vnp_sync() actually skip over blocked uvn's so
that we don't try and sync them later.   should get rid of the
"uvm_vnp_sync: dying vnode on sync list" related warnings that were
occurring during a "make install."
1998-03-11 01:37:40 +00:00
chuck
21624aaf72 uvm_dump now dumps some important pointers for debugging 1998-03-10 14:36:55 +00:00
mrg
8106d13596 KNF. 1998-03-09 00:58:55 +00:00
mycroft
24e6e6a0e7 Convert MAP_PRIVATE device mappings to MAP_SHARED on *all* platforms, not just
the SPARC.
Remove the #ifdef COMPAT_13 for automatically adding a sharing type, since the
interface is *supposed* to support this.
Also modify the DIAGNOSTIC messages here a bit.
1998-03-03 14:34:10 +00:00
fvdl
e5bc90f40c Merge with Lite2 + local changes 1998-03-01 02:20:01 +00:00
chuck
cbd05b1537 be consistent about offsets in kernel objects. vm_map_min(kernel_map)
should always be the base [fixes problem on m68k detected by jason thorpe]

add comments to uvm_km.c explaining kernel memory management in more detail
1998-02-24 15:58:09 +00:00
thorpej
1e3e1bfe09 Include the NFS option header. 1998-02-19 00:55:04 +00:00
drochner
93a065690b fix map range boundary check 1998-02-18 14:50:32 +00:00
mrg
3ed2e6ac6c bug fix from chuck: uvm_vnp_terminate panic when /sbin/init was unlinked 1998-02-18 06:35:46 +00:00
thorpej
550678e57f Oops, fix a typo. 1998-02-13 05:34:30 +00:00
thorpej
e6c31d3db7 KNF. 1998-02-13 05:33:55 +00:00
thorpej
872181c2f2 A few changes to make it possible to read UVM histories from userland:
- Protect option headers from inclusion if ! _KERNEL or if _LKM.
- Make sure struct uvm_history is always the same size (not dependent
  on NCPU).
- Add fmtlen and fnlen members to struct uvm_history_ent, which specify
  the lengths of the fmt and fn strings.
- Add name, namelen, and a list entry to struct uvm_history.
- When a history is initialized, place it on the global list of all histories.
1998-02-13 04:55:14 +00:00
thorpej
cb5f8ef1df Add a global list of all UVM histories. 1998-02-13 04:52:00 +00:00
thorpej
90aee42d35 Provide a patchable knob (uvmhist_print_enabled) so that UVM history
buffer printing can be switched on and off at run-time.  Only exists
if the kernel is built with UVMHIST_PRINT, and defaults to `on'.
1998-02-12 20:10:15 +00:00
chs
7f45dbdfae add copyright. 1998-02-12 07:36:43 +00:00
mrg
d90485202c - add defopt's for UVM, UVMHIST and PMAP_NEW.
- remove unnecessary UVMHIST_DECL's.
1998-02-10 14:08:44 +00:00
perry
021fdb646a add/cleanup multiple inclusion protection. 1998-02-10 02:34:17 +00:00
mrg
e92c7d991e KNF. 1998-02-09 14:35:48 +00:00
mrg
7d3aef40b3 keep statistics on pageout/pagein, total pages, and total operations. 1998-02-09 13:08:22 +00:00
mrg
3112d4b3e1 KNF. 1998-02-09 04:05:36 +00:00
mrg
7ff12d37cc fill out vmtotals: t_free, t_vm, t_avm, t_rm and t_arm. leaves shared of same, and t_pw. 1998-02-08 22:23:33 +00:00
thorpej
39f8b8c99b Round allocations to page size in uvm_pageboot_alloc(). 1998-02-08 18:27:30 +00:00
mrg
6122fae970 KNF 1998-02-08 16:07:57 +00:00
mrg
bc3395e590 turn off UVM history logging by default. 1998-02-08 14:19:21 +00:00
mrg
d9b2f81e27 move pdhist initialisation to the same place as maphist. also, declare
the history buffers as "struct uvm_history_ent" to ensure proper
alignment (eg, alpha).  this fixes a boottime panic when the pdhist was
used before it had been initialised.
1998-02-08 07:52:28 +00:00
thorpej
1305ecbe62 Allow callers of uvm_km_suballoc() to specify where the base of the
submap _must_ begin, by adding a "fixed" boolean argument.
1998-02-08 06:15:53 +00:00
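A sketch of the extended call, with an assumed argument list:

	/* with fixed == TRUE, *vmin is honoured as the required base of the
	 * submap rather than treated as a hint */
	submap = uvm_km_suballoc(kernel_map, &vmin, &vmax, size,
	    pageable, TRUE /* fixed */, NULL);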
mrg
0a058cb62f implement counters for pages paged in/out 1998-02-07 17:00:36 +00:00
mrg
4ef57d4d22 KNF. 1998-02-07 12:45:53 +00:00
mrg
5e55ce6648 bzero the entire vmspace, like the old vm does. makes ps report sane values of VSZ for swapper/pagedaemon 1998-02-07 12:31:32 +00:00
mrg
1f6b921cf7 restore rcsids 1998-02-07 11:07:38 +00:00
chs
9b371040ea keep track of how many pages are currently being paged out,
stop initiating new pageouts when "(free + paging) > freetarg".
fix pageq locking.
1998-02-07 02:35:11 +00:00
chs
c2f8ffc062 reserve some pages for the kernel, and some more especially
for the pagedaemon allocating from kmem_object.  this should
prevent the pagedaemon from running out of memory and deadlocking.
fix counting of wired pages.
add some debugging code to detect attempts to reference free vm_pages.
1998-02-07 02:34:08 +00:00
chs
249efd73a1 enable hashtables for swapslot storage - deadlock is fixed.
fix initialization of swhash entries.
use malloc(M_NOWAIT) for creating kernel object.
avoid dereferencing a vm_page once the page has been freed.
1998-02-07 02:32:37 +00:00
chs
39c12db74f declare aobj_pager, needed in uvm_km.c. 1998-02-07 02:31:06 +00:00
chs
c82ac447df convert kernel_object to an aobj.
in uvm_km_pgremove(), free swapslots if the object is an aobj.
in uvm_km_kmemalloc(), mark pages as wired and count them.
1998-02-07 02:29:21 +00:00
chs
6376c02019 enable paging of kernel_object. 1998-02-07 02:26:46 +00:00
chs
732a925b1b add locking of kernel_map in uvm_kernacc().
check return value of uvm_fault_wire() in uvm_fork().
enable swapping.
1998-02-07 02:26:04 +00:00
chs
29ec5fd8d5 prototype for uvm_map_checkprot() moved here.
add uvmexp fields for pageouts-in-progress and kernel-reserved pages.
1998-02-07 02:24:02 +00:00
chs
273ac223ec prototype for uvm_map_checkprot() moved to uvm_extern.h. 1998-02-07 02:22:24 +00:00
chs
21e2cac359 fix typos in locking.
use M_UVMAMAP instead of M_TEMP for malloc type.
1998-02-07 02:21:29 +00:00
chs
5a7c4f2caa don't try to relock amap if there isn't one. 1998-02-07 02:19:55 +00:00
chs
7cb9f7e5b1 rearrange a bit for clarity. 1998-02-07 02:18:27 +00:00
chs
098b8c2420 fix typos in locking. 1998-02-07 02:17:48 +00:00
chs
1f583a43b1 remove locking from UVMCNT counters.
they don't need to be exact, and the locking causes problems
in some of the places they're used.
1998-02-07 02:16:52 +00:00
thorpej
9eb328b495 RCS ID police. 1998-02-06 22:26:13 +00:00