Commit Graph

133 Commits

Author SHA1 Message Date
chs 28411139b3 be consistent with locking of amaps and anons when freeing them. 1998-11-04 07:07:22 +00:00
chs e4c4ea06b4 remove outdated comment. 1998-11-04 07:06:05 +00:00
chs 23ed4b5656 we must unlock a vp's object's lock before calling vrele(). 1998-11-04 06:21:40 +00:00
mrg bba8470ccb KNF a missing bit. remove register. 1998-10-24 13:32:34 +00:00
tron c71ccab136 Defopt SYSVMSG, SYSVSEM and SYSVSHM. 1998-10-19 22:21:19 +00:00
chs 549cd579e5 shift by PAGE_SHIFT instead of multiplying or dividing by PAGE_SIZE. 1998-10-18 23:49:59 +00:00
tv 000978aaca Check for gcc the Right way when quashing -Wuninitialized goop. 1998-10-16 19:34:57 +00:00
chuck 025ae6bd64 remove unused share map code from UVM:
- update calls to uvm_unmap_remove/uvm_unmap (mainonly boolean arg has been removed)
- replace UVM_ET_ISMAP checks with UVM_ET_ISSUBMAP checks
1998-10-11 23:18:20 +00:00
chuck 03939069dc remove unused share map code from UVM:
- update uvm_faultinfo's rvaddr to orig_rvaddr to match changes from
	uvm_fault.h
1998-10-11 23:16:20 +00:00
chuck 2d4c15ebc9 remove unused share map code from UVM:
- replace map checks with submap checks
- get rid of unused 'mainonly' arg in uvm_unmap/uvm_unmap_remove, simplify code. update all calls to reflect this.
- don't worry about unmapping or changing the protection of shared share map mappings (is_main_map no longer used).
- remove unused uvm_map_sharemapcopy() function from fork code.
1998-10-11 23:14:47 +00:00
chuck 1b59a238c4 remove unused share map code from UVM:
- simplify uvm_faultinfo in uvm_fault.h (parent map tracking no longer needed)
- adjust locking and lookup functions in uvm_fault_i.h to reflect the above
- replace ufi.rvaddr with ufi.orig_rvaddr in uvm_fault.c since rvaddr is no longer needed.
- no need to worry about share map translations in uvm_fault(). simplify.
1998-10-11 23:07:42 +00:00
chuck 8ffef382dd remove unused share map code from UVM:
- udv_fault() no longer has to worry about share map address translations on device faults. simplify code.
1998-10-11 23:02:31 +00:00
chuck a4d3b16d22 remove unused share map code from UVM:
dump UVM_ET_MAP/UVM_ET_ISMAP. if you need to detect a submap use UVM_ET_SUBMAP/UVM_ET_ISSUBMAP.
1998-10-11 22:59:53 +00:00
chuck 495b6aafdc fix ppref botch. establish ppref at split time before we add the duplicate
reference.
1998-10-08 19:47:50 +00:00
mrg fdc5499c5f back out previous. 1998-09-30 15:44:10 +00:00
tv 8219f068e2 Declare silent success on madvise(). As an advisory call, it is harmless
to pretend success even though it's not supported, and some emulations
rely on its success.
1998-09-30 12:07:51 +00:00
thorpej feb1d22dcc NCPU > 1 -> MULTIPROCESSOR 1998-09-24 23:00:43 +00:00
thorpej 1e2aeb4a35 Add a comment documenting the last change. 1998-09-18 19:28:22 +00:00
thorpej 5dd4b45577 Don't use the nointr pool page allocator for the uao_swhash_elt pool. We
need to ensure that these come from a non-pageable kernel map, otherwise
we can run into a deadlock condition (as noticed by Chuck Silvers).
1998-09-18 19:27:20 +00:00
thorpej 28904fca48 Implement uvm_exit(), which frees VM resources when a process finishes
exiting.
1998-09-08 23:44:21 +00:00
pk d2d3f83fd7 Panic instead of failing the syscall on an impossible condition (from Robert Elz).
Plug a possible memory leak in the recently added device path code.
1998-09-06 23:09:39 +00:00
thorpej 38e7a08bed Allocate vm_anon arrays from kernel_map, not via MALLOC(). Helps relieve
much of UVM's kmem_map usage.
1998-08-31 02:43:14 +00:00
thorpej d865961d77 Back out previous; I should have instrumented the benefit of this one
first.
1998-08-31 01:54:14 +00:00
thorpej 7338d4e403 Use the pool allocator and the "nointr" pool page allocator for vm_map's. 1998-08-31 01:50:08 +00:00
thorpej be8d09cda3 Use the pool allocator and the "nointr" pool page allocator for dynamically
allocated vm_map_entry's.
1998-08-31 01:10:15 +00:00
thorpej 99626224a7 Use the pool allocator and the "nointr" pool page allocator for vmspace
structures.
1998-08-31 00:20:26 +00:00
thorpej 694e9583aa Make sure the aobj_pager gets initialized! 1998-08-31 00:03:02 +00:00
thorpej 5a4981d9b8 Use the pool allocator w/ the "nointr" pool page allocator for uvm_aobj
and uao_swhash_elt structures.  Also, fix a bug in uao_set_swslot() where
setting the swslot to 0 (freeing swap resources) when no swslot was
currently allocated would allocate a new entry anyhow (revealed during
pool'ification).
1998-08-31 00:01:59 +00:00
enami 71ba20edbb Define `len' as size_t rather than int so that the correct type is passed
as the fourth argument of copystr.
1998-08-30 03:08:43 +00:00
mrg edda33e00c move <vm/vm_swap.h> to <sys/swap.h>. <vm/vm_swap.h> still works for now (goes away later) 1998-08-29 17:01:14 +00:00
mrg b5f69ff667 add a `char se_path[PATH_MAX]' member to struct swapent, that
the pathname of the swap device is saved into.  add a char *swd_path
member to struct swapdev, that contains a copy of the pathname
(using malloc(9)).  rename swapctl(2)'s SWAP_STATS to SWAP_OSTATS,
and add a new SWAP_STATS command (with a new command number).  make the
new swapctl(SWAP_STATS, ...) copy the path out.  if COMPAT_13, also
include support for SWAP_OSTATS.  also fix a minor bug in swapctl(2).

the point of this is that swapfiles are now shown in `swapctl -l'.
1998-08-29 13:27:50 +00:00
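
As a rough illustration of the new interface (not part of the commit), a userland program can now recover swap pathnames the same way `swapctl -l' does. The sketch below assumes the NetBSD swapctl(2) convention in which SWAP_NSWAP returns the number of configured swap devices and SWAP_STATS fills an array of struct swapent, including the new se_path member:

/*
 * Minimal sketch, not from the commit: list swap devices with the
 * new SWAP_STATS command and the se_path member added above.
 */
#include <sys/types.h>
#include <sys/swap.h>

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main(void)
{
	struct swapent *sep;
	int i, n;

	/* number of configured swap devices/files */
	n = swapctl(SWAP_NSWAP, NULL, 0);
	if (n <= 0)
		return 0;

	sep = calloc(n, sizeof(*sep));
	if (sep == NULL)
		return 1;

	/* new-style stats: each entry carries its pathname */
	n = swapctl(SWAP_STATS, sep, n);
	for (i = 0; i < n; i++)
		printf("%s: %d blocks, %d in use, priority %d\n",
		    sep[i].se_path, sep[i].se_nblks, sep[i].se_inuse,
		    sep[i].se_priority);

	free(sep);
	return 0;
}
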
thorpej e554af53c2 Use the pool allocator (and the "nointr" pool page allocator) for
vm_amap structures.
1998-08-29 01:05:28 +00:00
thorpej 7cad30cd22 Add a couple of comments about how the pool page allocator functions
can be called with a map that doesn't require spl protection.
1998-08-28 21:16:23 +00:00
thorpej 77d0a69569 Add a waitok boolean argument to the VM system's pool page allocator backend. 1998-08-28 20:05:48 +00:00
drochner 9b25897ec0 minor consistency nit: the page index into an anon object is always
assigned from integer types and compared to integers, so let it be an
integer instead of a vsize_t.
1998-08-13 17:32:46 +00:00
eeh a2dd74ed79 Merge paddr_t changes into the main branch. 1998-08-13 02:10:37 +00:00
perry 2c8717021d bzero->memset, bcopy->memcpy, bcmp->memcmp 1998-08-09 22:36:37 +00:00
thorpej 45d17e02f7 We need to be able to specify a uvm_object to the pool page allocator, too. 1998-08-01 01:39:03 +00:00
thorpej 55bf1fd9ad Allow an alternate splimp-protected map to be specified in the pool page
allocator routines.
1998-07-31 20:46:36 +00:00
thorpej e95c22ee96 Don't cast the null residual pointer passed to vn_rdwr(). 1998-07-28 18:17:34 +00:00
thorpej 8325d058bf Implement uvm_km_{alloc,free}_poolpage(). These functions use pmap hooks to
map/unmap pool pages if provided by the pmap layer.
1998-07-24 20:28:48 +00:00
thorpej 7d8833a179 Put back swap_data_lock, which was apparently deleted accidentally during
the last round of changes.  (I noticed it because I run my kernels w/
LOCKDEBUG.)
1998-07-24 18:46:02 +00:00
pk 363f729ada Use memory pools to allocate swap buffers. Allocations are all dynamic;
in particular `nswbuf' is gone, as is the private "struct buf" list that
was previously maintained in here.
1998-07-23 20:51:09 +00:00
pk 20f05a4bb5 Include pool_drain() in page scans. 1998-07-23 20:36:09 +00:00
pk f640e832b1 Make sure to release buffers only once. 1998-07-08 18:41:24 +00:00
thorpej 7fd701e0fa Add support for multiple memory free lists. There is at least one
default free list, and zero to N additional free lists, in order of descending
priority.

A new page allocation function, uvm_pagealloc_strat(), has been added,
providing three page allocation strategies:

	- normal: high -> low priority free list walk, taking the
	  page off the first free list that has one.

	- only: attempt to allocate a page only from the specified free
	  list, failing if that free list has none available.

	- fallback: if `only' fails, fall back on `normal'.

uvm_pagealloc(...) is provided for normal use (and is a synonym for
uvm_pagealloc_strat(..., UVM_PGA_STRAT_NORMAL, 0); the free list argument
is ignored for the `normal' case).

uvm_page_physload() now specifies which free list the pages will be
loaded onto.  This means that some platforms which have multiple physical
memory segments may define additional vm_physsegs if they wish to break
individual physical segments into differing priorities.

Machine-dependent code must define _at least_ the following constants
in <machine/vmparam.h>:

	VM_NFREELIST: the number of free lists the system will have

	VM_FREELIST_DEFAULT: the default freelist (should always be 0,
	but is defined in machdep code so that it's with all of the
	other free list-related constants).

Additional free list names may be defined by machine-dependent code, but
they will only be used by machine-dependent code (e.g. for loading the
vm_physsegs).
1998-07-08 04:28:27 +00:00
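
The free-list machinery above leaves the actual list layout to machine-dependent code. Below is a minimal sketch of the required <machine/vmparam.h> constants for a hypothetical port that keeps DMA-capable pages on their own list; VM_FREELIST_ISADMA and the calls in the trailing comment are illustrative, not taken from the commit:

/* Illustrative <machine/vmparam.h> fragment for a hypothetical port. */
#define	VM_NFREELIST		2	/* total number of free lists */
#define	VM_FREELIST_DEFAULT	0	/* default list; should always be 0 */
#define	VM_FREELIST_ISADMA	1	/* example MD list: ISA DMA-able pages */

/*
 * MD code would then load low memory onto its own list, e.g.
 *	uvm_page_physload(start, end, avail_start, avail_end,
 *	    VM_FREELIST_ISADMA);
 * and a caller that must have such a page could ask for one with the
 * `only' strategy:
 *	pg = uvm_pagealloc_strat(obj, off, anon, flags,
 *	    UVM_PGA_STRAT_ONLY, VM_FREELIST_ISADMA);
 * (argument lists shown schematically, not as exact prototypes).
 */
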
thorpej 8d986de632 Add support for mmap'ing disk block devices. 1998-07-07 23:22:13 +00:00
jonathan 466e784ee1 defopt DDB. 1998-07-04 22:18:13 +00:00
pk 3d29b1e56c Shield `#include opt_*.h'. 1998-07-04 08:44:04 +00:00
sommerfe 7ba7fbbb23 Always include fifos; "not an option any more". 1998-06-24 20:58:44 +00:00