Commit Graph

39 Commits

Author SHA1 Message Date
thorpej 72a24b4eae Add an align argument to uvm_map() and some callers of that
routine.  Works similarly to pmap_prefer(), but allows callers
to specify a minimum power-of-two alignment of the region.
How we ever got along without this for so long is beyond me.
2000-09-13 15:00:15 +00:00
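A minimal sketch of the widened call, assuming the new align argument sits
between the object offset and the map flags (the 64KB request and variable
names are illustrative, not taken from the commit):

    vaddr_t va;
    int error;

    /* ask for `size' bytes of kernel VA on a 64KB boundary */
    error = uvm_map(kernel_map, &va, size, NULL, UVM_UNKNOWN_OFFSET,
        0x10000, UVM_MAPFLAG(UVM_PROT_RW, UVM_PROT_RW, UVM_INH_NONE,
        UVM_ADV_NORMAL, 0));
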
jeffs 05dfa34e69 Add uvm_km_valloc_prefer_wait(). Used to valloc with the passed in
voff_t being passed to PMAP_PREFER(), which results in the proper
virtual alignment of the allocated space.
2000-07-24 20:10:51 +00:00
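A hedged sketch of the intended use (variable names are illustrative): the
offset handed in is fed to PMAP_PREFER() so the returned kernel VA lands on
a cache-friendly virtual alignment, sleeping until space is available.

    vaddr_t kva;

    /* block until VA is free, aligned as PMAP_PREFER() suggests for `uoffset' */
    kva = uvm_km_valloc_prefer_wait(kernel_map, round_page(size), uoffset);
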
mrg dea44a9ec4 remove include of <vm/vm.h> 2000-06-27 17:29:17 +00:00
mrg 2f159a1bac remove/move more mach vm header files:
	<vm/pglist.h> -> <uvm/uvm_pglist.h>
	<vm/vm_inherit.h> -> <uvm/uvm_inherit.h>
	<vm/vm_kern.h> -> into <uvm/uvm_extern.h>
	<vm/vm_object.h> -> nothing
	<vm/vm_pager.h> -> into <uvm/uvm_pager.h>

also includes a bunch of <vm/vm_page.h> include removals (due to redundancy
with <vm/vm.h>), and a scattering of other similar headers.
2000-06-26 14:20:25 +00:00
thorpej 39294d89e5 __predict_false() out-of-resource conditions and DIAGNOSTIC error checks. 2000-05-08 23:10:20 +00:00
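__predict_false() marks a condition as almost never true so the compiler can
keep the error path off the hot path; a minimal sketch (the allocation shown
is only an example):

    pg = uvm_pagealloc(NULL, 0, NULL, 0);
    if (__predict_false(pg == NULL)) {
        /* rare out-of-resource case: branch predicted not-taken */
        return (NULL);
    }
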
chs 16f0ca3612 add support for ``swapctl -d'' (removing swap space).
improve handling of i/o errors in swap space.

reviewed by:  Chuck Cranor
2000-01-11 06:57:49 +00:00
thorpej 1da427a80a Change the pmap_enter() API slightly; pmap_enter() now returns an error
value (KERN_SUCCESS or KERN_RESOURCE_SHORTAGE) indicating if it succeeded
or failed.  Change the `wired' and `access_type' arguments to a single
`flags' argument, which includes the access type, and flags:

	PMAP_WIRED	the old `wired' boolean
	PMAP_CANFAIL	pmap_enter() is allowed to fail

If PMAP_CANFAIL is not specified, the pmap should behave as it always
has in the face of a drastic resource shortage: fall over dead.

Change the fault handler to deal with failure (which indicates resource
shortage) by unlocking everything, waiting for the pagedaemon to free
more memory, then retrying the fault.
1999-11-13 00:24:38 +00:00
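A hedged sketch of a post-change fault-path call (the variable names, wmesg
string, and ReFault label are illustrative of uvm_fault()'s structure, not
copied from it):

    error = pmap_enter(ufi->orig_map->pmap, ufi->orig_rvaddr,
        VM_PAGE_TO_PHYS(pg), enter_prot,
        access_type | PMAP_CANFAIL | (wired ? PMAP_WIRED : 0));
    if (error != KERN_SUCCESS) {
        /* KERN_RESOURCE_SHORTAGE: unlock, let the pagedaemon work, refault */
        uvmfault_unlockall(ufi, amap, uobj, anon);
        uvm_wait("pmapwait");
        goto ReFault;
    }
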
chs f3a668ed84 eliminate the PMAP_NEW option by making it required for all ports.
ports which previously had no support for PMAP_NEW now implement
the pmap_k* interfaces as wrappers around the non-k versions.
1999-09-12 01:16:55 +00:00
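On such a port the wrappers can be as thin as the sketch below, which uses
the pmap_enter() signature of this era; the access_type passed is an
assumption:

    void
    pmap_kenter_pa(vaddr_t va, paddr_t pa, vm_prot_t prot)
    {
        /* kernel mappings are always wired */
        pmap_enter(pmap_kernel(), va, pa, prot, TRUE,
            VM_PROT_READ | VM_PROT_WRITE);
    }

    void
    pmap_kremove(vaddr_t va, vsize_t len)
    {
        pmap_remove(pmap_kernel(), va, va + len);
    }
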
thorpej 3f176180d5 Garbage collect thread_sleep()/thread_wakeup() left over from the old
Mach VM code.  Also nuke iprintf(), which was no longer used anywhere.

Add proclist locking where appropriate.
1999-07-22 22:58:38 +00:00
thorpej 2c668fb0d4 0 -> FALSE in a few places. 1999-07-22 21:27:32 +00:00
chs a8f10f9e37 allow uvm_km_alloc_poolpage1() to use kernel-reserve pages. 1999-07-18 22:55:30 +00:00
thorpej fcc55e7687 Garbage-collect uvm_km_get(); nothing actually uses it. 1999-07-17 06:41:36 +00:00
thorpej 2c3dc83a64 Keep interrupt-safe maps on an additional queue. In uvm_fault(), if we're
looking up a kernel address, check to see if the address is on this
"interrupt-safe" list.  If so, return failure immediately.  This prevents
a locking screw if a page fault is taken on an interrupt-safe map in or
out of interrupt context.
1999-06-04 23:38:41 +00:00
thorpej 6b655611b1 Wired kernel mappings are wired; pass VM_PROT_READ|VM_PROT_WRITE for
access_type to pmap_enter() to ensure that when these mappings are accessed,
possibly in interrupt context, they won't cause mod/ref emulation
page faults.
1999-05-26 19:27:49 +00:00
thorpej 2580d306ab Change the vm_map's "entries_pageable" member to a r/o flags member, which
has PAGEABLE and INTRSAFE flags.  PAGEABLE now really means "pageable",
not "allocate vm_map_entry's from non-static pool", so update all map
creations to reflect that.  INTRSAFE maps are maps that are used in
interrupt context (e.g. kmem_map, mb_map), and thus use the static
map entry pool (XXX as does kernel_map, for now).  This will eventually
change how these maps are locked, as well.
1999-05-26 19:16:28 +00:00
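Code that used to test entries_pageable now tests read-only flag bits; a
minimal sketch (the VM_MAP_* spellings are assumptions based on the flag
names above):

    if (map->flags & VM_MAP_INTRSAFE) {
        /* used in interrupt context: entries come from the static pool */
    }
    if (map->flags & VM_MAP_PAGEABLE) {
        /* map memory really is pageable */
    }
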
thorpej 0ff8d3ac1a Define a new kernel object type, "intrsafe", which are used for objects
which can be used in an interrupt context.  Use pmap_kenter*() and
pmap_kremove() only for mappings owned by these objects.

Fixes some locking protocol issues related to MP support, and eliminates
all of the pmap_enter vs. pmap_kremove inconsistencies.
1999-05-25 20:30:08 +00:00
chs f455dd6596 add a `flags' argument to uvm_pagealloc_strat().
define a flag UVM_PGA_USERESERVE to allow non-kernel object
allocations to use pages from the reserve.
use the new flag for allocations in pmap modules.
1999-04-11 04:04:04 +00:00
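A hedged sketch of a pmap-module allocation using the new flag (the argument
order follows the uvm_pagealloc_strat() prototype of the time and is an
assumption):

    /* page-table page: non-kernel object, allowed to dip into the reserve */
    pg = uvm_pagealloc_strat(NULL, 0, NULL, UVM_PGA_USERESERVE,
        UVM_PGA_STRAT_NORMAL, 0);
    if (pg == NULL)
        return (NULL);
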
mycroft 31a2536cd0 Add a new `access type' argument to pmap_enter(). This indicates what type of
memory access a mapping was caused by.  This is passed through from uvm_fault()
and udv_fault(), and in most other cases is 0.
The pmap module may use this to preset R/M information.  On MMUs which require
R/M emulation, the implementation may preset the bits and avoid taking another
fault.  On MMUs which keep R/M information in hardware, the implementation may
preset its cached bits to speed up the next call to pmap_is_modified() or
pmap_is_referenced().
1999-03-26 21:58:39 +00:00
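A minimal sketch of the interface as described above (exact spellings are
assumptions); a write fault passes VM_PROT_WRITE so the pmap may preset its
modified/referenced state:

    void pmap_enter(pmap_t pmap, vaddr_t va, paddr_t pa, vm_prot_t prot,
        boolean_t wired, vm_prot_t access_type);

    pmap_enter(map->pmap, va, VM_PAGE_TO_PHYS(pg), enter_prot, wired,
        VM_PROT_WRITE);
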
chs d97d75d81b add uvmexp.swpgonly and use it to detect out-of-swap conditions. 1999-03-26 17:34:15 +00:00
mrg a0139bc39d remove now >1 year old pre-release message. 1999-03-25 18:48:49 +00:00
cgd 37c88c58da after discussion with chuck, nuke pgo_attach from uvm_pagerops 1999-03-24 03:45:27 +00:00
chs 549cd579e5 shift by PAGE_SHIFT instead of multiplying or dividing by PAGE_SIZE. 1998-10-18 23:49:59 +00:00
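With the page size a power of two, the division and multiplication become
shifts, for example:

    npages = size / PAGE_SIZE;      /* before */
    npages = size >> PAGE_SHIFT;    /* after: same result, no division */
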
chuck 025ae6bd64 remove unused share map code from UVM:
 - update calls to uvm_unmap_remove/uvm_unmap (mainonly boolean arg
   has been removed)
 - replace UVM_ET_ISMAP checks with UVM_ET_ISSUBMAP checks
1998-10-11 23:18:20 +00:00
thorpej 7cad30cd22 Add a couple of comments about how the pool page allocator functions
can be called with a map that doesn't require spl protection.
1998-08-28 21:16:23 +00:00
thorpej 77d0a69569 Add a waitok boolean argument to the VM system's pool page allocator backend. 1998-08-28 20:05:48 +00:00
eeh a2dd74ed79 Merge paddr_t changes into the main branch. 1998-08-13 02:10:37 +00:00
perry 2c8717021d bzero->memset, bcopy->memcpy, bcmp->memcmp 1998-08-09 22:36:37 +00:00
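The mechanical substitutions, including the argument-order swap that
bcopy() -> memcpy() requires:

    memset(p, 0, len);         /* was bzero(p, len) */
    memcpy(dst, src, len);     /* was bcopy(src, dst, len): src/dst order swaps */
    memcmp(a, b, len);         /* was bcmp(a, b, len) */
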
thorpej 45d17e02f7 We need to be able to specify a uvm_object to the pool page allocator, too. 1998-08-01 01:39:03 +00:00
thorpej 55bf1fd9ad Allow an alternate splimp-protected map to be specified in the pool page
allocator routines.
1998-07-31 20:46:36 +00:00
thorpej 8325d058bf Implement uvm_km_{alloc,free}_poolpage(). These functions use pmap hooks to
map/unmap pool pages if provided by the pmap layer.
1998-07-24 20:28:48 +00:00
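A hedged sketch of the idea, assuming the hook is a PMAP_MAP_POOLPAGE() macro
and that the fallback goes through an ordinary kernel allocation (both the
macro name and the fallback path are assumptions):

    #if defined(PMAP_MAP_POOLPAGE)
        /* pmap layer maps the page directly: no kernel_map entry needed */
        pg = uvm_pagealloc(NULL, 0, NULL, 0);
        va = (pg != NULL) ? PMAP_MAP_POOLPAGE(VM_PAGE_TO_PHYS(pg)) : 0;
    #else
        /* no hook provided: fall back to a normal kernel-map allocation */
        va = uvm_km_alloc(kmem_map, PAGE_SIZE);
    #endif
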
chs a5550009e6 correct counting for uvmexp.wired:
only pages explicitly wired by a user process should be counted.
1998-06-09 05:18:52 +00:00
mrg 8106d13596 KNF. 1998-03-09 00:58:55 +00:00
chuck cbd05b1537 be consistent about offsets in kernel objects. vm_map_min(kernel_map)
should always be the base [fixes problem on m68k detected by jason thorpe]

add comments to uvm_km.c explaining kernel memory management in more detail
1998-02-24 15:58:09 +00:00
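In other words, an offset into a kernel object is always computed against the
map base, as sketched here:

    /* offset of a kernel virtual address within kernel_object */
    offset = va - vm_map_min(kernel_map);
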
mrg d90485202c - add defopt's for UVM, UVMHIST and PMAP_NEW.
- remove unnecessary UVMHIST_DECL's.
1998-02-10 14:08:44 +00:00
thorpej 1305ecbe62 Allow callers of uvm_km_suballoc() to specify where the base of the
submap _must_ begin, by adding a "fixed" boolean argument.
1998-02-08 06:15:53 +00:00
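A hedged sketch of a caller pinning the submap base (argument order follows
the uvm_km_suballoc() prototype of the time; the base, size, and variable
names are illustrative):

    vaddr_t minaddr = SUBMAP_BASE;    /* honored because fixed == TRUE */
    vaddr_t maxaddr;

    submap = uvm_km_suballoc(kernel_map, &minaddr, &maxaddr, submap_size,
        FALSE /* pageable */, TRUE /* fixed */, NULL);
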
mrg 1f6b921cf7 restore rcsids 1998-02-07 11:07:38 +00:00
chs c82ac447df convert kernel_object to an aobj.
in uvm_km_pgremove(), free swapslots if the object is an aobj.
in uvm_km_kmemalloc(), mark pages as wired and count them.
1998-02-07 02:29:21 +00:00
thorpej 9eb328b495 RCS ID police. 1998-02-06 22:26:13 +00:00
mrg f2caacc717 initial import of the new virtual memory system, UVM, into -current.
UVM was written by chuck cranor <chuck@maria.wustl.edu>, with some
minor portions derived from the old Mach code.  i provided some help
getting swap and paging working, and other bug fixes/ideas.  chuck
silvers <chuq@chuq.com> also provided some other fixes.

this is the UVM kernel code portion.


this will be KNF'd shortly.  :-)
1998-02-05 06:25:08 +00:00