Commit Graph

431 Commits

Author SHA1 Message Date
mrg
1f4b948b2f move the contents of <vm/vm.h> into <uvm/uvm_extern.h>. <vm/vm.h> is simply
an include of <uvm/uvm_extern.h> now.
2000-06-27 16:16:43 +00:00
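
In effect, the remaining <vm/vm.h> reduces to a one-line compatibility shim along
these lines (a sketch of the idea, not the verbatim file):

	/* <vm/vm.h> -- kept only for source compatibility. */
	#include <uvm/uvm_extern.h>
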
mrg
88adda1288 more vm header file changes:
	<vm/vm_extern.h> merged into <uvm/uvm_extern.h>
	<vm/vm_page.h> merged into <uvm/uvm_page.h>
	<vm/pmap.h> has become <uvm/uvm_pmap.h>

this leaves just <vm/vm.h> in NetBSD.
2000-06-27 09:00:14 +00:00
mrg
e185413725 remove redundant <vm/pmap.h> includes. <vm/pmap.h> -> <uvm/uvm_pmap.h> 2000-06-27 04:18:48 +00:00
mrg
7665599771 <vm/vm_map.h> gets merged into <uvm/uvm_map.h> 2000-06-26 15:32:23 +00:00
mrg
4c698e84f6 <vm/vm_param.h> -> <uvm/uvm_param.h> 2000-06-26 14:58:58 +00:00
mrg
2f159a1bac remove/move more mach vm header files:
	<vm/pglist.h> -> <uvm/uvm_pglist.h>
	<vm/vm_inherit.h> -> <uvm/uvm_inherit.h>
	<vm/vm_kern.h> -> into <uvm/uvm_extern.h>
	<vm/vm_object.h> -> nothing
	<vm/vm_pager.h> -> into <uvm/uvm_pager.h>

also includes a bunch of <vm/vm_page.h> include removals (due to redundancy
with <vm/vm.h>), and a scattering of other similar headers.
2000-06-26 14:20:25 +00:00
mrg
53be5b215c <vm/vm_pageout.h> is already empty; kill it totally. 2000-06-25 13:49:33 +00:00
mrg
f5f84f80c5 <vm/vm_prot.h> becomes <uvm/uvm_prot.h> 2000-06-25 13:37:51 +00:00
thorpej
e03e9e8086 Rather than starting init and creating kthreads by forking and then
doing a cpu_set_kpc(), just pass the entry point and argument all
the way down the fork path starting with fork1().  In order to
avoid special-casing the normal fork in every cpu_fork(), MI code
passes down child_return() and the child process pointer explicitly.

This fixes a race condition on multiprocessor systems; a CPU could
grab the newly created process (which has already been placed on a run
queue) before cpu_set_kpc() had been performed.
2000-05-28 05:48:59 +00:00
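
The shape of the change is roughly the following; the exact historical argument
lists varied by port and release, so treat the signatures and names below as
illustrative assumptions rather than the committed code:

	/* Entry point and argument now travel down the fork path itself.  */
	/* Signatures are assumptions, for illustration only.              */
	void	cpu_fork(struct proc *p1, struct proc *p2, void *stack,
		    size_t stacksize, void (*func)(void *), void *arg);

	/* Normal fork: MI code passes child_return and the child proc.    */
	cpu_fork(p1, p2, NULL, 0, child_return, p2);

	/* Kernel thread: the desired entry point is passed directly, so   */
	/* no cpu_set_kpc() is needed after the child becomes runnable.    */
	cpu_fork(p1, p2, NULL, 0, kthread_func, kthread_arg);	/* names assumed */
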
thorpej
9ec517a68e Changes necessary to implement pre-zero'ing of pages in the idle loop:
- Make page free lists have two actual queues: known-zero pages and
  pages with unknown contents.
- Implement uvm_pageidlezero().  This function attempts to zero up to
  the target number of pages until the target has been reached (currently
  target is `all free pages') or until whichqs becomes non-zero (indicating
  that a process is ready to run).
- Define a new hook for the pmap module for pre-zero'ing pages.  This is
  used to zero the pages using uncached access.  This allows us to zero
  as many pages as we want without polluting the cache.

In order to use this feature, each platform must add the appropriate
glue in their idle loop.
2000-04-24 17:12:00 +00:00
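
The per-platform idle-loop glue mentioned above amounts to something like this
sketch (the exact check and the surrounding spl/locking details are simplified):

	/* MD idle loop (sketch; interrupt masking and scheduling omitted). */
	while (whichqs == 0) {
		/* Zero free pages until the target is met or a process     */
		/* becomes runnable (uvm_pageidlezero() checks whichqs).    */
		uvm_pageidlezero();
	}
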
mrg
6b7f13609a remove <vm/vm_swap.h> and <vm/vm_conf.h> 2000-04-15 18:08:12 +00:00
chs
43003128d4 tidy. 2000-04-11 02:30:32 +00:00
simonb
4471772be4 Multiple include protection. 2000-03-29 03:41:07 +00:00
kleink
6e5b64c8a0 Merge parts of chs-ubc2 into the trunk:
Add a new type voff_t (defined as a synonym for off_t) to describe offsets
into uvm objects, and update the appropriate interfaces to use it, the
most visible effect being the ability to mmap() file offsets beyond
the range of a vaddr_t.

Originally by Chuck Silvers; blame me for problems caused by merging this
into non-UBC.
2000-03-26 20:54:45 +00:00
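
Per the text above, the new type is essentially an alias wide enough for file
offsets:

	/* Offsets into uvm objects: a synonym for off_t, so mmap() can     */
	/* address file offsets beyond the range of a vaddr_t.              */
	typedef off_t voff_t;
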
kleink
230876cf26 Merge parts of chs-ubc2 into the trunk:
* Remove the casts to vaddr_t from the round_page() and trunc_page() macros to
  make them type-generic, which is necessary, e.g., to operate on file offsets
  without truncating them.
* In due course, cast pointer arguments to these macros to an appropriate
  integral type (paddr_t, vaddr_t).

Originally done by Chuck Silvers, updated by myself.
2000-03-26 20:42:21 +00:00
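
A sketch of what the change means in practice; the exact historical macro bodies
may differ in detail:

	/* Before: the cast truncated anything wider than vaddr_t (e.g. off_t). */
	#define	round_page(x)	(((vaddr_t)(x) + PAGE_MASK) & ~PAGE_MASK)

	/* After: type-generic, so the result keeps the argument's type.        */
	#define	round_page(x)	(((x) + PAGE_MASK) & ~PAGE_MASK)
	#define	trunc_page(x)	((x) & ~PAGE_MASK)

	/* Pointer arguments are now cast to an integral type by the caller:    */
	va = trunc_page((vaddr_t)ptr);
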
soda
83e4ae0975 second and third argument of pmap_steal_memory() are not paddr_t*, but vaddr_t*. 2000-03-21 09:33:45 +00:00
ragge
78609773fc Allow PAGE_SIZE et al to be defined as constants instead of variables. 2000-03-04 08:41:59 +00:00
thorpej
eb9cbbe294 Add some very simple code to auto-size the kmem_map. We take the
amount of physical memory, divide it by 4, and then allow machine
dependent code to place upper and lower bounds on the size.  Export
the computed value to userspace via the new "vm.nkmempages" sysctl.

NKMEMCLUSTERS is now deprecated and will generate an error if you
attempt to use it.  The new option, should you choose to use it,
is called NKMEMPAGES, and two new options NKMEMPAGES_MIN and
NKMEMPAGES_MAX allow the user to configure the bounds in the kernel
config file.
2000-02-11 19:22:52 +00:00
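
The sizing rule described above boils down to roughly this (variable names are
assumed for the sketch; physmem is a page count):

	/* Default: one quarter of physical memory, then clamp with the     */
	/* optional config-file bounds (machine-dependent code may also     */
	/* impose its own limits).                                          */
	nkmempages = physmem / 4;
	#ifdef NKMEMPAGES_MAX
	if (nkmempages > NKMEMPAGES_MAX)
		nkmempages = NKMEMPAGES_MAX;
	#endif
	#ifdef NKMEMPAGES_MIN
	if (nkmempages < NKMEMPAGES_MIN)
		nkmempages = NKMEMPAGES_MIN;
	#endif
	/* The final value is visible from userland as sysctl vm.nkmempages. */
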
eeh
c0ac678704 I should have made uvm_page_physload() take paddr_t's instead of vaddr_t's.
Also, add uvm_coredump32().
1999-12-30 16:09:47 +00:00
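
For reference, the corrected interface takes physical addresses throughout;
approximately (treat the exact parameter list as an assumption):

	/* Physical memory segments are described in paddr_t terms (sketch). */
	void	uvm_page_physload(paddr_t start, paddr_t end,
		    paddr_t avail_start, paddr_t avail_end, int free_list);
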
thorpej
fd9aa8fbba Cast the argument to ptoa() to a vaddr_t to prevent an integer overflow
from occurring on systems with more than 2G of RAM.
1999-11-30 19:31:05 +00:00
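
The overflow comes from ptoa() shifting a page count left by PAGE_SHIFT in
int-width arithmetic; a minimal sketch of the fix (variable names assumed):

	/* Before: with > 2G of RAM, physmem << PAGE_SHIFT overflows an int. */
	bytes = ptoa(physmem);

	/* After: widen the page count before ptoa() does the shift.         */
	bytes = ptoa((vaddr_t)physmem);
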
thorpej
1da427a80a Change the pmap_enter() API slightly; pmap_enter() now returns an error
value (KERN_SUCCESS or KERN_RESOURCE_SHORTAGE) indicating if it succeeded
or failed.  Change the `wired' and `access_type' arguments to a single
`flags' argument, which includes the access type, and flags:

	PMAP_WIRED	the old `wired' boolean
	PMAP_CANFAIL	pmap_enter() is allowed to fail

If PMAP_CANFAIL is not specified, the pmap should behave as it always
has in the face of a drastic resource shortage: fall over dead.

Change the fault handler to deal with failure (which indicates resource
shortage) by unlocking everything, waiting for the pagedaemon to free
more memory, then retrying the fault.
1999-11-13 00:24:38 +00:00
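
A caller-side sketch of the new interface (names such as map, va, pa, prot and
access_type are placeholders, not the committed code):

	/* Wired kernel mapping: no PMAP_CANFAIL, so the pmap falls over    */
	/* dead on a drastic resource shortage, as before.                  */
	pmap_enter(pmap_kernel(), va, pa, VM_PROT_READ | VM_PROT_WRITE,
	    VM_PROT_READ | VM_PROT_WRITE | PMAP_WIRED);

	/* Fault path: allow failure, wait for the pagedaemon, then retry.  */
	if (pmap_enter(map->pmap, va, pa, prot, access_type | PMAP_CANFAIL)
	    == KERN_RESOURCE_SHORTAGE) {
		/* unlock everything, wait for free pages, redo the fault */
	}
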
chs
f3a668ed84 eliminate the PMAP_NEW option by making it required for all ports.
ports which previously had no support for PMAP_NEW now implement
the pmap_k* interfaces as wrappers around the non-k versions.
1999-09-12 01:16:55 +00:00
thorpej
3f176180d5 Garbage collect thread_sleep()/thread_wakeup() left over from the old
Mach VM code.  Also nuke iprintf(), which was no longer used anywhere.

Add proclist locking where appropriate.
1999-07-22 22:58:38 +00:00
thorpej
3ebbe095e0 Change the pmap_extract() interface to:
boolean_t pmap_extract(pmap_t, vaddr_t, paddr_t *);
This makes it possible for the pmap to map physical address 0.
1999-07-08 18:05:21 +00:00
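
Caller-side usage under the new interface looks like this; since the result
comes back through a pointer, physical address 0 is now a valid answer:

	paddr_t pa;

	if (pmap_extract(pmap, va, &pa) == FALSE) {
		/* no valid mapping at va */
	} else {
		/* pa holds the physical address, which may legitimately be 0 */
	}
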
thorpej
ad1a5ef5cf Add a macro to modify flags in a VM map, which handles the locking
for you.
1999-07-07 05:33:33 +00:00
thorpej
11c67d01a5 Fix a corner case locking error, which could lead to map corruption in
SMP environments.  See comments in <vm/vm_map.h> for details.
1999-07-01 20:07:05 +00:00
thorpej
0288ffb53a pmap_change_wiring() -> pmap_unwire(). 1999-06-17 19:23:20 +00:00
thorpej
f5a527bb4e Remove pmap_pageable(); no pmap implements it, and it is not really useful,
because pmap_enter()/pmap_change_wiring() (soon to be pmap_unwire())
communicate the information in greater detail.
1999-06-17 18:21:21 +00:00
thorpej
26879cb8fb Clean up some comments. 1999-06-16 17:43:49 +00:00
thorpej
ee9703dea9 Add a macro to test if a map entry is wired. 1999-06-16 00:29:04 +00:00
thorpej
c5a43ae10c Several changes, developed and tested concurrently:
* Provide POSIX 1003.1b mlockall(2) and munlockall(2) system calls.
  MCL_CURRENT is presently implemented.  MCL_FUTURE is not fully
  implemented.  Also, the same one-unlock-for-every-lock caveat
  currently applies here as it does to mlock(2).  This will be
  addressed in a future commit.
* Provide the mincore(2) system call, with the same semantics as
  Solaris.
* Clean up the error recovery in uvm_map_pageable().
* Fix a bug where a process would hang if attempting to mlock a
  zero-fill region where none of the pages in that region are resident.
  [ This fix has been submitted for inclusion in 1.4.1 ]
1999-06-15 23:27:47 +00:00
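
A userland sketch of the new calls (MCL_CURRENT only, per the note above):

	#include <sys/mman.h>

	#include <err.h>
	#include <unistd.h>

	int
	main(void)
	{
		long pgsz = sysconf(_SC_PAGESIZE);
		char *buf;
		char vec[4];	/* one residency byte per page */

		buf = mmap(NULL, 4 * pgsz, PROT_READ | PROT_WRITE,
		    MAP_ANON | MAP_PRIVATE, -1, 0);
		if (buf == MAP_FAILED)
			err(1, "mmap");

		/* Lock all current mappings; MCL_FUTURE is not fully done. */
		if (mlockall(MCL_CURRENT) == -1)
			err(1, "mlockall");

		/* Residency query with Solaris-compatible semantics. */
		if (mincore(buf, 4 * pgsz, vec) == -1)
			err(1, "mincore");

		munlockall();
		return 0;
	}
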
thorpej
ec9fd3b48c Fix a braino in vm_map_unlock(). Thanks to Chuck Silvers for pointing
out that there was a problem, and for sending me a trace.
1999-06-07 16:34:04 +00:00
thorpej
397b71aaca Fix a typo. 1999-06-07 15:25:19 +00:00
thorpej
f421cc0851 Keep interrupt-safe maps on an additional queue. In uvm_fault(), if we're
looking up a kernel address, check to see if the address is on this
"interrupt-safe" list.  If so, return failure immediately.  This prevents
a locking screw if a page fault is taken on an interrupt-safe map in or
out of interrupt context.
1999-06-05 04:12:31 +00:00
thorpej
108b13d5a9 Make "intrsafe" maps locked only by exclusive spin locks, never sleep
locks (and thus, never shared locks).  Move the "set/clear recursive"
functions to uvm_map.c, which is the only place they're used (and
they should go away anyhow).  Delete some unused cruft.
1999-05-28 20:31:42 +00:00
thorpej
2580d306ab Change the vm_map's "entries_pageable" member to a r/o flags member, which
has PAGEABLE and INTRSAFE flags.  PAGEABLE now really means "pageable",
not "allocate vm_map_entry's from non-static pool", so update all map
creations to reflect that.  INTRSAFE maps are maps that are used in
interrupt context (e.g. kmem_map, mb_map), and thus use the static
map entry pool (XXX as does kernel_map, for now).  This will eventually
change how these maps are locked, as well.
1999-05-26 19:16:28 +00:00
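
Consumers now test a read-only flags word instead of the old entries_pageable
boolean; roughly (the exact flag spellings are assumptions):

	/* Sketch; the VM_MAP_* flag names are assumed.                     */
	if (map->flags & VM_MAP_INTRSAFE) {
		/* used in interrupt context: static entry pool, spin locks */
	}
	if (map->flags & VM_MAP_PAGEABLE) {
		/* genuinely pageable; no longer about entry allocation     */
	}
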
thorpej
96993ef749 Make a slight modification of pmap_growkernel() -- it now returns the
end of the mappable kernel virtual address space.  Previously, it would
get called more often than necessary, because the caller only knew what
was requested.

Also, export uvm_maxkaddr so that uvm_pageboot_alloc() can grow the
kernel pmap if necessary, as well.  Note that pmap_growkernel() must
now be able to handle being called before pmap_init().
1999-05-20 23:25:42 +00:00
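
The interface after this change is approximately the following (caller-side
names are assumed):

	/* Grow the kernel pmap to cover at least maxkvaddr and return the  */
	/* new end of mappable kernel VA, so callers stop calling it once   */
	/* the space they need is already covered.                          */
	vaddr_t	pmap_growkernel(vaddr_t maxkvaddr);

	if (needed_va > uvm_maxkaddr)
		uvm_maxkaddr = pmap_growkernel(needed_va);
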
thorpej
c10a926030 Allow the caller to specify a stack for the child process. If NULL,
the child inherits the stack pointer from the parent (traditional
behavior).  Like the signal stack, the stack area is specified as
a low address and a size; machine-dependent code accounts for stack
direction.

This is required for clone(2).
1999-05-13 21:58:32 +00:00
thorpej
f5108f64e7 Add an optional pmap hook, pmap_fork(), to be called at the end of
uvmspace_fork().

pmap_fork() is used to "fork a pmap", that is copy data from one pmap
to the other that is NOT related to actual mappings in the pmap, but is
otherwise logically coupled to the address space.
1999-05-12 19:11:23 +00:00
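
The hook's shape is roughly this (signature assumed; only pmaps that need it
provide one):

	/* Called at the end of uvmspace_fork(): copy pmap-private state    */
	/* that follows the address space but is not an actual mapping.     */
	void	pmap_fork(pmap_t src_pmap, pmap_t dst_pmap);
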
simonb
e3a5c2451c Check to see if TRUE and FALSE are already defined before blindly
trying to define them ourselves.

Fixes PRs kern/2813 and misc/7356.
1999-04-11 00:59:07 +00:00
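
The guard amounts to:

	#ifndef TRUE
	#define	TRUE	1
	#endif
	#ifndef FALSE
	#define	FALSE	0
	#endif
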
drochner
d79282e281 rip out some old-vm specific definitions and prototypes for address space
sanity
1999-04-10 13:52:11 +00:00
mrg
920344b9c2 put back the #ifdef _KERNEL whose removal broke the xosview build. 1999-03-31 12:29:51 +00:00
mycroft
e7ad33406b Nuke at least a few files which are clearly not used any more. 1999-03-30 13:08:55 +00:00
mycroft
31a2536cd0 Add a new `access type' argument to pmap_enter(). This indicates the type of
memory access that caused the mapping.  This is passed through from uvm_fault()
and udv_fault(), and in most other cases is 0.
The pmap module may use this to preset R/M information.  On MMUs which require
R/M emulation, the implementation may preset the bits and avoid taking another
fault.  On MMUs which keep R/M information in hardware, the implementation may
preset its cached bits to speed up the next call to pmap_is_modified() or
pmap_is_referenced().
1999-03-26 21:58:39 +00:00
mrg
d2397ac5f7 completely remove Mach VM support.  All that is left is the
header files, as UVM still uses (most of) these.
1999-03-24 05:50:49 +00:00
kleink
b0fe22c29d Have unimplemented/unsupported system calls (madvise(), mincore(), sbrk(),
sstk()) fail with ENOSYS.
1999-03-09 12:18:22 +00:00
hubertf
dcd5f59c0a RCS ID police 1999-02-15 04:14:54 +00:00
chuck
d22647b17e remove old (dead) non-MNN code 1999-01-16 20:00:28 +00:00
cgd
2690e920cb patch from chuck:
remove bogus permission check in uvm_map_clean().   fixes mmap/msync
problem discussed/reported by jonathan and Andreas Wrede <andreas@planix.com>.
1998-11-29 06:15:58 +00:00
mrg
b4f6a27f69 comment on how a diag test is possibly broken. 1998-11-19 05:23:46 +00:00