Commit Graph

332 Commits

thorpej
1079b4c0ac - Avoid an integer overflow when checking if we have exceeded our
rlimit in sbrk.  Slightly modified from a patch from Artur Grabowski.
- Rearrange code slightly, partially from Artur Grabowski.
- Only adjust vm_dsize if the grow or shrink actually succeeds.
2000-07-02 17:40:08 +00:00
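
The overflow mentioned above comes from adding the requested increment to the current data size before comparing against the resource limit; if the sum wraps, the check passes when it should fail. A minimal userland sketch of the overflow-safe rearrangement (function and variable names are hypothetical, not the committed code):

	#include <stdbool.h>
	#include <stddef.h>

	static bool
	brk_would_exceed_limit(size_t dsize, size_t diff, size_t rlim_cur)
	{
		/* "dsize + diff > rlim_cur" can wrap; rearrange so it cannot */
		if (dsize > rlim_cur)
			return true;
		return diff > rlim_cur - dsize;
	}
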
mrg
dea44a9ec4 remove include of <vm/vm.h> 2000-06-27 17:29:17 +00:00
mrg
1f4b948b2f move the contents of <vm/vm.h> into <uvm/uvm_extern.h>. <vm/vm.h> is simply
an include of <uvm/uvm_extern.h> now.
2000-06-27 16:16:43 +00:00
mrg
88adda1288 more vm header file changes:
<vm/vm_extern.h> merged into <uvm/uvm_extern.h>
	<vm/vm_page.h> merged into <uvm/uvm_page.h>
	<vm/pmap.h> has become <uvm/uvm_pmap.h>

this leaves just <vm/vm.h> in NetBSD.
2000-06-27 09:00:14 +00:00
mrg
cd9f783cb9 install uvm_pmap.h 2000-06-27 08:49:44 +00:00
simonb
eeff58b5fd In udv_fault(), use an off_t for curr_offset so that the offset passed
to d_mmap isn't truncated on 64 bit architectures.
2000-06-27 06:14:24 +00:00
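
A userland illustration of the truncation this avoids (variable names are for the sketch only, not uvm_device.c): only a 64-bit curr_offset survives the trip intact once the offset passes 2^31 bytes.

	#include <stdio.h>
	#include <stdint.h>
	#include <sys/types.h>

	int
	main(void)
	{
		uint64_t curr_offset = 3ULL << 31;	/* an offset past 2^31 bytes */

		int	narrow = (int)curr_offset;	/* what a narrow type would hand to d_mmap */
		off_t	wide = (off_t)curr_offset;	/* what an off_t preserves */

		printf("narrow: %d\n", narrow);		/* truncated (implementation-defined) */
		printf("off_t:  %lld\n", (long long)wide);
		return 0;
	}
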
mrg
6b5536f253 restore a dropped #ifdef _KERNEL 2000-06-26 17:18:40 +00:00
mrg
acdc45ce9a install uvm_param.h. 2000-06-26 17:01:34 +00:00
mrg
7665599771 <vm/vm_map.h> gets merged into <uvm/uvm_map.h> 2000-06-26 15:32:23 +00:00
mrg
4c698e84f6 <vm/vm_param.h> -> <uvm/uvm_param.h> 2000-06-26 14:58:58 +00:00
mrg
2f159a1bac remove/move more mach vm header files:
<vm/pglist.h> -> <uvm/uvm_pglist.h>
	<vm/vm_inherit.h> -> <uvm/uvm_inherit.h>
	<vm/vm_kern.h> -> into <uvm/uvm_extern.h>
	<vm/vm_object.h> -> nothing
	<vm/vm_pager.h> -> into <uvm/uvm_pager.h>

also includes a bunch of <vm/vm_page.h> include removals (due to redundancy
with <vm/vm.h>), and a scattering of other similar headers.
2000-06-26 14:20:25 +00:00
simonb
889c658b5b Change the kernel mmap interface so that the offset to map is an
"off_t" and the return value is a "paddr_t" to allow mappings
at offsets past 2^31 bytes.  Somewhat inspired by FreeBSD, which
only changed the offset to a "vm_offset_t".

Includes updates for the i386, pc532 and sh3 mmmmap from Jason Thorpe.
2000-06-26 04:55:19 +00:00
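
A hedged sketch of the interface shape this entry describes (illustrative prototypes only, not copied from the tree): the offset widens to off_t and the return type widens to paddr_t, so both the requested offset and the returned physical address can exceed 2^31.

	#include <sys/types.h>	/* dev_t, off_t; paddr_t is machine-dependent */

	/* before (illustrative): int-sized offset and return value */
	int	foo_mmap_old(dev_t dev, int off, int prot);

	/* after: 64-bit offset in, physical address out */
	paddr_t	foo_mmap(dev_t dev, off_t off, int prot);
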
mrg
f5f84f80c5 <vm/vm_prot.h> becomes <uvm/uvm_prot.h> 2000-06-25 13:37:51 +00:00
pk
d3aef3ad38 uvm_detach: eliminate degenerate loop construction. 2000-06-24 21:47:28 +00:00
pk
bdee69596e Insert two missing `simple_unlock()' calls in udv_detach(). 2000-06-24 21:26:16 +00:00
simonb
58d0ed9dd2 Set p->p_addr to NULL after it gets freed. 2000-06-18 05:20:27 +00:00
chs
e72214422a initialize aref.ar_pageoff even if there's no amap. 2000-06-13 04:10:47 +00:00
soda
d5b3fb3ce1 fix printf format mismatch, when paddr_t becomes (long long) on arc port. 2000-06-09 04:43:19 +00:00
thorpej
b0afc900f5 Change UVM_UNLOCK_AND_WAIT() to use ltsleep() (it is now atomic, as
advertised).  Garbage-collect uvm_sleep().
2000-06-08 05:52:34 +00:00
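
A sketch of the macro's new shape, assuming simplified arguments and priority flags (the committed version differs in detail): ltsleep() takes the interlock itself and drops it only after the process is on the sleep queue, which is what makes the unlock-and-wait pair atomic.

	#define UVM_UNLOCK_AND_WAIT(event, slock, intr, msg, timo)		\
		(void) ltsleep((event), PVM | ((intr) ? PCATCH : 0),		\
		    (msg), (timo), (slock))
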
pk
36a1354bc6 Change previous to use `vm_map_min(dstmap)' instead of hard-coding
VM_MIN_KERNEL_ADDRESS.
2000-06-05 07:28:56 +00:00
pk
51ff5f7cd1 Let uvm_map_extract() set the lower bound on the kernel address range
itself, instead of having its callers do that.
2000-06-02 12:02:43 +00:00
pk
bf3a6b350b Shouldn't pass garbage to uvm_map_extract(). 2000-06-02 11:47:53 +00:00
thorpej
65184f2470 Change the comment before the vm_page_zero_enable global to indicate
what it will now be used for.
2000-05-29 19:25:56 +00:00
drochner
f8a6b48d66 Don't silently truncate the voff_t offset to vaddr_t when passing it to
udv_attach. Pass the whole voff_t instead and do an explicit overflow
check before it is passed to the device's mmap handler (as "int", sadly).
2000-05-28 10:21:55 +00:00
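
A minimal sketch of the explicit overflow check described above (types and names are assumptions for illustration): the mapping is rejected rather than silently truncated when the 64-bit offset cannot be represented in the int the device mmap handler still takes.

	#include <limits.h>
	#include <stdbool.h>
	#include <stdint.h>

	typedef int64_t voff_t;			/* assumption for the sketch */

	static bool
	offset_fits_in_int(voff_t off)
	{
		/* only offsets that survive the narrowing to "int" are allowed */
		return off >= 0 && off <= INT_MAX;
	}
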
thorpej
e03e9e8086 Rather than starting init and creating kthreads by forking and then
doing a cpu_set_kpc(), just pass the entry point and argument all
the way down the fork path starting with fork1().  In order to
avoid special-casing the normal fork in every cpu_fork(), MI code
passes down child_return() and the child process pointer explicitly.

This fixes a race condition on multiprocessor systems; a CPU could
grab the newly created process (which has been placed on a run queue)
before cpu_set_kpc() would be performed.
2000-05-28 05:48:59 +00:00
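
A hedged sketch (simplified, not the tree's actual code) of the idea: inside fork1(), once the child proc p2 exists, the entry point and argument travel straight into cpu_fork(), so the child never sits on a run queue with its start address still unset.

	if (func == NULL) {
		/* normal fork: MI code supplies the default entry explicitly */
		func = child_return;
		arg = p2;
	}
	cpu_fork(p1, p2, stack, stacksize, func, arg);
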
thorpej
a7d0570e67 First sweep at scheduler state cleanup. Collect MI scheduler
state into global and per-CPU scheduler state:

	- Global state: sched_qs (run queues), sched_whichqs (bitmap
	  of non-empty run queues), sched_slpque (sleep queues).
	  NOTE: These may collectively move into a struct schedstate
	  at some point in the future.

	- Per-CPU state, struct schedstate_percpu: spc_runtime
	  (time process on this CPU started running), spc_flags
	  (replaces struct proc's p_schedflags), and
	  spc_curpriority (usrpri of processes on this CPU).

	- Every platform must now supply a struct cpu_info and
	  a curcpu() macro.  Simplify existing cpu_info declarations
	  where appropriate.

	- All references to per-CPU scheduler state now made through
	  curcpu().  NOTE: this will likely be adjusted in the future
	  after further changes to struct proc are made.

Tested on i386 and Alpha.  Changes are mostly mechanical, but apologies
in advance if it doesn't compile on a particular platform.
2000-05-26 21:19:19 +00:00
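
A hedged sketch of the per-CPU state described above; the member names come from the log entry, while the field types, the cpu_info member, and the curcpu() accessor shown here are assumptions.

	struct schedstate_percpu {
		struct timeval	spc_runtime;	/* time current process started running */
		int		spc_flags;	/* replaces struct proc's p_schedflags */
		u_char		spc_curpriority; /* usrpri of the process on this CPU */
	};

	/* every platform supplies a struct cpu_info holding one of these,
	 * reached through its curcpu() macro, e.g. (member name hypothetical): */
	#define	curspc	(&curcpu()->ci_schedstate)
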
thorpej
8964c35eca Introduce a new process state distinct from SRUN called SONPROC
which indicates that the process is actually running on a
processor.  Test against SONPROC as appropriate rather than
combinations of SRUN and curproc.  Update all context switch code
to properly set SONPROC when the process becomes the current
process on the CPU.
2000-05-26 00:36:42 +00:00
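
An illustrative fragment (not a diff from the tree) of what the new state buys: the "running right now" test becomes a single state comparison instead of combining SRUN with a curproc check.

	/* before: SRUN plus a pointer comparison, ambiguous on multiprocessors */
	running = (p->p_stat == SRUN && p == curproc);

	/* after: a state of its own, maintained by the context-switch code */
	running = (p->p_stat == SONPROC);
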
enami
332c98526a - Move the comment explaining that a call to
uvm_map_pageable(map, ...) implies unlocking the passed map so that it sits
  just before the function call.
- If we bail out before calling uvm_map_pageable, unlock the map
  ourselves to prevent a ``locking against myself'' panic.  The panic is
  triggered, for example, when cdrecord is invoked with too large a fifo size.
2000-05-23 02:19:20 +00:00
thorpej
1410091b4e Clean up a comment. 2000-05-20 19:54:01 +00:00
thorpej
cd737c4016 Remove VM_PROT_EXECUTE from the permissions used to map the page
for pager I/O -- it is not needed, and including it leads to
unnecessary I-cache flushes.
2000-05-20 03:36:06 +00:00
thorpej
646555bbd5 Clean up some indentation lossage in uvm_map_extract(). 2000-05-19 17:43:55 +00:00
thorpej
f636538446 NULL != 0 2000-05-19 04:34:39 +00:00
thorpej
655b21e17d Tell uvm_pagermapin() the direction of the I/O so that it can map
with only the protection that it needs.
2000-05-19 03:45:04 +00:00
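
Together with the VM_PROT_EXECUTE removal two entries up, the effect is that the pager window is mapped with only what the transfer direction needs. A minimal sketch, assuming a hypothetical SKETCH_PAGEIN direction flag (the tree's identifier differs):

	vm_prot_t prot;

	/* a pagein writes into the pages, a pageout only reads them;
	 * execute permission is never needed, so no I-cache flushing */
	prot = (flags & SKETCH_PAGEIN) ? VM_PROT_WRITE : VM_PROT_READ;
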
thorpej
f3b078d268 __predict_false() an error check. 2000-05-08 23:13:42 +00:00
thorpej
d45c9982df __predict_false() DIAGNOSTIC error checks. 2000-05-08 23:11:53 +00:00
thorpej
39294d89e5 __predict_false() out-of-resource conditions and DIAGNOSTIC error checks. 2000-05-08 23:10:20 +00:00
thorpej
b5b82faa4a uvm_map_setup(): We almost never set up an interrupt-safe map, but we
set up quite a few regular ones (at every fork!), so put interrupt-
safe map setup in the slow path with a __predict_false().

uvm_map_reference(): __predict_false() the check for NULL map.
uvm_map_deallocate(): Likewise.
2000-05-08 22:59:35 +00:00
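
For reference, __predict_false() simply biases the compiler's branch layout toward the common path; on GCC it expands to __builtin_expect(). A self-contained userland example of the annotation style used in the entries above, with a stand-in definition for systems whose <sys/cdefs.h> lacks it:

	#include <stdio.h>
	#include <stdlib.h>

	#ifndef __predict_false
	#define	__predict_false(exp)	__builtin_expect((exp) != 0, 0)
	#endif

	int
	main(void)
	{
		void *p = malloc(32);

		if (__predict_false(p == NULL)) {	/* rare out-of-resource path */
			fprintf(stderr, "out of memory\n");
			return 1;
		}
		free(p);
		return 0;
	}
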
thorpej
9ec517a68e Changes necessary to implement pre-zero'ing of pages in the idle loop:
- Make page free lists have two actual queues: known-zero pages and
  pages with unknown contents.
- Implement uvm_pageidlezero().  This function attempts to zero up to
  the target number of pages until the target has been reached (currently
  target is `all free pages') or until whichqs becomes non-zero (indicating
  that a process is ready to run).
- Define a new hook for the pmap module for pre-zero'ing pages.  This is
  used to zero the pages using uncached access.  This allows us to zero
  as many pages as we want without polluting the cache.

In order to use this feature, each platform must add the appropriate
glue in their idle loop.
2000-04-24 17:12:00 +00:00
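
A hedged sketch of the per-platform glue the last paragraph asks for; uvm_pageidlezero() and sched_whichqs come from the entries above, while the surrounding function is a placeholder, not any port's actual idle routine.

	void
	cpu_idle_sketch(void)
	{
		while (sched_whichqs == 0) {
			/*
			 * Nothing is runnable, so spend the time zeroing
			 * pages from the "unknown contents" free list; the
			 * routine returns as soon as whichqs becomes non-zero.
			 */
			uvm_pageidlezero();
		}
	}
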
chs
d444bb4032 undo rev 1.13, which is to say, don't block interrupts while deactivating
one pmap and activating another.  this isn't actually necessary (since
pmap_activate() and pmap_deactivate() affect only user-level mappings,
which cannot be accessed from interrupts anyway), and pmap_activate()
is very slow on old sun4c sparcs so we can't block interrupts for this long.
this fixes PR 8322.
2000-04-16 20:52:29 +00:00
mrg
6b7f13609a remove <vm/vm_swap.h> and <vm/vm_conf.h> 2000-04-15 18:08:12 +00:00
pk
741c324930 Finish previous. 2000-04-11 08:12:14 +00:00
chs
8724ec3b5c avoid declaring "i" as a local variable in a macro.
it's too easy to shadow another local.
2000-04-11 02:34:19 +00:00
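
A self-contained userland illustration of the hazard (not the uvm macro itself): the macro's private "i" captures the "i" inside the caller's argument expression, so only the diagonal of the array gets cleared instead of every row.

	#include <stdio.h>

	#define ZERO_ROW(row, cols) do {				\
		int i;			/* shadows any caller "i" */	\
		for (i = 0; i < (cols); i++)				\
			(row)[i] = 0;					\
	} while (0)

	int
	main(void)
	{
		int grid[4][4], i, j;

		for (i = 0; i < 4; i++)
			for (j = 0; j < 4; j++)
				grid[i][j] = 1;

		for (i = 0; i < 4; i++)
			ZERO_ROW(grid[i], 4);	/* expands to (grid[i])[i] = 0:
						 * the macro's i wins, so only
						 * grid[0][0]..grid[3][3] change */

		for (i = 0; i < 4; i++) {
			for (j = 0; j < 4; j++)
				printf("%d ", grid[i][j]);
			printf("\n");
		}
		return 0;
	}
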
chs
66014d2dff sparc -> __sparc__
print lock status in uvm_object_printit().
2000-04-10 02:21:26 +00:00
chs
061ecbff46 tidy. 2000-04-10 02:20:06 +00:00
thorpej
eeb3a38cfc Use UVM_PGA_ZERO in the promote-zero-fault case of uvm_fault(). 2000-04-10 01:17:41 +00:00
thorpej
345b3d2136 Use UVM_PGA_ZERO in a few (easy) places. 2000-04-10 00:32:46 +00:00
thorpej
2c48131727 Add UVM_PGA_ZERO which instructs uvm_pagealloc{,_strat}() to return a
zero'd, ! PG_CLEAN page, as if it were uvm_pagezero()'d.
2000-04-10 00:28:05 +00:00
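
A usage sketch for the new flag (the calling context here is assumed): the caller gets back a page whose contents are already zero, saving an explicit uvm_pagezero(), while PG_CLEAN is deliberately left unset.

	pg = uvm_pagealloc(uobj, offset, NULL, UVM_PGA_ZERO);
	if (pg == NULL)
		return (ENOMEM);	/* real callers wait and retry */
	/* pg is filled with zeros but not marked PG_CLEAN */
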
chs
d75d6fb164 restore a brelvp() that I removed in a moment of overzealousness.
Debugged by:  Brian Grayson <bgrayson@netbsd.org>
2000-04-07 08:27:28 +00:00
chs
60a7a67f71 remove uvm_shareprot(). no longer needed since the demise of share maps. 2000-04-03 08:09:02 +00:00
chs
03a4ef3a79 remove the "shareprot" pagerop. it's not needed anymore since
share maps are long gone.
2000-04-03 07:35:23 +00:00