the mapping is:
VM_PAGER_OK        0
VM_PAGER_BAD       <unused>
VM_PAGER_FAIL      <unused>
VM_PAGER_PEND      0 (see below)
VM_PAGER_ERROR     EIO
VM_PAGER_AGAIN     EAGAIN
VM_PAGER_UNLOCK    EBUSY
VM_PAGER_REFAULT   ERESTART
for async i/o requests, it used to be possible for the request to
be converted to sync, and the pager would return VM_PAGER_OK or VM_PAGER_PEND
to indicate whether the caller should perform post-i/o cleanup.
this is no longer allowed; pagers must now return 0 to indicate that
the async i/o was successfully started, and the caller never needs to
worry about doing the post-i/o cleanup.
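roughly, the new convention looks like this (an illustrative sketch only, not
code from any pager; the helper name is hypothetical and it assumes the old
VM_PAGER_* constants are still in scope; the errno values come from the table
above):

    #include <sys/errno.h>

    /* sketch: translate an old-style pager status into the new errno return */
    static int
    pager_status_to_errno(int status)
    {
        switch (status) {
        case VM_PAGER_OK:
        case VM_PAGER_PEND:     /* async i/o successfully started */
            return 0;
        case VM_PAGER_ERROR:
            return EIO;
        case VM_PAGER_AGAIN:
            return EAGAIN;
        case VM_PAGER_UNLOCK:
            return EBUSY;
        case VM_PAGER_REFAULT:
            return ERESTART;
        default:                /* VM_PAGER_BAD, VM_PAGER_FAIL are unused */
            return EIO;
        }
    }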
each of the basic types (anonymous data, executable image, cached files)
and prevent the pagedaemon from reusing a given page if that would reduce
the count of that type of page below a sysctl-settable minimum threshold.
the thresholds are controlled via three new sysctl tunables:
vm.anonmin, vm.vnodemin, and vm.vtextmin. these tunables are the
percentages of pageable memory reserved for each usage, and we do not allow
the sum of the minimums to be more than 95% so that there's always some
memory that can be reused.
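in outline, the pagedaemon's new check amounts to something like the following
(illustrative only; the helper and its parameter names are assumptions, not
the real pagedaemon variables):

    #include <sys/types.h>

    /*
     * sketch: may we reclaim a page of a given type without dropping that
     * type below its reserved percentage of pageable memory?  the three
     * minimums (vm.anonmin, vm.vnodemin, vm.vtextmin) are clamped so that
     * their sum never exceeds 95.
     */
    static int
    may_reclaim(u_long npages_of_type, u_long pageable_pages, u_int min_pct)
    {
        return npages_of_type > pageable_pages * min_pct / 100;
    }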
failed because we failed to acquire some resource needed to initiate
the pageout (such as failing to lock an indirect buffer) rather than
because of a hard i/o error. in this case we just want to reactivate the page(s)
so that we'll try to write them again later.
while I'm here, clean up some DIAGNOSTIC code.
space is already torn down in uvmspace_free() when the vmspace
reference count reaches 0. Move the shmexit() call into uvmspace_free().
Note that there is a beneficial side-effect of deferring the unmap
to uvmspace_free() -- on systems where TLB invalidations are
particularly expensive, the unmapping of the address space won't
have to cause TLB invalidations; uvmspace_free() is going to be
run in a context other than the exiting process's, so the "pmap is
active" test will evaluate to FALSE in the pmap module.
entry in the map. the old code would walk around the end of the linked list,
through the header entry, and keep going from the first map entry until it
found a gap in the map, at which point it would return an error. if the map
had no gaps then it would loop forever. reported by k-abe@cs.utah.edu.
while I'm here, clean up this function a bit.
also, use MIN() instead of min(), since the latter takes arguments of
type "int" but we're passing it values of type "vaddr_t", which can be
wider than "int".
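for reference, MIN() from <sys/param.h> is type-generic, so it is safe for
vaddr_t operands; the helper here is purely illustrative:

    #include <sys/param.h>      /* MIN() */
    #include <sys/types.h>      /* vaddr_t */

    static vaddr_t
    smaller(vaddr_t a, vaddr_t b)
    {
        /* MIN() expands to ((a) < (b) ? (a) : (b)): no truncation to int */
        return MIN(a, b);
    }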
Mach VM's now. Specific changes:
- Pages now need not have all of their mappings removed before being
put on the inactive list. They only need to have the "referenced"
attribute cleared. This makes putting pages onto the inactive list
much more efficient. In order to eliminate redundant clearings of
"refrenced", callers of uvm_pagedeactivate() must now do this
themselves.
- When checking the "modified" attribute for a page (for clearing
PG_CLEAN), make sure to only do it if PG_CLEAN is currently set on
the page (saves a potentially expensive pmap operation).
- When scanning the inactive list, if a page is referenced, reactivate
it (this part was actually added in uvm_pdaemon.c,v 1.27). This
works properly now that pages on the inactive list are allowed to
have mappings.
- When scanning the inactive list and considering a page for freeing,
remove all mappings, and then check the "modified" attribute if the
page is marked PG_CLEAN.
- When scanning the active list, if the page was referenced since its
last sweep by the scanner, don't deactivate it. (This part was
actually added in uvm_pdaemon.c,v 1.28.)
These changes greatly improve interactive performance during
moderate to high memory and I/O load.
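Putting the inactive-scan pieces above together, the policy is roughly as
follows (a sketch in terms of the standard pmap(9) calls, not the literal
pagedaemon code):

    /* for each page "pg" on the inactive queue (sketch only): */
    if (pmap_is_referenced(pg)) {
        /* touched while inactive: give it another chance */
        uvm_pageactivate(pg);
    } else {
        /* candidate for freeing: now remove all of its mappings */
        pmap_page_protect(pg, VM_PROT_NONE);
        /* only pay for the modified check while PG_CLEAN is still set */
        if ((pg->flags & PG_CLEAN) != 0 && pmap_is_modified(pg))
            pg->flags &= ~PG_CLEAN;
        if (pg->flags & PG_CLEAN) {
            uvm_pagefree(pg);           /* clean: reuse it */
        } else {
            /* dirty: start pageout on it instead */
        }
    }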
amap_free(): Assert that the amap is locked.
amap_share_protect(): Assert that the amap is locked.
amap_wipeout(): Assert that the amap is locked.
uvm_anfree(): Assert that the anon has a reference count of 0 and is
not locked.
uvm_anon_lockloanpg(): Assert that the anon is locked.
anon_pagein(): Assert that the anon is locked.
uvmfault_anonget(): Assert that the anon is locked.
uvm_pagealloc_strat(): Assert that the uobj or the anon is locked.
And fix the problems these have uncovered:
amap_cow_now(): Lock the new anon after allocating it, and unref and
unlock it (rather than lock!) before freeing it in case
of an error condition. This should fix a problem reported
by Dan Carosone using cdrecord on an i386 MP kernel.
uvm_fault(): Case1B -- Lock the new anon after allocating it, and unlock
it later when we unlock the old anon.
Case2 -- Lock the new anon after allocating it, and unlock
it later by passing it to uvmfault_unlockall() (we set anon
to NULL if we're not doing a promote fault).
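The assertions follow the usual pattern; a minimal sketch (the amap lock
field name below is an assumption, and the body is elided):

    #include <sys/lock.h>

    void
    amap_free(struct vm_amap *amap)
    {
        /* the caller must hold the amap locked */
        LOCK_ASSERT(simple_lock_held(&amap->am_l));
        /* ... free the amap's slots and the amap itself ... */
    }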
pending i/os to complete before returning even if PGO_CLEANIT is not
specified. this fixes two races:
(1) NFS write rpcs vs. setattr operations which truncate the file.
if the truncate doesn't wait for pending writes to complete then
a later write rpc completion can undo the effect of the truncate.
this problem has been reported by several people.
(2) write i/os in a disk-based filesystem vs. the disk block being
freed by a truncation, allocated to a new file, and written
again with different data. if the disk driver reorders the requests
and does the second i/o first, the old data will clobber the new,
corrupting the new file. I haven't heard of anyone experiencing
this problem yet, but it's fixed now anyway.
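in rough terms, the flush loop now does the following for a busy page whether
or not PGO_CLEANIT was requested (a sketch only; the real code's bookkeeping
and wakeup handling differ):

    /* sketch: wait out any i/o already in flight on this page */
    while (pg->flags & PG_BUSY) {
        pg->flags |= PG_WANTED;
        UVM_UNLOCK_AND_WAIT(pg, &uobj->vmobjlock, FALSE, "uvn_flush", 0);
        simple_lock(&uobj->vmobjlock);
    }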
doesn't have the exec bit set, we need to have PROT_EXEC set
in order for some expected mmap/mprotect behavior to work, so
do the last bit slightly differently: if udv_attach() fails, and
the protection (NOT maxprot) doesn't include PROT_EXEC, then clear
PROT_EXEC from maxprot and try udv_attach() again.
Sigh, mmap really needs to be rototilled.
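schematically, the retry looks like this (a sketch; udv_attach()'s argument
list is abbreviated and the protection names follow the wording above):

    uobj = udv_attach(&dev, maxprot, foff, size);
    if (uobj == NULL &&
        (prot & PROT_EXEC) == 0 && (maxprot & PROT_EXEC) != 0) {
        /* the mapping itself never needed exec: retry without it */
        maxprot &= ~PROT_EXEC;
        uobj = udv_attach(&dev, maxprot, foff, size);
    }
    if (uobj == NULL)
        return EINVAL;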
in the mmap() call. maxprot is used to create device mappings,
and always including PROT_EXEC causes the mapping to fail on the Alpha
when mapping a non-RAM offset of /dev/mem (which may be sparse, so
instruction fetch from there is disallowed).
use queue.h macros and KASSERT().
address amap offsets in pages instead of bytes.
make amap_ref() and amap_unref() take an amap, offset and length
instead of a vm_map_entry_t.
improve whitespace and comments.
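the new interface has roughly this shape (a prototype paraphrased from the
description above, not copied from the header; amap_unref() changes
analogously):

    /* old: took a vm_map_entry_t and pulled the amap and range out of it.
     * new: callers pass the amap and a page-indexed range explicitly. */
    void    amap_ref(struct vm_amap *amap, vaddr_t offset, vsize_t len,
                int flags);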
devices will actually be notified if this is the last close.
this allows raidframe swap devices to be marked clean.
also, move the corresponding vref() into swap_on() for symmetry
and improve some comments.
it and free it as appropriate. Activate p2's new address space once
it references p1's.
- uvm_fork(): Make sure the child's vmspace is NULL before calling
uvmspace_share() (the child doesn't have one already in this case).
These changes do not change the behavior for the current use of
uvmspace_share() (vfork(2)), but make it possible for an already
running process (such as a kernel thread) to properly attach to
another process's address space.
to the contents of the hint in the map, and the hint saved in the
map only if the two values match. When an unconditional save is
required, the "check" value passed should be map->hint (and the
compiler will optimize the test away). When deleting a map entry,
the new SAVE_HINT() will only change the hint if the entry being
deleted was the hint value (thus preserving any meaningful hint
that may have been there previously, rather than stomping on it).
- Add a missing hint update when deleting the map entry in
uvm_map_entry_unlink(). This is the fix for kern/11125, from
ITOH Yasufumi <itohy@netbsd.org>.
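In spirit, the new macro is (a sketch; any locking around the hint is
omitted):

    #define SAVE_HINT(map, check, value) do {                       \
        if ((map)->hint == (check))                                 \
            (map)->hint = (value);                                  \
    } while (/*CONSTCOND*/ 0)

    /*
     * an unconditional save passes the current hint as the check value,
     * e.g. SAVE_HINT(map, map->hint, entry), and the compiler folds the
     * comparison away.
     */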
`struct vmspace' has a new field `vm_minsaddr' which is the user TOS (top of stack).
PS_STRINGS is deprecated in favor of curproc->p_pstr which is derived
from `vm_minsaddr'.
Bump the kernel version number.
that the page being zero'd was not completed and that page zeroing
should be aborted. This may be used by machine-dependent code doing
slow page access to reduce the latency of running a process that has
become runnable while in the middle of doing a slow page zero.
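The hook's contract, roughly (treat the exact name and signature as an
assumption based on the description):

    /*
     * returns TRUE if the page at "pa" was fully zeroed, FALSE if zeroing
     * was aborted because a process became runnable; on FALSE the caller
     * must not mark the page as known-zero.
     */
    boolean_t   pmap_pageidlezero(paddr_t pa);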
routine. Works similarly to pmap_prefer(), but allows callers
to specify a minimum power-of-two alignment of the region.
How we ever got along without this for so long is beyond me.
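The alignment constraint itself is the usual power-of-two round-up; in
isolation (a minimal sketch, not the new routine):

    #include <sys/types.h>

    /* round a candidate address up to a power-of-two alignment */
    static vaddr_t
    align_up(vaddr_t addr, vsize_t align)
    {
        if (align > 1)
            addr = (addr + (align - 1)) & ~((vaddr_t)align - 1);
        return addr;
    }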
When it wasn't (which could happen on a 4MB machine with 32kB pages),
uvm_pagealloc_strat could refuse to allocate user memory, while the pagedaemon
didn't think it was worth freeing any more, resulting in the system seizing up.
rlimit in sbrk. Slightly modified from a patch from Artur Grabowski.
- Rearrange code slightly, partially from Artur Grabowski.
- Only adjust vm_dsize if the grow or shrink actually succeeds.
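The limit check is the classic data-size one; roughly (a sketch of the idea,
not the patched code; "new" and "old" stand for the proposed and current
break values):

    /* refuse to grow the data segment past RLIMIT_DATA (sketch) */
    if (new > old &&
        new - (vaddr_t)vm->vm_daddr > p->p_rlimit[RLIMIT_DATA].rlim_cur)
        return ENOMEM;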
<vm/vm_extern.h> merged into <uvm/uvm_extern.h>
<vm/vm_page.h> merged into <uvm/uvm_page.h>
<vm/pmap.h> has become <uvm/uvm_pmap.h>
this leaves just <vm/vm.h> in NetBSD.
<vm/pglist.h> -> <uvm/uvm_pglist.h>
<vm/vm_inherit.h> -> <uvm/uvm_inherit.h>
<vm/vm_kern.h> -> into <uvm/uvm_extern.h>
<vm/vm_object.h> -> nothing
<vm/vm_pager.h> -> into <uvm/uvm_pager.h>
also includes a bunch of <vm/vm_page.h> include removals (due to redundancy
with <vm/vm.h>), and a scattering of other similar headers.
"off_t" and the return value is a "paddr_t" to allow mappings
at offsets past 2^31 bytes. Somewhat inspired by FreeBSD, which
only changed the offset to a "vm_offset_t".
Includes updates for the i386, pc532 and sh3 mmmmap from Jason Thorpe.
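The device mmap entry point therefore changes shape roughly as follows (treat
the exact prototypes as approximate):

    #include <sys/types.h>      /* dev_t, off_t, paddr_t */

    /* old entry point: int (*d_mmap)(dev_t, int, int); -1 meant failure */
    paddr_t (*d_mmap)(dev_t dev, off_t off, int prot);  /* -1 still means failure */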
doing a cpu_set_kpc(), just pass the entry point and argument all
the way down the fork path starting with fork1(). In order to
avoid special-casing the normal fork in every cpu_fork(), MI code
passes down child_return() and the child process pointer explicitly.
This fixes a race condition on multiprocessor systems; a CPU could
grab the newly created process (which has been placed on a run queue)
before cpu_set_kpc() would be performed.
state into global and per-CPU scheduler state:
- Global state: sched_qs (run queues), sched_whichqs (bitmap
of non-empty run queues), sched_slpque (sleep queues).
NOTE: These may collectively move into a struct schedstate
at some point in the future.
- Per-CPU state, struct schedstate_percpu: spc_runtime
(time process on this CPU started running), spc_flags
(replaces struct proc's p_schedflags), and
spc_curpriority (usrpri of processes on this CPU).
- Every platform must now supply a struct cpu_info and
a curcpu() macro. Simplify existing cpu_info declarations
where appropriate.
- All references to per-CPU scheduler state now made through
curcpu(). NOTE: this will likely be adjusted in the future
after further changes to struct proc are made.
Tested on i386 and Alpha. Changes are mostly mechanical, but apologies
in advance if it doesn't compile on a particular platform.
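Accesses to the per-CPU state therefore go through curcpu(); for example
(field names as described above; a sketch, not code from any one platform):

    struct cpu_info *ci = curcpu();

    /* e.g. record the priority of the process now running on this CPU */
    ci->ci_schedstate.spc_curpriority = p->p_usrpri;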
which indicates that the process is actually running on a
processor. Test against SONPROC as appropriate rather than
combinations of SRUN and curproc. Update all context switch code
to properly set SONPROC when the process becomes the current
process on the CPU.
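The test then becomes simply (sketch):

    /* old: p->p_stat == SRUN && p == curproc */
    if (p->p_stat == SONPROC) {
        /* p is executing on a CPU right now */
    }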
uvm_map_pageable(map, ...) implies unlocking the passed map just before
the function call.
- If we bail out before calling uvm_map_pageable(), unlock the map
ourselves to prevent a ``locking against myself'' panic. The panic is
caused, for example, when cdrecord is invoked with too large a fifo size.
set up quite a few regular ones (at every fork!), so put interrupt-
safe map setup in the slow path with a __predict_false().
uvm_map_reference(): __predict_false() the check for NULL map.
uvm_map_deallocate(): Likewise.
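for example (a sketch, with the body elided):

    void
    uvm_map_reference(struct vm_map *map)
    {
        /* a NULL map is the rare case: hint that to the compiler */
        if (__predict_false(map == NULL))
            return;
        /* ... bump the map's reference count ... */
    }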
- Make page free lists have two actual queues: known-zero pages and
pages with unknown contents.
- Implement uvm_pageidlezero(). This function attempts to zero up to
the target number of pages until the target has been reached (currently
target is `all free pages') or until whichqs becomes non-zero (indicating
that a process is ready to run).
- Define a new hook for the pmap module for pre-zero'ing pages. This is
used to zero the pages using uncached access. This allows us to zero
as many pages as we want without polluting the cache.
In order to use this feature, each platform must add the appropriate
glue in their idle loop.
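The per-platform glue is small; in outline the idle loop does something like
this (a sketch; the flag name is an assumption):

    /* MD idle loop, once nothing is runnable (sketch): */
    while (sched_whichqs == 0) {
        if (uvm.page_idle_zero)         /* flag name assumed */
            uvm_pageidlezero();         /* bails out when whichqs goes non-zero */
    }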
one pmap and activating another. this isn't actually necessary (since
pmap_activate() and pmap_deactivate() affect only user-level mappings,
which cannot be accessed from interrupts anyway), and pmap_activate()
is very slow on old sun4c sparcs so we can't block interrupts for this long.
this fixes PR 8322.
uvm_page_init() has completed, add a boolean uvm.page_init_done,
and test against that. Use this same boolean (rather than
pmap_initialized) in pmap_growkernel() to determine if we are
being called via uvm_page_init() to grow the kernel address space.
This fixes a problem on some i386 configurations where pmap_init()
itself was needing to have the kernel page table grown, and since
pmap_initialized was not yet set to TRUE, pmap_growkernel() was
choosing the wrong code path.
Fix tested by Havard Eidnes.
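MD code can now test the flag directly; for example (sketch only):

    struct vm_page *pg;

    if (__predict_true(uvm.page_init_done)) {
        /* normal case: the page allocator is up */
        pg = uvm_pagealloc(NULL, 0, NULL, 0);
    } else {
        /* early bootstrap (e.g. pmap_growkernel() called from
         * uvm_page_init()): steal memory directly instead */
    }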