- rather than treating MAP_COPY like MAP_PRIVATE by sheer virtue of it not
being MAP_SHARED, actually convert the MAP_COPY flag into MAP_PRIVATE.
- return EINVAL if MAP_SHARED and MAP_PRIVATE are both included in flags.
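For reference, the flag handling described above amounts to something like the following sketch near the top of the mmap code path (the variable name and exact placement are assumptions, not the literal diff):

        /* Sketch: convert the historic MAP_COPY request into MAP_PRIVATE. */
        if (flags & MAP_COPY)
                flags = (flags & ~MAP_COPY) | MAP_PRIVATE;
        /* Sketch: asking for both sharing types at once is now an error. */
        if ((flags & (MAP_SHARED | MAP_PRIVATE)) == (MAP_SHARED | MAP_PRIVATE))
                return (EINVAL);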
which use uvm_vslock() should now test the return value. If it's not
KERN_SUCCESS, wiring the pages failed, so the operation which is using
uvm_vslock() should error out.
XXX We currently just EFAULT a failed uvm_vslock(). We may want to do
more about translating error codes in the future.
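A caller updated to the new convention looks roughly like this (a minimal sketch; the physio-style variable names and the exact uvm_vslock() argument list are assumptions):

        error = uvm_vslock(p, uio->uio_iov->iov_base, todo,
            VM_PROT_READ | VM_PROT_WRITE);
        if (error != KERN_SUCCESS) {
                /* Wiring the user pages failed; abort the operation. */
                error = EFAULT; /* XXX no finer-grained code translation yet */
                goto done;
        }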
Don't (ab)use uvm_map_pageable() to allocate PT pages. Instead, do
some internal reference counting on PT pages. We still allocate them
with the page fault routine (a wire-fault, now), but no longer free
PT pages from pmap_pageable().
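Conceptually the reference counting looks like the fragment below; the field and helper names are invented for illustration and are not the actual pmap code:

        /* Sketch: each PT page counts the valid mappings it backs. */
        ptp->refcount++;                        /* pmap_enter(): new PTE installed */

        /* Sketch: when the last mapping goes away, the PT page can be freed. */
        if (--ptp->refcount == 0)
                pmap_free_ptp(pmap, ptp);       /* hypothetical helper */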
Anyway, just because a drive doesn't support the LOAD (to BOT) command does
not mean that the drive doesn't support the UNLOAD command. Also note and
print errors in rewinds and unloads (and errors in writing closing filemarks
for same).
allocation strategy no longer works at all. Move pmap.new.* to pmap.*.
To read the revision history of PMAP_NEW up until this merge, use cvs
rlog of the old pmap.new.* files.
pmap_change_wiring(...,FALSE) unless the map entry claims the address
is unwired. This fixes the following scenario, as described on
tech-kern@netbsd.org on Wed 6/16/1999 12:25:23:
- User mlock(2)'s a buffer, to guarantee it will never become
non-resident while he is using it.
- User then does physio to that buffer. Physio calls uvm_vslock()
to lock down the pages and ensure that page faults do not happen
while the I/O is in progress (possibly in interrupt context).
- Physio does the I/O.
- Physio calls uvm_vsunlock(). This calls uvm_fault_unwire().
>>> HERE IS WHERE THE PROBLEM OCCURS <<<
uvm_fault_unwire() calls pmap_change_wiring(..., FALSE),
which now gives the pmap free rein to recycle the mapping
information for that page, which is illegal; the mapping is
still wired (due to the mlock(2)), but now access of the
page could cause a non-protection page fault (disallowed).
NOTE: This could eventually lead to a panic when the user
subsequently munlock(2)'s the buffer and the mapping info
has been recycled for use by another mapping!
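In rough form, the fix is to consult the map entry's wiring count before touching the pmap; the sketch below uses conventional UVM names and is not the literal change:

        /* Sketch: only drop the pmap-level wiring if the map entry itself
         * is no longer wired (i.e. there is no outstanding mlock(2)). */
        if (entry->wired_count == 0)
                pmap_change_wiring(vm_map_pmap(map), va, FALSE);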
the map be at least read-locked to call this function. This requirement
will be taken advantage of in a future commit.
* Write a uvm_fault_unwire() wrapper which read-locks the map and calls
uvm_fault_unwire_locked().
* Update the comments describing the locking constraints of uvm_fault_wire()
and uvm_fault_unwire().
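The wrapper is essentially just the lock/unlock bracket (a sketch; the argument types follow the usual UVM conventions and are assumed here):

        void
        uvm_fault_unwire(map, start, end)
                vm_map_t map;
                vaddr_t start, end;
        {
                vm_map_lock_read(map);
                uvm_fault_unwire_locked(map, start, end);
                vm_map_unlock_read(map);
        }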
semantics. That is, regardless of the number of mlock/mlockall calls,
an munlock/munlockall actually unlocks the region (i.e. sets wiring count
to 0).
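In userland terms that means lock requests do not nest; a minimal sketch (buffer and length are placeholders):

        /* Sketch: with the Solaris semantics, one munlock() undoes any
         * number of earlier mlock()s of the same range. */
        mlock(buf, len);
        mlock(buf, len);        /* lock the same range a second time */
        munlock(buf, len);      /* range is now fully unlocked (wiring count 0) */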
Add a comment describing why uvm_map_pageable() should not be used for
transient page wirings (e.g. for physio) -- note, it's currently only
(ab)used in this way by a few pieces of code which are known to be
broken, i.e. the Amiga and Atari pmaps, and i386 and pc532 if PMAP_NEW is
not used. The i386 GDT code uses uvm_map_pageable(), but in a safe
way, and could be trivially converted to use uvm_fault_wire() instead.
* Provide POSIX 1003.1b mlockall(2) and munlockall(2) system calls.
MCL_CURRENT is presently implemented. MCL_FUTURE is not fully
implemented. Also, the same one-unlock-for-every-lock caveat
currently applies here as it does to mlock(2). This will be
addressed in a future commit.
* Provide the mincore(2) system call, with the same semantics as
Solaris (see the usage sketch after this list).
* Clean up the error recovery in uvm_map_pageable().
* Fix a bug where a process would hang if attempting to mlock a
zero-fill region where none of the pages in that region are resident.
[ This fix has been submitted for inclusion in 1.4.1 ]
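As a usage note for the mlockall(2) and mincore(2) calls introduced above, a minimal userland sketch (error handling abbreviated; the anonymous mapping is just a placeholder):

        #include <sys/types.h>
        #include <sys/mman.h>
        #include <stdio.h>
        #include <unistd.h>

        int
        main(void)
        {
                size_t pgsz = (size_t)sysconf(_SC_PAGESIZE);
                char *buf, vec[4];
                int i;

                buf = mmap(NULL, 4 * pgsz, PROT_READ | PROT_WRITE,
                    MAP_ANON | MAP_PRIVATE, -1, 0);
                buf[0] = 1;                      /* touch one page */

                if (mlockall(MCL_CURRENT) == -1) /* lock everything mapped now */
                        perror("mlockall");

                /* One status byte per page; bit 0 set means resident. */
                if (mincore(buf, 4 * pgsz, vec) == -1)
                        perror("mincore");
                else
                        for (i = 0; i < 4; i++)
                                printf("page %d: %s\n", i,
                                    (vec[i] & 1) ? "resident" : "not resident");

                munlockall();
                return (0);
        }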
contents, a substantial optimization if the workload is right: if enough
empty segments are available, the cleaner never has to read or write *any*
blocks except those on the Ifile. When the cleaner wakes up it marks all
empty segments clean before deciding whether any further segments need to
be cleaned.
Fixed overflow bugs in the cleaner's handling of the cost/benefit metric
for empty segments.
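For context, the cleaner's segment selection conventionally uses the cost/benefit metric from the original LFS work, roughly benefit/cost = age * (1 - u) / (1 + u) where u is the segment's utilization; the sketch below only illustrates the overflow concern, uses invented names, and assumes the classical formula rather than the exact cleaner code:

        /* Sketch: use 64-bit intermediates so that age * free-bytes cannot
         * overflow for old, (nearly) empty segments. */
        u_int64_t benefit = (u_int64_t)age * (seg_size - live_bytes);
        u_int64_t cost = (u_int64_t)seg_size + live_bytes;
        score = benefit / cost;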
a bug in fragment extension that could run the count negative. Also, don't
overcount for inodes, and don't count segment summaries. Thus, for empty
segments the live bytes count should now be exactly zero.