Update coda to new struct lock in struct vnode.
Make fdescfs, kernfs, portalfs, and procfs actually lock their vnodes.
It's not that hard.
Make unionfs set v_vnlock = NULL so any overlayed fs will call its
VOP_LOCK.
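For reference, here is a minimal sketch of what "actually locking" looks
like for one of these simple fs's, assuming the lockmgr(9)-based struct
lock now embedded in struct vnode; the function name and argument layout
are illustrative, not copied from the tree:

	/*
	 * Hypothetical VOP_LOCK for a simple fs such as kernfs: just
	 * pass the request through to the vnode's own struct lock.
	 */
	int
	kernfs_lock(void *v)
	{
		struct vop_lock_args /* {
			struct vnode *a_vp;
			int a_flags;
		} */ *ap = v;
		struct vnode *vp = ap->a_vp;

		return (lockmgr(&vp->v_lock, ap->a_flags, &vp->v_interlock));
	}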
the functionality of nullfs. The latter is now just a mount & unmount
routine and a few tables. umapfs borrows most of this infrastructure.
Both fs's are now NFS-exportable.
All layered fs's share a common format for their private mount and
private vnode structs (which a particular fs can extend).
Also add genfs_noerr_rele(), a vnode op which will vrele/vput its
operand vnodes appropriately.
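The shared shape looks roughly like this (field names follow the layerfs
style but are not copied verbatim; a particular fs such as umapfs
extends by embedding):

	/* Common layered-fs private mount data (illustrative layout). */
	struct layer_mount {
		struct mount	*layerm_vfs;
		struct vnode	*layerm_rootvp;		/* ref to root node */
		struct netexport layerm_export;		/* NFS export info */
	};

	/* Common per-vnode private data, one per layered vnode. */
	struct layer_node {
		LIST_ENTRY(layer_node) layer_hash;	/* hash chain */
		struct vnode	*layer_lowervp;		/* VREFed once */
		struct vnode	*layer_vnode;		/* back pointer */
	};

	/* umapfs extends the common mount struct with its own tables. */
	struct umap_mount {
		struct layer_mount umapm_layer;		/* must come first */
		/* ... uid/gid mapping tables ... */
	};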
RAIDframe driver to stop it from eating too much kernel memory when
writing data. But that fix had a nasty side-effect of hurting write
performance (*much* more than I thought it would). These changes nuke
that "fix", and instead put in a more reasonable mechanism for limiting
the number of simultaneous IO's which can be happening for each RAID device.
The result is a noticeable improvement in write throughput. The End.
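The replacement mechanism amounts to a per-device cap on outstanding
I/Os; a hedged sketch (the names and the limit are hypothetical, not the
actual RAIDframe code):

	#define RAID_MAX_OUTSTANDING	10	/* tunable per-unit cap */

	/* Before dispatching an I/O: wait for a free slot. */
	while (rs->rs_outstanding >= RAID_MAX_OUTSTANDING)
		tsleep(&rs->rs_outstanding, PRIBIO, "raidio", 0);
	rs->rs_outstanding++;

	/* In the I/O-completion path: release the slot. */
	rs->rs_outstanding--;
	wakeup(&rs->rs_outstanding);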
struct lock. Will permit layered fs's to share locks with underlying
vnodes.
Also reduce the max # of vnodes passable in a VOP from 16 to 8. As the
most we pass is 4, this shouldn't be a problem. In addition to WILLRELE
flags, add WILLUNLOCK flags to indicate that the VOP will unlock the
vnode. Add WILLPUT flags (WILLUNLOCK | WILLRELE) to indicate that the
vop will vput the passed-in vnode.
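With one WILLRELE and one WILLUNLOCK bit per operand slot, the flag
space might be laid out like this (the values are illustrative):

	#define	VDESC_VP0_WILLRELE	0x0001
	#define	VDESC_VP1_WILLRELE	0x0002
	#define	VDESC_VP0_WILLUNLOCK	0x0100
	#define	VDESC_VP1_WILLUNLOCK	0x0200
	/* WILLPUT == the vop will vput(): unlock and release together. */
	#define	VDESC_VP0_WILLPUT \
		(VDESC_VP0_WILLRELE | VDESC_VP0_WILLUNLOCK)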
and has unlocked the parent vnode. Should only actually be returned if
the fs needs to unlock the parent and has difficulty re-locking the
parent. Needed so that layered fs's can keep track of locking.
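Intended usage in an fs's lookup routine would look something like the
following sketch, with PDIRUNLOCK as a hypothetical name for the
indicator this entry describes:

	/* Must drop the parent's lock; tell the caller we did. */
	VOP_UNLOCK(pdvp, 0);
	cnp->cn_flags |= PDIRUNLOCK;

	/* ... lookup continues ... */

	/* If we manage to re-lock the parent, clear the flag again. */
	if (vn_lock(pdvp, LK_EXCLUSIVE | LK_RETRY) == 0)
		cnp->cn_flags &= ~PDIRUNLOCK;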
"/usr" must be mounted. mount_critical_filesystems() didn't mount it,
even if listed in "critical_filesystems", if it is nfs.
Solution: introduce another rc.conf variable
"critical_filesystems_beforenet" which contains filesystems to be mounted
before "netstart".
Perhaps "netstart" should be split up, but this would make things even
more complex...
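An illustrative rc.conf(5) fragment under this scheme (the paths are
only examples):

	# Local filesystems early boot needs, mounted before "netstart":
	critical_filesystems_beforenet="/var"
	# Filesystems that may live on NFS, mounted once the net is up:
	critical_filesystems="/usr"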
pages.
XXX This should be handled better in the future, probably by marking the
XXX page as released, and making uvm_pageunwire() free the page when
XXX the wire count on a released page reaches zero.
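In code terms, the handling the XXX note suggests might look like this
inside uvm_pageunwire() (a sketch of the note's idea, not existing
code):

	pg->wire_count--;
	if (pg->wire_count == 0) {
		uvmexp.wired--;
		if (pg->flags & PG_RELEASED)
			uvm_pagefree(pg);	/* the note's proposal */
		else
			uvm_pageactivate(pg);	/* back onto page queues */
	}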
* Implement MADV_DONTNEED: deactivate pages in the specified range,
semantics similar to Solaris's MADV_DONTNEED.
* Add MADV_FREE: free pages and swap resources associated with the
specified range, causing the range to be reloaded from backing
store (vnodes) or zero-fill (anonymous), semantics like FreeBSD's
MADV_FREE and like Digital UNIX's MADV_DONTNEED (isn't it SO GREAT
that madvise(2) isn't standardized!?)
As part of this, move the non-map-modifying advice handling out of
uvm_map_advise(), and into sys_madvise().
As another part, implement general amap cleaning in uvm_map_clean(), and
change uvm_map_clean() to only push dirty pages to disk if PGO_CLEANIT
is set in its flags (and update sys___msync13() accordingly). XXX Add
a patchable global "amap_clean_works", defaulting to 1, which can disable
the amap cleaning code, just in case problems are unearthed; this gives
a developer/user a quick way to recover and send a bug report (e.g. boot
into DDB and change the value).
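For instance, a hedged ddb(4) escape hatch (the variable name comes from
the entry above):

	db> write amap_clean_works 0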
XXX Still need to implement a real uao_flush().
XXX Need to update the manual page.
With these changes, rebuilding libc will automatically cause the new
malloc(3) to use MADV_FREE to actually release pages and swap resources
when it decides that can be done.
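For illustration, a hedged userland sketch of the MADV_FREE usage
described above (the helper is hypothetical):

	#include <sys/mman.h>
	#include <stddef.h>
	#include <err.h>

	/*
	 * Hand the pages and swap behind an idle buffer back to the
	 * system while keeping the mapping; the next touch refaults as
	 * zero-fill (anonymous) or re-reads from backing store (vnode).
	 */
	static void
	release_pages(void *buf, size_t len)
	{
		if (madvise(buf, len, MADV_FREE) == -1)
			warn("madvise");
	}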
* Nothing currently uses this return value.
* It's arguably an abstraction violation.
Fix amap_unadd()'s API to be consistent w/ amap_add()'s: rather than
take a vm_amap * and a slot number, take a vm_aref * and an offset.
It's now actually possible to use amap_unadd() to remove an anon from
an amap.
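In prototype form the change is roughly this (a sketch, not the actual
headers):

	/* Before: caller needed the amap itself and a raw slot number. */
	void	amap_unadd(struct vm_amap *, int);

	/* After: mirrors amap_add(), taking an aref and an offset. */
	void	amap_unadd(struct vm_aref *, vaddr_t);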