in vfs_detach(). vfs_done may free a filesystem's global resources,
typically those allocated in the respective filesystem's init function.
This is needed so that filesystems which went in via LKM have a chance
to clean up after themselves before unloading. This fixes random panics
seen when an LKM for a filesystem using pools was loaded and unloaded
several times.
For each leaf filesystem, add appropriate vfs_done routine.
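A minimal sketch of such a done routine, assuming a filesystem whose
init function creates a pool (all names here are illustrative):

    extern struct pool somefs_node_pool;    /* created in somefs_init() */

    void
    somefs_done(void)
    {
        /* undo somefs_init(): destroy the pool so the module can be
         * unloaded and reloaded without panicking on a stale pool */
        pool_destroy(&somefs_node_pool);
    }
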
UFS code, and I forgot to rename the "ihash" variable, causing
weird effects, because three quarters of the UFS hash table would
become unreachable after procfs was loaded as an LKM.
process is about to exec a sugid binary.
To speed things up, use hashing for vnode allocation, as other filesystems
do. This avoids walking the whole procfs node list in the revoke case too.
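A hypothetical sketch of the hashed lookup, keyed on (pid, node type);
the identifiers are illustrative, not the actual procfs names:

    struct pfsnode *
    pfs_hashget(pid_t pid, int type)
    {
        struct pfsnode *pp;

        /* search a single hash bucket instead of walking the whole
         * procfs node list */
        for (pp = pfs_hashtbl[pid & pfs_hashmask].lh_first;
            pp != NULL; pp = pp->pfs_hash.le_next)
            if (pp->pfs_pid == pid && pp->pfs_type == type)
                return (pp);
        return (NULL);
    }
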
default, as the copyright on the main file (ffs_softdep.c) is such
that it has been put into gnusrc. options SOFTDEP will pull this
in. This code also contains the trickle syncer.
Bump version number to 1.4O
not set, unlock the vnode before calling the device's close routine and
relock it after it returns. tty close routines will sleep waiting for
buffers to drain, which often won't happen, as the other side needs
to grab the vnode lock first.
Make all unmount routines lock the device vnode before calling VOP_CLOSE().
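A sketch of the unlock/relock dance, assuming a spec_close()-style
context:

    /* drop the vnode lock so the other side can grab it and let the
     * buffers drain, then relock once the device close returns */
    VOP_UNLOCK(vp, 0);
    error = (*cdevsw[major(dev)].d_close)(dev, flag, mode, p);
    vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
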
The problem turned out to be due to improper handling of reads beyond EOF:
they should just return without error with the uio unchanged, and the
caller will recognize this as a zero-byte return (EOF).
The previous fix to protect directory reads against bogus uio_offset
values returned EINVAL, which broke mount -o union, which only
union'ed in the lower directory if the upper directory cleanly
returned EOF.
While we're here, protect kernfs as well.
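A sketch of the EOF behavior described above, in a directory read
routine (end_of_dir is illustrative):

    /* a read at or beyond EOF succeeds without touching the uio;
     * the caller sees the zero-byte transfer as EOF */
    if (uio->uio_offset >= end_of_dir)
        return (0);
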
call with F_FSCTL set and F_SETFL calls generate calls to a new
fileop fo_fcntl. Add genfs_fcntl() and soo_fcntl() which return 0
for F_SETFL and EOPNOTSUPP otherwise. Have all leaf filesystems
use genfs_fcntl().
Reviewed by: thorpej
Tested by: wrstuden
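A sketch of genfs_fcntl() per the description above; the exact
argument struct is an assumption:

    int
    genfs_fcntl(void *v)
    {
        struct vop_fcntl_args /* {
            struct vnode *a_vp;
            u_int a_command;
            caddr_t a_data;
            int a_fflag;
            struct ucred *a_cred;
            struct proc *a_p;
        } */ *ap = v;

        if (ap->a_command == F_SETFL)
            return (0);
        else
            return (EOPNOTSUPP);
    }
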
exists is bogus. The goal here is to produce a synthetic link count
which won't confuse fts and similar routines which "know" that
directories with a link count of 2 don't have subdirectories (and
thus, they can avoid having to stat every entry in the directory
looking for subdirectories which aren't there).
We know that non-UNIX filesystem implementations may return a link
count of `1' for directories with an indeterminate number of
subdirectories; if either the upper or lower layer returns a link
count of `1', return a link count of 1. If both layers return a link
count of 2, return a link count of 2; otherwise, return the sum of the
link counts of both layers.
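A sketch of the rule, given the two layers' link counts (variable
names are illustrative):

    if (upper_nlink == 1 || lower_nlink == 1)
        va->va_nlink = 1;                  /* indeterminate */
    else if (upper_nlink == 2 && lower_nlink == 2)
        va->va_nlink = 2;                  /* no subdirectories */
    else
        va->va_nlink = upper_nlink + lower_nlink;
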
Also, fix PR7430: unionfs ignores read-only mounts. Check for
MNT_RDONLY in union_lookup (more-or-less as in layer_lookup) as well
as union_access() and union_setattr().
Note that a read-only union layer may still cause side effects on the
underlying filesystems... Most notably, we'll still attempt to create
shadow directories in the upper layer. Also, of course, we'll
side-effect atimes in the lower layer.
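A sketch of the check added to union_lookup, following the
layer_lookup pattern (exact context assumed):

    /* refuse modifying lookups on a read-only union mount */
    if ((cnp->cn_flags & ISLASTCN) &&
        (dvp->v_mount->mnt_flag & MNT_RDONLY) &&
        (cnp->cn_nameiop == DELETE || cnp->cn_nameiop == RENAME))
        return (EROFS);
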
"panic: lockmgr: using decommisioned lock"
(only if DIAGNOSTIC)
The problem turned out to be due to the way LK_DRAIN was (not) handled
in union_lock: it just got passed through to the lock on the upper
vnode, so the upper vnode got marked as decommissioned instead of that
happening to the union vnode. When the upper vnode was next locked
(typically when it was released), it went kaboom.
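A hypothetical sketch of the corrected handling (the real union_lock
is more involved):

    /* drain the union vnode's own lock; never forward LK_DRAIN to
     * the upper vnode, or the upper vnode is decommissioned in the
     * union vnode's place */
    if ((flags & LK_TYPE_MASK) == LK_DRAIN)
        return (lockmgr(&vp->v_lock, flags, &vp->v_interlock));
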
and PID allocation MP-safe. A new process state is added: SDEAD. This
state indicates that a process is dead, but not yet a zombie (has not
yet been processed by the process reaper).
SDEAD processes exist on both the zombproc list (via p_list) and deadproc
(via p_hash; the proc has been removed from the pidhash earlier in the exit
path). When the reaper deals with a process, it changes the state to
SZOMB, so that wait4 can process it.
Add a P_ZOMBIE() macro, which treats a proc in SZOMB or SDEAD as a zombie,
and update various parts of the kernel to reflect the new state.
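The macro itself is one line; this matches its historical NetBSD
definition:

    #define P_ZOMBIE(p)  ((p)->p_stat == SZOMB || (p)->p_stat == SDEAD)
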
getnewvnode now checks this bit, and if it's set, makes sure the vnode
is not locked before removing it from the free list.
Closes PR 7954 by Alan Barrett <apb@iafrica.com>.
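A hypothetical sketch of the check in getnewvnode(); the flag name
VLAYER is my assumption for "this bit", which the text above elides:

    /* scanning the free list: skip a layered vnode that is still
     * locked through its underlying vnode */
    if ((vp->v_flag & VLAYER) && VOP_ISLOCKED(vp))
        continue;
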
Update coda to new struct lock in struct vnode.
Make fdescfs, kernfs, portalfs, and procfs actually lock their vnodes.
It's not that hard.
Make unionfs set v_vnlock = NULL so any overlayed fs will call its
VOP_LOCK.
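The unionfs side is a one-line sketch (context assumed):

    /* export no shareable lock: filesystems stacked over this vnode
     * fall through to unionfs's own VOP_LOCK */
    vp->v_vnlock = NULL;
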
the functionality of nullfs. The latter is now just a mount & unmount
routine, and a few tables. umapfs borrows most of this infrastructure.
Both fs's are now nfs-exportable.
All layered fs's share a common format for private mount & private
vnode structs (which a particular fs can extend).
Also add genfs_noerr_rele(), a vnode op which will vrele/vput
operand vnodes appropriately.
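A sketch of the shared layout, with field names assumed; the generic
part comes first so common code can cast either way:

    struct layer_mount {
        struct mount    *layerm_vfs;
        struct vnode    *layerm_rootvp;    /* root of this layer */
        int             layerm_flags;
    };

    struct umap_mount {
        struct layer_mount umapm_la;       /* generic part, first */
        /* umap-specific id-mapping tables follow */
    };
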
umap_unlock()) so as to not explicitly depend on nullfs being compiled
into the kernel.
umap_bypass won't be too slow, as there are no credentials in these
two ops that need mapping.
count is 0, wait for the use count to drain before finishing the close.
This is necessary in order for multiple processes to safely share file
descriptor tables.
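A hypothetical sketch of the drain, with field names assumed: the
closing process sleeps until in-flight users of the struct file are
gone, then completes the close:

    /* last reference dropped; wait out transient users */
    while (fp->f_usecount > 0)
        tsleep(fp, PRIBIO, "closef", 0);
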
deadlock in VOP_FSYNC() if the unreferenced vnode picked for
reclamation happened to be stacked on top of a vnode the process
already had locked. This could happen if the same filesystem was
accessed both through a union mount and directly; it seemed to happen
most frequently when the direct access was through NFS.
Avoid this deadlock by changing vinvalbuf to pass a new FSYNC_RECLAIM
flag bit to VOP_FSYNC() to indicate that a reclaim is in progress and
only a `shallow' fsync is necessary.
Do nothing in *_fsync() in umapfs, nullfs, and unionfs when
FSYNC_RECLAIM is set; the underlying vnodes will shortly be released
in *_reclaim and may be reclaimed (and fsync'ed) later.
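A sketch of the layered-fs side; the argument struct and the bypass
call are assumptions:

    int
    layer_fsync(void *v)
    {
        struct vop_fsync_args /* {
            struct vnode *a_vp;
            struct ucred *a_cred;
            int a_flags;
            struct proc *a_p;
        } */ *ap = v;

        /* reclaim in progress: a shallow fsync suffices, since the
         * underlying vnode will be fsync'ed when it is itself
         * reclaimed */
        if (ap->a_flags & FSYNC_RECLAIM)
            return (0);

        /* otherwise hand the fsync to the vnode below */
        return (layer_bypass(v));
    }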