The problem was due to an interaction between the doomed unmounts done by
amd and getnewvnode.
I convinced myself that it's ok for getnewvnode() to do a sleeping vfs_busy().
Tested with multiple builds running while another process attempted to unmount
/usr once a second.
too. Remove some needless code duplication by adding a "drain" argument
to the ACQUIRE() macro (the compiler can [and does] optimize away the constant
conditional).
locking primitive directly to lock it, since those will never attempt
to call printf() to display debugging information (and thus deadlock
on recursion into the kprintf_slock).
- Now compatible with MULTIPROCESSOR (requires other changes not yet
committed, but which will be later today).
- In addition to tracking simple locks, track exclusive spin locks.
- Count spin locks like we do sleep locks (in the cpu_info for this
CPU).
- Lock debug lists are now TAILQs, so as to make the locking order
more obvious when dumping the list.
Also, some suggestions from Bill Sommerfeld:
- SIMPLELOCK_LOCKED and SIMPLELOCK_UNLOCKED constants, which may be
defined in <machine/lock.h> (default to 1 and 0, respectively). This
makes it easier to support architectures which use test-and-clear
rather than test-and-set.
- Add __attribute__((__aligned__)) to the `lock_data' member of the
simplelock structure. This makes it easier to support architectures
which can only perform atomic operations on very-well-aligned memory
locations. NOTE: This changes the size of struct simplelock, and
will cause a version bump.
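A minimal sketch of what a <machine/lock.h> for a test-and-clear architecture
might provide under this scheme (the values, the struct layout, and the extra
fields hinted at here are illustrative only, not the actual MD headers or
sys/lock.h):

    /* Locked state is the *cleared* word on this hypothetical CPU. */
    #define SIMPLELOCK_LOCKED    0
    #define SIMPLELOCK_UNLOCKED  1

    struct simplelock {
        /*
         * Keep the atomic target well-aligned; some CPUs can only
         * perform their atomic primitive on suitably aligned words.
         */
        volatile int lock_data __attribute__((__aligned__));
        /* ... LOCKDEBUG bookkeeping fields would follow here ... */
    };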
first in line for the specified identifier. For use in places where
you don't want a Thundering Herd.
While here, add an optimization to wakeup() suggested by Ross Harvey.
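A hedged usage sketch (the resource structure, field names and wmesg are made
up; tsleep() and wakeup_one() are the real primitives): where only one waiter
can make progress per release, waking a single sleeper avoids the stampede:

    /* waiter: sleep until the (hypothetical) resource is free */
    while (res->busy) {
        res->wanted = 1;
        tsleep(res, PRIBIO, "resfree", 0);
    }
    res->busy = 1;

    /* release: only one waiter can grab it, so wake just the first */
    res->busy = 0;
    if (res->wanted) {
        res->wanted = 0;
        wakeup_one(res);
    }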
calls to reflect this. Also, block statclock rather than softclock in the
proclist locking functions, to address a problem reported on
current-users by Sean Doran.
write lock when doing PID allocation, and during the process exit path.
Use a read lock everywhere else, including within schedcpu() (interrupt
context). Note that holding the write lock implies blocking schedcpu()
from running (blocks softclock).
PID allocation is now MP-safe.
Note this actually fixes a bug on single processor systems that was probably
extremely difficult to tickle; it was possible that schedcpu() would run
off a bad pointer if the right clock interrupt happened to come in the
middle of a LIST_INSERT_HEAD() or LIST_REMOVE() to/from allproc.
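The intended discipline, sketched below (function names follow the
proclist_lock_read()/proclist_lock_write() style used in kern_proc.c; treat
the exact signatures as an assumption here):

    struct proc *p;
    int s;

    /* reader, e.g. schedcpu() walking allproc from interrupt context */
    proclist_lock_read();
    for (p = LIST_FIRST(&allproc); p != NULL; p = LIST_NEXT(p, p_list)) {
        /* ... examine p; the list cannot change underneath us ... */
    }
    proclist_unlock_read();

    /* writer, e.g. PID allocation or the exit path modifying the lists */
    s = proclist_lock_write();
    LIST_INSERT_HEAD(&allproc, p, p_list);
    proclist_unlock_write(s);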
and PID allocation MP-safe. A new process state is added: SDEAD. This
state indicates that a process is dead, but not yet a zombie (has not
yet been processed by the process reaper).
SDEAD processes exist on both the zombproc list (via p_list) and deadproc
(via p_hash; the proc has been removed from the pidhash earlier in the exit
path). When the reaper deals with a process, it changes the state to
SZOMB, so that wait4 can process it.
Add a P_ZOMBIE() macro, which treats a proc in SZOMB or SDEAD as a zombie,
and update various parts of the kernel to reflect the new state.
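The macro amounts to a simple state test along these lines (a sketch; see
sys/proc.h for the authoritative definition):

    /* A proc counts as a zombie once it is dead, reaped or not. */
    #define P_ZOMBIE(p)    ((p)->p_stat == SZOMB || (p)->p_stat == SDEAD)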
body of reaper(), right before the call to uvm_exit(). cpu_wait() must
be done before uvm_exit() because the resources it frees might be located
in the PCB.
Remove simplelockrecurse, lockpausetime and PAUSE(): none of these serve
any purpose anymore.
In the LOCKDEBUG functions, expand the splhigh() region to cover the
entire function; without this there can still be races.
- When the exit signal is specified to be 0, don't just assume they
meant SIGCHLD. In the Linux world, this appears to mean "don't deliver
an exit signal at all".
- Simplify P_EXITSIG(); don't check against initproc here, just change
the exit signal to SIGCHLD if reparenting to initproc.
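An illustrative sketch of the two rules (not the literal kernel code; the
reparenting site and the p_exitsig member usage shown here are assumptions):
the SIGCHLD override happens where the child is handed to init, and an exit
signal of 0 now really means "deliver nothing":

    /* when a child is handed off to init (e.g. its parent exited),
       force the conventional signal regardless of what clone(2) set */
    proc_reparent(child, initproc);
    child->p_exitsig = SIGCHLD;

    /* on exit, an exit signal of 0 means "no exit signal at all" */
    if (P_EXITSIG(p) != 0)
        psignal(p->p_pptr, P_EXITSIG(p));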
A very simple clone(2) test program now works, and the MpegTV package
starts, but doesn't run properly yet (I believe there is a separate
bug which keeps it from working properly).
getnewvnode now checks this bit, and if it's set, makes sure the vnode is not
locked before removing it from the free list.
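A sketch of that free-list check (the flag name VLOCKSWORK stands in for
"this bit" and should be treated as an assumption; simple_lock_try() and
VOP_ISLOCKED() are the real primitives):

    /* while scanning vnode_free_list in getnewvnode() */
    if (simple_lock_try(&vp->v_interlock)) {
        if ((vp->v_flag & VLOCKSWORK) == 0 ||
            VOP_ISLOCKED(vp) == 0)
            break;              /* safe to recycle this vnode */
        simple_unlock(&vp->v_interlock);
    }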
Closes PR 7954 by Alan Barrett <apb@iafrica.com>.
Fix and document naming convention for vnode variables (always use
lvp/lvpp and uvp/uvpp instead of a hash of cvp, vpp, dvpp, pvp, pvpp).
Delete old stale #if 0'ed code at the end.
Change error path code in getcwd_getcache() slightly (merge common
cleanup code; shouldn't affect behavior any).
mp->mnt_flags & MNT_MWAIT is replaced by mp->mnt_wcnt, and a new mount
flag MNT_GONE is created (reusing the same bit).
In insmntque(), add a DIAGNOSTIC check to fail if the filesystem the vnode
is being moved to is in the process of being unmounted.
getnewvnode() now protects the list of vnodes active on mp with
vfs_busy()/vfs_unbusy().
To avoid generating spurious errors during a doomed unmount, change
the "wait for unmount to finish" protocol between dounmount() and
vfs_busy(). In vfs_busy(), instead of only sleeping once, sleep until
either MNT_UNMOUNT is clear or MNT_GONE is set; also, maintain a count
of waiters in mp->mnt_wcnt so that dounmount() knows when it's safe to
free mp.
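A sketch of the two halves of the protocol as described above (the wmesg
strings and the exact wakeup choreography are illustrative; see vfs_subr.c
and vfs_syscalls.c for the committed code):

    /* vfs_busy(): wait out an unmount in progress */
    while (mp->mnt_flag & MNT_UNMOUNT) {
        int gone;

        if (flags & LK_NOWAIT)
            return (ENOENT);
        mp->mnt_wcnt++;
        tsleep((caddr_t)mp, PVFS, "vfs_busy", 0);
        gone = mp->mnt_flag & MNT_GONE;
        mp->mnt_wcnt--;
        if (mp->mnt_wcnt == 0)
            wakeup(&mp->mnt_wcnt);      /* dounmount() may be waiting */
        if (gone)
            return (ENOENT);            /* mount is history; don't touch mp */
    }

    /* dounmount(): after a successful unmount, let the waiters drain
       before freeing the struct mount */
    mp->mnt_flag |= MNT_GONE;
    wakeup((caddr_t)mp);
    while (mp->mnt_wcnt > 0)
        tsleep(&mp->mnt_wcnt, PVFS, "mntgone", 0);
    free(mp, M_MOUNT);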
Tested by running a "while :; do mount /d1; umount -f /d1; done" loop
against multiple find(1) processes.
(Sorry for a big commit, I can't separate this into several pieces...)
Please check sys/netinet6/TODO and sys/netinet6/IMPLEMENTATION for details.
- sys/kern: do not assume a single mbuf; accept a chained mbuf when passing
data from userland to kernel (or the other way around). See the sketch
after this entry.
- "midway" ATM card: ATM PVC pseudo device support, like those done in ALTQ
package (ftp://ftp.csl.sony.co.jp/pub/kjc/).
- sys/netinet/tcp*: IPv4/v6 dual stack tcp support.
- sys/netinet/{ip6,icmp6}.h, sys/net/pfkeyv2.h: IETF documents assume these
files to be there, so we patch them up.
- sys/netinet: IPsec additions are here and there.
- sys/netinet6/*: most of IPv6 code sits here.
- sys/netkey: IPsec key management code
- dev/pci/pcidevs: regen
In my understanding no code here is subject to export control so it
should be safe.
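For the sys/kern item above, the pattern is roughly as below (struct foo_opt
and the variable names are made up for illustration; m_copydata() is the real
primitive): gather the argument out of a possibly-chained mbuf instead of
assuming mtod() sees all of it:

    struct foo_opt opt;
    struct mbuf *n;
    int len;

    /* total length may be spread over several mbufs */
    len = 0;
    for (n = m; n != NULL; n = n->m_next)
        len += n->m_len;
    if (len < sizeof(opt))
        return (EINVAL);

    /* copies across mbuf boundaries; a bare mtod(m, ...) would not */
    m_copydata(m, 0, sizeof(opt), (caddr_t)&opt);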
listen/accept (PR_LISTEN flag in protosw) and detect obvious faults in
parameters passed. It is still possible for the address used for copying
the socket information to become invalid between that check and the copyout,
so if the copyout fails, close the fd allocated for the connection; that way
we can return EFAULT without leaving an fd allocated that the application
knows nothing about. Ideally we'd be able to queue the connection back up so
a later accept could retrieve it, but unfortunately that's not possible.
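A sketch of the tail of sys_accept() implementing the behaviour described
above (the names new_fd, nam and namelen are assumptions; copyout() and
m_freem() are real primitives, and fdrelease() stands for whatever releases
the descriptor again):

    error = copyout(mtod(nam, caddr_t), (caddr_t)SCARG(uap, name),
        namelen);
    if (error) {
        /*
         * The new connection's fd would otherwise be left allocated
         * with the application never learning its number; take it
         * back so we can fail cleanly with EFAULT.
         */
        fdrelease(p, new_fd);
    }
    m_freem(nam);
    return (error);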