and the system call now just returns EFAULT). A complete fix will
presumably have to wait for UBC and/or for vnode locking protocols to
be revamped to allow use of shared locks.
same uid or by root.
This code is from FreeBSD. (Whilst it was originally obtained from OpenBSD,
FreeBSD fixed it to work with multicast. To quote the commit message:
- Don't bother checking for conflicting sockets if we're binding to a
multicast address.
- Don't return an error if we're binding to INADDR_ANY, the conflicting
socket is bound to INADDR_ANY, and the conflicting socket has
SO_REUSEPORT set.
)
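As a sketch of the quoted logic (hypothetical function and parameter
names; in the real sources this lives inside in_pcbbind(), which also
handles SO_REUSEADDR and other cases elided here):

    #include <errno.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>

    /*
     * Sketch: does binding to sin_addr conflict with an existing
     * socket bound to t_addr whose SO_REUSEPORT state is t_reuseport?
     */
    static int
    bind_conflicts(struct in_addr sin_addr, struct in_addr t_addr,
        int t_reuseport)
    {
        /* Binding to a multicast address: skip the conflict check. */
        if (IN_MULTICAST(ntohl(sin_addr.s_addr)))
            return (0);

        /* INADDR_ANY vs. INADDR_ANY is fine if the existing
         * socket has SO_REUSEPORT set. */
        if (sin_addr.s_addr == INADDR_ANY &&
            t_addr.s_addr == INADDR_ANY && t_reuseport)
            return (0);

        return (EADDRINUSE);
    }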
directories which aren't under the recipient's root.
Cleanup of many error conditions involving descriptor passing, to
eliminate infinite loops, panics, premature garbage collection of
sockets, and descriptor leaks:
- Avoid letting unp_gc() see descriptors with a refcount of zero by
removing them from the socket's queue before releasing them.
- Avoid socket leak in PRU_ABORT (this will also gc descriptors queued
on a not-yet-accepted socket when the accepting socket goes away).
- Put in block comment explaining how unp_gc() should work.
- Correctly manage unp_defer count so we don't get stuck in an infinite
loop with nothing to do.
- Don't tie MARK and DEFER bits so closely together.
- Mark descriptors queued on not-yet-accepted sockets as well.
- Don't call sorflush() on a non-socket; it doesn't work very well.
- Deal with discard of NULL file pointer.
- Hopefully cause GC to converge faster by only deferring sockets in
unp_mark().
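For the last item, a minimal sketch of what "only deferring sockets"
means, modeled loosely on the BSD unp_mark() (kernel context assumed;
flag and field names vary between versions):

    static void
    unp_mark(struct file *fp)
    {
        if (fp->f_flag & FMARK)
            return;
        if (fp->f_type == DTYPE_SOCKET) {
            /* Only sockets can carry descriptors in flight, so
             * only they need a deferred rescan by unp_gc(). */
            unp_defer++;
            fp->f_flag |= (FMARK | FDEFER);
        } else
            fp->f_flag |= FMARK;
    }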
limit into account when checking against the limit; fdp->fd_nfiles may
be greater than the current descriptor limit, and there may be space
in fdp->fd_ofiles beyond the limit. If we say it's available,
unp_externalize will get confused and panic when fdalloc fails.
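What the corrected check looks like, assuming it lives in fdavail() as
in the BSD sources (kernel context assumed, names approximate): both
the slack above fd_nfiles and the scan of fd_ofiles are bounded by the
current limit.

    static int
    fdavail(struct proc *p, int n)
    {
        struct filedesc *fdp = p->p_fd;
        struct file **fpp;
        int i, lim;

        lim = min((int)p->p_rlimit[RLIMIT_NOFILE].rlim_cur, maxfiles);
        if ((i = lim - fdp->fd_nfiles) > 0 && (n -= i) <= 0)
            return (1);
        /* Scan only the slots below the limit, not all of fd_nfiles. */
        fpp = &fdp->fd_ofiles[fdp->fd_freefile];
        for (i = min(lim, fdp->fd_nfiles) - fdp->fd_freefile;
            --i >= 0; fpp++)
            if (*fpp == NULL && --n <= 0)
                return (1);
        return (0);
    }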
deadlock in VOP_FSYNC() if the unreferenced vnode picked for
reclamation happened to be stacked on top of a vnode the process
already had locked. This could happen if the same filesystem was
accessed both through a union mount and directly; it seemed to happen
most frequently when the direct access was through NFS.
Avoid this deadlock by changing vinvalbuf to pass a new FSYNC_RECLAIM
flag bit to VOP_FSYNC() to indicate that a reclaim is in progress and
only a `shallow' fsync is necessary.
Do nothing in *_fsync() in umapfs, nullfs, and unionfs when
FSYNC_RECLAIM is set; the underlying vnodes will shortly be released
in *_reclaim and may be reclaimed (and fsync'ed) later.
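In the layered filesystems the change amounts to a sketch like this
(LOWERVP() is a stand-in for however a layer finds its underlying
vnode; argument-struct field names approximate):

    int
    layer_fsync(void *v)
    {
        struct vop_fsync_args *ap = v;

        /* Reclaim in progress: a shallow fsync suffices; the lower
         * vnode is released in *_reclaim and may be fsync'ed when
         * it is reclaimed itself. */
        if (ap->a_flags & FSYNC_RECLAIM)
            return (0);

        return (VOP_FSYNC(LOWERVP(ap->a_vp), ap->a_cred, ap->a_flags,
            ap->a_p));
    }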
was changes to comments only, but..)
Build vfs_getcwd.c as standard part of kernel.
Add implementation of fchroot(), since two emulations already had it.
Call vn_isunder() in fchdir(), chroot(), and fchroot() to make it harder
to escape chroot().
Add kernel implementation of getcwd() which uses this cache, falling
back to reading the filesystem on a cache miss (sketched below).
Along for the ride: add new VOP_FSYNC flag FSYNC_RECLAIM indicating
that a reclaim is being done, so only a "shallow" fsync is needed.
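The getcwd() strategy sketched (hypothetical helper names; the real
code is in vfs_getcwd.c): build the path backwards, one child-to-parent
step at a time, asking the name cache first and reading the parent
directory only on a miss.

    /* Resolve one step from vp up to its parent, prepending the
     * component name at *bpp (the buffer is filled from the end). */
    static int
    getcwd_step(struct vnode *vp, struct vnode **dvpp, char **bpp,
        char *bufp, struct proc *p)
    {
        /* Try the vnode-to-name cache first... */
        if (revlookup_sketch(vp, dvpp, bpp, bufp) == 0)
            return (0);

        /* ...and fall back to reading ".." and scanning the parent
         * directory for vp's entry on a cache miss. */
        return (scandir_sketch(vp, dvpp, bpp, bufp, p));
    }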
some things (e.g. unionfs) may depend on it. It's currently ok
for vnodecovered to be set already; it's not for v_mountedhere in
the vnode, though.
From John Darrow.
XXX should probably just extend VFS_MOUNT to take the vnode pointer as
an argument.
since the lock may be taken again. This was the intention of the
CANRECURSE lock that was already there, but it didn't work.
Only fill in the vnode<->mountpoint links (mountedhere and vnodecovered)
after VFS_MOUNT returned successfully. It could happen that something
called from VFS_MOUNT mistook the vnode for one that was already
successfully mounted on.
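A sketch of the ordering (names approximate): the covered vnode only
starts to look mounted-on once VFS_MOUNT() has succeeded.

    static int
    mount_publish(struct mount *mp, struct vnode *vp, char *path,
        void *data, struct nameidata *ndp, struct proc *p)
    {
        int error;

        error = VFS_MOUNT(mp, path, data, ndp, p);
        if (error)
            return (error);    /* vp never looked mounted-on */

        /* Only a successful VFS_MOUNT() makes the links visible. */
        vp->v_mountedhere = mp;
        mp->mnt_vnodecovered = vp;
        return (0);
    }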
could be done in one of 2 ways:
* call lk_init with LK_CANRECURSE, resulting in a lock that
always can be used recursively.
* call lockmgr with LK_CANRECURSE, meaning that it's ok if this
lock is already held by us.
Sometimes we need a locking type that says: take this lock now, exclusively,
but while I am holding it, I may go through a code path which could attempt
to get the lock again, and which is unaware that the lock might already
be taken. Implement LK_SETRECURSE for this purpose. Assume that locks and
unlocks come in matching pairs (they should), and check for this 'level'
using SETRECURSE locks.
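A minimal sketch of the intended use, assuming the lockmgr(9) interface
of the time:

    #include <sys/lock.h>

    void
    setrecurse_sketch(struct lock *lkp)
    {
        /* Outer caller: take the lock exclusively and arm recursion. */
        lockmgr(lkp, LK_EXCLUSIVE | LK_SETRECURSE, NULL);

        /* A nested code path, unaware the lock is held, takes and
         * releases it again; without LK_SETRECURSE this deadlocks. */
        lockmgr(lkp, LK_EXCLUSIVE, NULL);
        lockmgr(lkp, LK_RELEASE, NULL);

        /* Matching release for the outer acquisition. */
        lockmgr(lkp, LK_RELEASE, NULL);
    }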
to pass down a locked node. Modify union_copyup() to call VOP_CLOSE
with locked nodes.
Also fix a bug in union_copyup() where a lock on the lower vnode would
only be released if VOP_OPEN didn't fail.
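Sketched with the details elided, the corrected flow keeps the lower
vnode locked across VOP_CLOSE and unlocks it on every path:

    /* lvp is locked on entry and unlocked on return. */
    static int
    copyup_sketch(struct vnode *lvp, struct ucred *cred, struct proc *p)
    {
        int error;

        error = VOP_OPEN(lvp, FREAD, cred, p);
        if (error == 0) {
            /* ... copy lvp's contents to the upper layer ... */
            VOP_CLOSE(lvp, FREAD, cred, p);    /* lvp still locked */
        }
        VOP_UNLOCK(lvp, 0);    /* unlock even when VOP_OPEN failed */
        return (error);
    }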
* fix the ancient nice(1) bug, where nice +20 processes incorrectly
steal 10 - 20% of the CPU, (or even more depending on load average)
* provide a new schedclk() mechanism at a new clock at schedhz, so high
platform hz values don't cause nice +0 processes to look like they are
niced
* change the algorithm slightly, and reorganize the code a lot
* fix percent-CPU calculation bugs, and eliminate some no-op code
=== nice bug === Correctly divide the scheduler queues between niced and
compute-bound processes. The current nice weight of two (sort of, see
`algorithm change' below) neatly divides the USRPRI queues in half; this
should have been used to clip p_estcpu, instead of UCHAR_MAX. Besides
being the wrong amount, clipping an unsigned char to UCHAR_MAX is a no-op,
and it was done after decay_cpu() which can only _reduce_ the value. It
has to be kept <= NICE_WEIGHT * PRIO_MAX - PPQ or processes can
scheduler-penalize themselves onto the same queue as nice +20 processes.
(Or even a higher one.)
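In code, the corrected clip (macro names from the BSD scheduler):

    /* Was min((e), UCHAR_MAX): both the wrong amount and a no-op
     * on an unsigned char. */
    #define ESTCPULIM(e)    min((e), NICE_WEIGHT * PRIO_MAX - PPQ)

    /* Applied wherever scheduler history is charged, e.g.: */
    p->p_estcpu = ESTCPULIM(p->p_estcpu + 1);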
=== New schedclk() mechanism === Some platforms should be cutting down
stathz before hitting the scheduler, since the scheduler algorithm only
works right in the vicinity of 64 Hz. Rather than prescale hz, then scale
back and forth by 4 every time p_estcpu is touched (each occurrence an
abstraction violation), use p_estcpu without scaling and require schedhz
to be generated directly at the right frequency. Use a default of
stathz (well, actually, profhz) / 4, so nothing changes unless a
platform defines schedhz and a new clock. Define these for alpha, where
hz==1024, and nice was totally broken.
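The hook itself is tiny; a sketch of schedclk(), called at schedhz and
charging p_estcpu with no scaling (using the clip shown above):

    void
    schedclk(struct proc *p)
    {
        p->p_estcpu = ESTCPULIM(p->p_estcpu + 1);
        resetpriority(p);
        if (p->p_priority >= PUSER)
            p->p_priority = p->p_usrpri;
    }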
=== Algorithm change === The nice value used to be added to the
exponentially-decayed scheduler history value p_estcpu, in _addition_ to
being incorporated directly (with greater weight) into the priority
calculation.
At first glance, it appears to be a pointless increase of 1/8 the nice
effect (pri = p_estcpu/4 + nice*2), but it's actually at least 3x that
because it will ramp up linearly but be decayed only exponentially, thus
converging to an additional .75 nice for a load average of one. I killed
this; it makes the behavior hard to control and almost impossible to
analyze, and the effect (roughly nothing for the first second, then
somewhat increased niceness after three seconds or more, depending on
load average) is pointless.
=== Other bugs === hz -> profhz in the p_pctcpu = f(p_cpticks) calculation.
Collect scheduler functionality. Try to put each abstraction in just one
place.
Also:
* Do the boundary check when creating a new region as well.
* If we crossed the boundary, don't just throw away the region; lop off
the beginning and see if we still fit (see the sketch below).
SB16 is now fully functional on the Alpha.
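A sketch of the boundary trimming described in the two items above
(hypothetical names; returns 0 and updates *startp if the region can be
made to fit without crossing a boundary line):

    static int
    fit_region(u_long *startp, u_long size, u_long boundary, u_long end)
    {
        u_long start = *startp;

        if (boundary != 0) {
            /* First boundary line strictly after start. */
            u_long bline = (start / boundary + 1) * boundary;

            if (start + size > bline) {    /* we crossed it */
                if (bline + size - 1 > end)
                    return (1);    /* no fit even after trimming */
                start = bline;     /* lop off the beginning */
            }
        }
        *startp = start;
        return (0);
    }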
arguments passed to accept(), bind(), connect(), getpeername(), getsockname(),
getsockopt(), recvfrom(), sendto() and sendmsg() unsigned, which also eliminates
a few casts.
* Reflect the (now) signedness of msg_iovlen, which necessitates the addition
of a few casts.
- no longer conditionalized
- when traced, charge time to real parent, not debugger
- make it clear for future rototillers that p_estcpu should be moved
to the "copy" region of struct proc.