Fix and document naming convention for vnode variables (always use
lvp/lvpp and uvp/uvpp instead of a hash of cvp, vpp, dvpp, pvp, pvpp).
Delete old stale #if 0'ed code at the end.
Change error path code in getcwd_getcache() slightly (merge common
cleanup code; shouldn't affect behavior any).
mp->mnt_flags & MNT_MWAIT is replaced by mp->mnt_wcnt, and a new mount
flag MNT_GONE is created (reusing the same bit).
In insmntque(), add a DIAGNOSTIC check to fail if the filesystem the
vnode is being moved to is in the process of being unmounted.
getnewvnode() now protects the list of vnodes active on mp with
vfs_busy()/vfs_unbusy().
To avoid generating spurious errors during a doomed unmount, change
the "wait for unmount to finish" protocol between dounmount() and
vfs_busy(). In vfs_busy(), instead of only sleeping once, sleep until
either MNT_UNMOUNT is clear or MNT_GONE is set; also, maintain a count
of waiters in mp->mnt_wcnt so that dounmount() knows when it's safe to
free mp.
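For illustration, the new wait loop in vfs_busy() looks roughly like
this (a sketch; the wait channel and surrounding error handling are
abbreviated):

    while (mp->mnt_flags & MNT_UNMOUNT) {
        int gone;

        mp->mnt_wcnt++;
        tsleep((caddr_t)mp, PVFS, "vfs_busy", 0);
        /*
         * Read MNT_GONE before dropping mnt_wcnt: once the count
         * drains, dounmount() may free mp out from under us.
         */
        gone = mp->mnt_flags & MNT_GONE;
        mp->mnt_wcnt--;
        if (gone)
            return (ENOENT);    /* unmount succeeded; mp is dead */
        /* otherwise the unmount failed; recheck MNT_UNMOUNT */
    }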
tested by running a "while :; do mount /d1; umount -f /d1; done" loop
against multiple find(1) processes.
(Sorry for a big commit, I can't separate this into several pieces...)
Pls check sys/netinet6/TODO and sys/netinet6/IMPLEMENTATION for details.
- sys/kern: do not assume a single mbuf; accept chained mbufs when passing
data from userland to kernel (or the other way round).
- "midway" ATM card: ATM PVC pseudo device support, like those done in ALTQ
package (ftp://ftp.csl.sony.co.jp/pub/kjc/).
- sys/netinet/tcp*: IPv4/v6 dual stack tcp support.
- sys/netinet/{ip6,icmp6}.h, sys/net/pfkeyv2.h: IETF documents assume these
files to be there, so we patch them up.
- sys/netinet: IPsec additions are here and there.
- sys/netinet6/*: most of IPv6 code sits here.
- sys/netkey: IPsec key management code
- dev/pci/pcidevs: regen
In my understanding no code here is subject to export control so it
should be safe.
listen/accept (PR_LISTEN flag in protosw) and detect obvious faults in
the parameters passed. It is still possible for the address used for
copying the socket information to become invalid between that check and
the copyout, so if the copyout fails, close the connection's allocated fd;
that way we can return EFAULT without leaving the application with an fd
it doesn't know about. Ideally we'd be able to queue the connection back
up so a later accept could retrieve it, but unfortunately that's not
possible.
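The failure path then looks roughly like this (a sketch; the use of
fdrelease() here is illustrative, not necessarily the committed code):

    error = copyout(name, (caddr_t)SCARG(uap, name), namelen);
    if (error) {
        /*
         * The new descriptor is already allocated; close it so
         * returning EFAULT doesn't leave the application holding
         * an fd it never heard about.
         */
        (void)fdrelease(p, newfd);
        return (error);
    }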
which use uvm_vslock() should now test the return value. If it's not
KERN_SUCCESS, wiring the pages failed, so the operation which is using
uvm_vslock() should error out.
XXX We currently just EFAULT a failed uvm_vslock(). We may want to do
more about translating error codes in the future.
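Callers now follow this pattern (a sketch; the argument list is
abbreviated):

    if (uvm_vslock(p, addr, len) != KERN_SUCCESS)
        return (EFAULT);    /* wiring failed; see XXX above */
    /* ... operate on the now-wired user pages ... */
    uvm_vsunlock(p, addr, len);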
the process (i.e. pre-Reno behavior). The 4.4BSD behavior (introduced
in Reno) caused transient errors to stick incorrectly.
This is from PR #7640 (Havard Eidnes), cross-checked w/ FreeBSD, where
Bill Fenner committed the same fix (as described in a comment in the
Vat sources, by Van Jacobson).
looking up a kernel address, check to see if the address is on this
"interrupt-safe" list. If so, return failure immediately. This prevents
a locking screw-up if a page fault is taken on an interrupt-safe map in or
out of interrupt context.
has PAGEABLE and INTRSAFE flags. PAGEABLE now really means "pageable",
not "allocate vm_map_entry's from non-static pool", so update all map
creations to reflect that. INTRSAFE maps are maps that are used in
interrupt context (e.g. kmem_map, mb_map), and thus use the static
map entry pool (XXX as does kernel_map, for now). This will eventually
change, as will the way these maps are locked.
vslocking here?! copyout() on its own seems to suffice just about everywhere
else, and it's not like the process is going to exit; it's in a system
call!
serious race condition in sosend(). Upon closer inspection, the appropriate
flags are checked within splsoftnet() for soreceive(), so no change needed
there. Also a little KNFing in sosend().
the child inherits the stack pointer from the parent (traditional
behavior). Like the signal stack, the stack area is specified as
a low address and a size; machine-dependent code accounts for stack
direction.
This is required for clone(2).
parent, specified at fork time. Specify a new flag to wait4(2), WALTSIG,
to wait for processes which use an alternate exit signal.
This is required for clone(2).
-a subregion start was ignored if all previous allocations were before
the subregion, reported by Lennart Augustsson in PR kern/7539
-an existing allocation which overlaps the beginning of the subregion
was ignored (i.e. overlapped) if it was not the last allocation
given buffer, and if necessary, reducing the display width of the
number to fit in the buffer by increasing the units (from kilobytes
(2^10) through to exabytes (2^60)).
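A self-contained userland model of the scaling (names hypothetical, not
the kernel interface):

    #include <stdio.h>

    int
    fmt_bytes(char *buf, size_t len, unsigned long long bytes)
    {
        static const char prefix[] = "KMGTPE";    /* 2^10 .. 2^60 */
        unsigned long long n = bytes;
        int i, r;

        for (i = 0; i < 6; i++) {
            n /= 1024;    /* start at kilobytes, then climb */
            r = snprintf(buf, len, "%llu%c", n, prefix[i]);
            if (r >= 0 && (size_t)r < len)
                return (0);    /* rendered number fit the buffer */
        }
        return (-1);    /* still too wide even in exabytes */
    }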
count is 0, wait for use count to drain before finishing the close.
This is necessary in order for multiple processes to safely share file
descriptor tables.
* add arguments describing the vnode and ecoff header of the executable
being set up to the [onz]magic setup functions.
* export the stack setup function and the [onz]magic setup functions.
* call the MD ecoff hook _before_ the [onz]magic and stack setup
functions, and bail out early if the MD hook sets up vmcmds.
- Initialize mbpool and mclpool with msize and mclbytes, respectively,
so that those values may be patched and have an actual effect on the
next system reboot.
- Set low water marks on mbpool (default: 16) and mclpool (default: 8).
This should be of great help for diskless systems, which need to allocate
mbufs in order to clean dirty pages; the low water marks increase the
chances of this being possible to do in memory starvation situations.
- Add support for getting/setting some mbuf-related parameters via sysctl.
* msize and mclbytes (read-only)
* nmbclusters (read-only unless the platform has direct-mapped pool pages,
in which case the value can be increased).
* mblowat and mcllowat (read/write)
conf/param.c, and move the initialisation of the sb_max
variable from kern/uipc_socket2.c to conf/param.c. Now
everything that includes sys/socketvar.h doesn't get
recompiled when SB_MAX's value changes.
The problem is that if "sl" is a symbolic link, a lookup on "sl/"
will be flagged as the last component. Thus VOP_LOOKUP will lock
the parent directory if LOCKPARENT is set. In order for the symbolic
link to be resolved, this lock needs to be released. namei() would
test for this by checking if ni_pathlen == 1, which it wouldn't as
"/" is left in the name, and namei() would not unlock the parent.
The next call to lookup() to resolve the symbolic link would fail
as the parent was still locked.
initialized). This lock also protects the "next drain candidate" pointer.
XXX There is still one locking protocol problem, which should not be
a problem in practice, but is still marked as an issue in the code anyhow.
a data structure after it was freed. This wasn't actually a problem,
and the change caused the wrong pool_item_header to be freed
in the non-PR_PHINPAGE case.
Call configure() directly immediately after config_init().
This causes autoconfiguration to happen at the same time as before, but
creates some kernel submaps earlier, so that e.g. mbinit() can now
allocate memory.
- Protect userspace from unnecessary header inclusions (as noted on
current-users).
- Some const poisoning.
- GREATLY simplify the locking protocol, and fix potential deadlock
scenarios. In particular, assume that the back-end page allocator
provides its own locking mechanism (this is currently true for all
such allocators in the NetBSD kernel). Doing so allows us to simply
use one spin lock for serialized access to all r/w members of the pool
descriptor. The spin lock is released before calling the back-end
allocator, and re-acquired upon return from it.
- Fix a problem in pr_rmpage() where a data structure was referenced
after it was freed.
- Minor tweak to page management. Migrate both idle and empty pages
to the end of the page list. As soon as a page becomes un-empty
(by a pool_put()), place it at the head of the page list, and set
curpage to point to it. This reduces fragmentation as well as the
time required to find a non-empty page as soon as curpage becomes
empty again.
- Use mono_time throughout, and protect access to it w/ splclock().
- In pool_reclaim(), if freeing an idle page would reduce the number
of allocatable items to below the low water mark, don't.
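  The pool_reclaim() guard is essentially (a sketch; field names follow
  the pool descriptor):

    /*
     * Freeing this idle page would take the item count below
     * the low water mark, so keep it.
     */
    if (pp->pr_nitems - pp->pr_itemsperpage < pp->pr_minitems)
        break;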
NMBCLUSTERS for the mbuf cluster pool. On platforms which use direct-mapped
segments for pool pages (MIPS and Alpha), this makes NMBCLUSTERS actually
meaningful (such ports don't even allocate mb_map, as it is not used to
map mbuf cluster pages).
Improve the message logged at a maximum rate of once per second. The
new message: "WARNING: mclpool limit reached; increase NMBCLUSTERS".
In the back-end pool page allocator, remove the message about mb_map
being full. The message was not necessarily correct as the allocator
may have been starved for pages, rather than for space in the map. Also,
the hard limit on the mbuf cluster pool will be reached before the map
fills (the last cluster will always fit into the map), so the message
is redundant.
Add a comment in mbinit() about considering setting low water marks on
the mbuf and mbuf cluster pools.
- Add support for hard limits, with optional rate-limited logging of
a warning message when the pool limit is reached. (This will be used
to fix a bug in mbuf cluster allocation on the MIPS and Alpha ports.)
- Fix some locking protocol errors. This required splitting pr_flags
into pr_flags (which is protected by the spin lock) and pr_roflags (which
are `read only' flags, set when the pool is initialized, and never changed
again; these do not need to be protected by a mutex).
- Make the low water support actually mean something. When a low water
mark is set, add free items to the pool until the low water mark is
reached. When an item allocation causes the number of free items to
drop below the low water mark, make the pool catch up to it (see the
sketch after this list). This can make the pool allocator more useful
for several applications (e.g. pmap `pv entry' management) and more
robust for others (e.g. mbuf and mbuf cluster allocation, so that the
pagedaemon can use NFS to clean pages on diskless systems without
completely running dry on buffers to receive packets in during extreme
memory shortages).
- Add a comment where we sleep waiting for more pages for the back-end
page allocator. Specifically, instead of sleeping potentially forever,
perhaps we should just wake up once a second to try allocating a page
again. XXX Revisit this soon.
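A sketch of the low-water catch-up loop referenced above (allocator
hook names assumed):

    static int
    pool_catchup(struct pool *pp)
    {
        caddr_t cp;

        while (pp->pr_nitems < pp->pr_minitems) {
            /*
             * Pull a page from the back-end allocator and
             * carve it into free items.
             */
            cp = (*pp->pr_alloc)(pp->pr_pagesz, 0 /* flags */,
                pp->pr_mtype);
            if (cp == NULL)
                return (ENOMEM);
            pool_prime_page(pp, cp);
        }
        return (0);
    }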
and system call now just return EFAULT). A complete fix will
presumably have to wait for UBC and/or for vnode locking protocols to
be revamped to allow use of shared locks.
same uid or by root.
This code is from FreeBSD. (Whilst it was originally obtained from OpenBSD,
FreeBSD fixed it to work with multicast. To quote the commit message:
- Don't bother checking for conflicting sockets if we're binding to a
multicast address.
- Don't return an error if we're binding to INADDR_ANY, the conflicting
socket is bound to INADDR_ANY, and the conflicting socket has
SO_REUSEPORT set.
)
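The resulting rules, as a standalone model (names hypothetical; the
real checks live in in_pcbbind()):

    #include <stdbool.h>

    struct bindreq { bool multicast, wildcard, reuseport; };

    static bool
    bind_conflicts(const struct bindreq *req, const struct bindreq *old)
    {
        if (req->multicast)
            return false;    /* skip the conflict check entirely */
        if (req->wildcard && old->wildcard && old->reuseport)
            return false;    /* both INADDR_ANY, SO_REUSEPORT set */
        return true;         /* otherwise EADDRINUSE */
    }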
directories which aren't under the recipient's root.
Clean up of many error conditions involving descriptor passing, to
eliminate infinite loops, panics, premature garbage collection of
sockets, and descriptor leaks:
- Avoid letting unp_gc() see descriptors with a refcount of zero by
removing them from the socket's queue before releasing them.
- Avoid socket leak in PRU_ABORT (this will also gc descriptors queued
on a not-yet accepted socket when the accepting socket goes away).
- Put in block comment explaining how unp_gc() should work.
- Correctly manage unp_defer count so we don't get stuck in an infinite
loop with nothing to do.
- Don't tie MARK and DEFER bits so closely together.
- Mark descriptors queued on not-yet-accepted sockets as well.
- Don't call sorflush() on a non-socket; it doesn't work very well.
- Deal with discard of NULL file pointer.
- Hopefully cause GC to converge faster by only deferring sockets in
unp_mark().
limit into account when checking against the limit; fdp->fd_nfiles may
be greater than the current descriptor limit, and there may be space
in fdp->fd_ofiles beyond the limit. If we say it's available,
unp_externalize will get confused and panic when fdalloc fails.
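The corrected check looks roughly like this (a sketch of fdavail()):

    int
    fdavail(struct proc *p, int n)
    {
        struct filedesc *fdp = p->p_fd;
        struct file **fpp;
        int i, lim;

        lim = min((int)p->p_rlimit[RLIMIT_NOFILE].rlim_cur, maxfiles);
        if ((i = lim - fdp->fd_nfiles) > 0 && (n -= i) <= 0)
            return (1);
        fpp = &fdp->fd_ofiles[fdp->fd_freefile];
        /* scan only up to the limit, not all of fd_nfiles */
        for (i = min(lim, fdp->fd_nfiles) - fdp->fd_freefile;
            --i >= 0; fpp++)
            if (*fpp == NULL && --n <= 0)
                return (1);
        return (0);
    }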
deadlock in VOP_FSYNC() if the unreferenced vnode picked for
reclamation happened to be stacked on top of a vnode the process
already had locked. This could happen if the same filesystem was
accessed both through a union mount and directly; it seemed to happen
most frequently when the direct access was through NFS.
Avoid this deadlock by changing vinvalbuf to pass a new FSYNC_RECLAIM
flag bit to VOP_FSYNC() to indicate that a reclaim is in progress and
only a `shallow' fsync is necessary.
Do nothing in *_fsync() in umapfs, nullfs, and unionfs when
FSYNC_RECLAIM is set; the underlying vnodes will shortly be released
in *_reclaim and may be reclaimed (and fsync'ed) later.
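In the layered filesystems the change amounts to an early return (a
sketch; the pass-through helper named here is hypothetical):

    int
    union_fsync(void *v)
    {
        struct vop_fsync_args *ap = v;

        if (ap->a_flags & FSYNC_RECLAIM) {
            /*
             * Shallow fsync: the underlying vnodes will shortly
             * be released in union_reclaim() and may be
             * reclaimed (and fsync'ed) later.
             */
            return (0);
        }
        /* ... otherwise do the full fsync pass as before ... */
        return (union_fsync_internal(ap));    /* hypothetical helper */
    }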
was changes to comments only, but..)
Build vfs_getcwd.c as standard part of kernel.
Add implementation of fchroot(), since two emulations already had it.
Call vn_isunder() in fchdir(), chroot(), and fchroot() to make it harder
to escape chroot().
Add kernel implementation of getcwd() which uses this cache, falling
back to reading the filesystem on a cache miss.
Along for the ride: add new VOP_FSYNC flag FSYNC_RECLAIM indicating
that a reclaim is being done, so only a "shallow" fsync is needed.
some things (e.g. unionfs) may depend on it. It's currently ok
for vnodecovered to be set already; it's not for v_mountedhere in
the vnode, though.
From John Darrow.
XXX should probably just extend VFS_MOUNT to take the vnode pointer as
an argument.
since the lock may be taken again. This was the intention of the CANRECURSE
lock already there, but it didn't work.
Only fill in the vnode<->mountpoint links (mountedhere and vnodecovered)
after VFS_MOUNT has returned successfully. It could happen that something
called from VFS_MOUNT mistook the vnode for one already successfully
mounted on because of this.
could be done in one of 2 ways:
* call lk_init with LK_CANRECURSE, resulting in a lock that
always can be used recursively.
* call lockmgr with LK_CANRECURSE, meaning that it's ok if this
lock is already held by us.
Sometimes we need a locking type that says: take this lock now, exclusively,
but while I am holding it, I may go through a code path which could attempt
to get the lock again, and which is unaware that the lock might already
be taken. Implement LK_SETRECURSE for this purpose. Assume that locks and
unlocks come in matching pairs (they should), and check for this 'level'
using SETRECURSE locks.
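Usage, sketched (lock name illustrative):

    /*
     * Take the lock exclusively, but allow our own nested
     * lockmgr() calls while we hold it.
     */
    lockmgr(&mp->mnt_lock, LK_EXCLUSIVE | LK_SETRECURSE, NULL);
    /* ... code that may itself lock mnt_lock, unaware we hold it ... */
    lockmgr(&mp->mnt_lock, LK_RELEASE, NULL);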
to pass down a locked node. Modify union_copyup() to call VOP_CLOSE on
locked nodes.
Also fix a bug in union_copyup() where a lock on the lower vnode would
only be released if VOP_OPEN didn't fail.
* fix the ancient nice(1) bug, where nice +20 processes incorrectly
steal 10-20% of the CPU (or even more, depending on load average)
* provide a new schedclk() mechanism at a new clock at schedhz, so high
platform hz values don't cause nice +0 processes to look like they are
niced
* change the algorithm slightly, and reorganize the code a lot
* fix percent-CPU calculation bugs, and eliminate some no-op code
=== nice bug === Correctly divide the scheduler queues between niced and
compute-bound processes. The current nice weight of two (sort of, see
`algorithm change' below) neatly divides the USRPRI queues in half; this
should have been used to clip p_estcpu, instead of UCHAR_MAX. Besides
being the wrong amount, clipping an unsigned char to UCHAR_MAX is a no-op,
and it was done after decay_cpu() which can only _reduce_ the value. It
has to be kept <= NICE_WEIGHT * PRIO_MAX - PPQ or processes can
scheduler-penalize themselves onto the same queue as nice +20 processes.
(Or even a higher one.)
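The clip therefore becomes (a sketch; PPQ is priorities-per-queue):

    /*
     * Clip p_estcpu so a process can't scheduler-penalize itself
     * onto the nice +20 queues; the old UCHAR_MAX clip was a no-op.
     */
    #define ESTCPULIM(e) min((e), NICE_WEIGHT * PRIO_MAX - PPQ)

    p->p_estcpu = ESTCPULIM(p->p_estcpu + 1);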
=== New schedclk() mechanism === Some platforms should be cutting down
stathz before hitting the scheduler, since the scheduler algorithm only
works right in the vicinity of 64 Hz. Rather than prescale hz, then scale
back and forth by 4 every time p_estcpu is touched (each occurrence an
abstraction violation), use p_estcpu without scaling and require schedhz
to be generated directly at the right frequency. Use a default of stathz
(well, actually, profhz) / 4, so nothing changes unless a platform defines
schedhz and a new clock. Define these for alpha, where hz==1024, and nice
was totally broken.
=== Algorithm change === The nice value used to be added to the
exponentially-decayed scheduler history value p_estcpu, in _addition_ to
being incorporated directly (with greater weight) into the priority
calculation. At first glance, it appears to be a pointless increase of
1/8 the nice effect (pri = p_estcpu/4 + nice*2), but it's actually at
least 3x that because it will ramp up linearly but be decayed only
exponentially, thus converging to an additional .75 nice for a load
average of one. I killed this; it makes the behavior hard to control,
almost impossible to analyze, and the effect (~nothing at all for the
first second, then somewhat increased niceness after three seconds or
more, depending on load average) pointless.
=== Other bugs === hz -> profhz in the p_pctcpu = f(p_cpticks) calculation.
Collect scheduler functionality. Try to put each abstraction in just one
place.
Also:
* Do the boundary check when creating a new region as well.
* If we crossed the boundary, don't just throw away the region; lop off the
beginning and see if we still fit.
SB16 is now fully functional on the Alpha.
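The boundary fixup in the second item is, roughly (a sketch; names
loosely after subr_extent.c, not the committed code):

    /* would this candidate region cross a boundary line? */
    if (boundary != 0 &&
        (newstart / boundary) != ((newstart + size - 1) / boundary)) {
        /* lop off the beginning: restart at the next boundary */
        newstart = ((newstart / boundary) + 1) * boundary;
        if (newstart + (size - 1) > subend)
            break;    /* no longer fits in the subregion */
    }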
arguments passed to accept(), bind(), connect(), getpeername(), getsockname(),
getsockopt(), recvfrom(), sendto() and sendmsg() unsigned, which also eliminates
a few casts.
* Reflect the (now) unsignedness of msg_iovlen, which necessitates the addition
of a few casts.
- no longer conditionalized
- when traced, charge time to real parent, not debugger
- make it clear for future rototillers that p_estcpu should be moved
to the "copy" region of struct proc.
parents for children's p_estcpu. I think this problem has always been there.
It's particularly noticeable with X because the server builds up non-trivial
CPU, and hence, non-trivial p_estcpu scheduler penalty. The repeatedly
forked children were always starting from scratch and receiving a scheduler
preference.
of processes:
- Don't initialize rlim_max to RLIM_INFINITY. The limits for those should
be maxfiles and maxproc respectively. Programs expect getrlimit to
return reasonable values, so that they can allocate structures (for
example jdk does this).
- Don't initialize rlim_cur to NOFILE and MAXUPRC respectively, but to
min(NOFILE, maxfiles) and min(MAXUPRC, maxproc) respectively.
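For the initial limit structure this amounts to something like (a
sketch, after the limit0 setup in init_main.c):

    limit0.pl_rlimit[RLIMIT_NOFILE].rlim_cur = min(NOFILE, maxfiles);
    limit0.pl_rlimit[RLIMIT_NOFILE].rlim_max = maxfiles;
    limit0.pl_rlimit[RLIMIT_NPROC].rlim_cur = min(MAXUPRC, maxproc);
    limit0.pl_rlimit[RLIMIT_NPROC].rlim_max = maxproc;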
data after the cmsghdr when accessing internalized SCM_RIGHTS messages
(i.e. array of struct file *s). The historic interface does not align
the externalized SCM_RIGHTS messages (i.e. array of ints).
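Concretely (a sketch): the internalized form is addressed via the
aligned CMSG_DATA() offset, while the historic externalized form packs
the ints directly after the header:

    /* internalized SCM_RIGHTS: aligned array of struct file *s */
    struct file **rp = (struct file **)CMSG_DATA(cm);

    /* externalized (historic) form: ints right after the cmsghdr */
    int *fdp = (int *)(cm + 1);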
date: 1998/08/01 01:47:24; author: thorpej; state: Exp; lines: +18 -8
Don't call the protocol drain routines if how == M_NOWAIT, which typically
means we're in interrupt context. Since we can be called from a network
hardware interrupt, we could corrupt the protocol queues if we try to
drain them at that time.
The problem has been addressed by letting the drain'able protocols use
a locking scheme to prevent queue corruption.
Initialize pool hash table with PR_HASHTABSIZE (i.e., 8) LIST_INIT()s
instead of one memset().
Only check for page != ph->ph_page if PR_PHINPAGE is set (in pool_chk()).
Print pool base pointer when reporting page inconsistency in pool_chk().
set SS_MORETOCOME as a hint to the lower layer that more data is coming
on the next iteration of the loop. Clear the flag after the PRU_SEND
call.
Suggested by Justin Walker <justin@apple.com> on the freebsd-net
mailing list.
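In the sosend() loop this looks roughly like (a sketch):

    if (resid > 0)
        so->so_state |= SS_MORETOCOME;    /* hint for lower layer */
    error = (*so->so_proto->pr_usrreq)(so, PRU_SEND, top, addr,
        control, p);
    if (resid > 0)
        so->so_state &= ~SS_MORETOCOME;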
There are two reasons for this:
* We should be able to pass file descriptors without sending any data.
* We could send zero-length iovecs anyway (but we shouldn't have to do this).
Also, msg_iovlen is already a u_int, so delete a bunch of casts.
by Ken Hornstein and myself.
Add flags to struct device, and define one as "active". Devices are
initially active from config_attach(). Their active state may be changed
via config_activate() and config_deactivate().
These new functions assume that the device being manipulated will recursively
perform the action on its children.
Together, config_deactivate() and config_detach() may be used to implement
interrupt-driven device detachment. config_deactivate() will take care of
things that need to be performed at interrupt time, and config_detach()
(which must run in a valid thread context) finishes the job, which may
block.
kthread_create(). Implement kthread_exit() (causes a thread to exit).
Set P_NOCLDWAIT on kernel threads, which will cause any of their children
to be reparented to init(8) (which is already prepared to wait out orphaned
processes).
in the future):
- New function, fork_kthread(), takes entry point, argument for entry point,
and comment for new proc. May be called by any context, will fork the
thread from proc0 (requires slight changes to cpu_fork()).
- cpu_set_kpc() now takes a third argument, a void *arg to pass to the
thread entry point. Thread entry point now takes void * instead of
struct proc *.
- Create the pagedaemon and reaper kernel threads using fork_kthread().
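Creating the pagedaemon then reduces to something like (a sketch,
assuming a zero-on-success return):

    /* fork the pagedaemon from proc0 */
    if (fork_kthread(uvm_pageout, NULL, "pagedaemon"))
        panic("fork_kthread: pagedaemon");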
DDB_ONPANIC. Lets the user ignore breaks but enter DDB on panic. Intended
for machines where debug-on-panic is useful but DDB entry is not
(public-access consoles, or terminal servers which send spurious breaks).
Add new ddb hook, console_debugger(), which decides whether or not to
ignore console ddb requests. Console drivers should be updated to call
console_debugger(), not Debugger(), in response to serial-console
break or ddb keyboard sequence.
* Change the usage of B_DONE so that it is only set when a buffer is in sync
with the data on disk.
* If a buffer is being waited for, don't put it on the age queue.
* Make sure to clear B_DONE when pages are stolen from a buffer.
* Make sure to clear B_CACHE after each use.
* If we find a buffer for the block we want with valid data, but it is too
small, panic. (This isn't supposed to happen.)
Fixes potential file corruption problems with clustering.