conf/param.c, and move the initialisation of the sb_max
variable from kern/uipc_socket2.c to conf/param.c. Now
everything that includes sys/socketvar.h doesn't get
recompiled when SB_MAX's value changes.
The problem is that if "sl" is a symbolic link, a lookup on "sl/"
will be flagged as the last component. Thus VOP_LOOKUP will lock
the parent directory if LOCKPARENT is set. In order for the symbolic
link to be resolved, this lock needs to be released. namei() would
test for this by checking if ni_pathlen == 1, which it wouldn't as
"/" is left in the name, and namei() would not unlock the parent.
The next call to lookup() to resolve the symbolic link would fail
as the parent was still locked.
initialized). This lock also protects the "next drain candidate" pointer.
XXX There is still one locking protocol problem, which should not be
a problem in practice, but is still marked as an issue in the code anyhow.
a data structure after it was freed. This wasn't actually a problem,
and the change caused the wrong pool_item_header to be freed
in the non-PR_PHINPAGE case.
Call configure() directly immediately after config_init().
This causes autoconfiguration to happen at the same time as before, but
creates some kernel submaps earlier, so that e.g. mbinit() can now
allocate memory.
- Protect userspace from unnecessary header inclusions (as noted on
current-users).
- Some const poisoning.
- GREATLY simplify the locking protocol, and fix potential deadlock
scenarios. In particular, assume that the back-end page allocator
provides its own locking mechanism (this is currently true for all
such allocators in the NetBSD kernel). Doing so allows us to simply
use one spin lock for serialized access to all r/w members of the pool
descriptor. The spin lock is released before calling the back-end
allocator, and re-acquired upon return from it.
- Fix a problem in pr_rmpage() where a data structure was referenced
after it was freed.
- Minor tweak to page management. Migrate both idle and empty pages
to the end of the page list. As soon as a page becomes un-empty
(by a pool_put()), place it at the head of the page list, and set
curpage to point to it. This reduces fragmentation as well as the
time required to find a non-empty page as soon as curpage becomes
empty again.
- Use mono_time throughout, and protect access to it w/ splclock().
- In pool_reclaim(), if freeing an idle page would reduce the number
of allocatable items to below the low water mark, don't.
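A minimal sketch of the simplified protocol from the first item above, under the
assumption of 1998-era names (pr_slock, pr_alloc, pr_pagesz and pr_mtype are
illustrative here, not necessarily the committed field names):

    simple_lock(&pp->pr_slock);
    /* ... examine and update the r/w members of the pool descriptor ... */
    simple_unlock(&pp->pr_slock);       /* drop before calling the back end */
    cp = (*pp->pr_alloc)(pp->pr_pagesz, flags, pp->pr_mtype);
    simple_lock(&pp->pr_slock);         /* re-acquire upon return */
    /* ... record the new page, or the allocation failure ... */
    simple_unlock(&pp->pr_slock);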
NMBCLUSTERS for the mbuf cluster pool. On platforms which use direct-mapped
segments for pool pages (MIPS and Alpha), this makes NMBCLUSTERS actually
meaningful (such ports don't even allocate mb_map, as it is not used to
map mbuf cluster pages).
Improve the message logged at a maximum rate of once per second. The
new message: "WARNING: mclpool limit reached; increase NMBCLUSTERS".
In the back-end pool page allocator, remove the message about mb_map
being full. The message was not necessarily correct as the allocator
may have been starved for pages, rather than for space in the map. Also,
the hard limit on the mbuf cluster pool will be reached before the map
fills (the last cluster will always fit into the map), so the message
is redundant.
Add a comment in mbinit() about considering setting low water marks on
the mbuf and mbuf cluster pools.
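A hedged sketch of how mbinit() might tie these pieces together with the
pool(9) interfaces; the exact call site and arguments are illustrative:

    /* in mbinit(): cap the cluster pool and rate-limit the warning to 1/sec */
    pool_sethardlimit(&mclpool, nmbclusters,
        "WARNING: mclpool limit reached; increase NMBCLUSTERS", 1);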
- Add support for hard limits, with optional rate-limited logging of
a warning message when the pool limit is reached. (This will be used
to fix a bug in mbuf cluster allocation on the MIPS and Alpha ports.)
- Fix some locking protocol errors. This required splitting pr_flags
into pr_flags (which is protected by the spin lock) and pr_roflags (which
are `read only' flags, set when the pool is initialized, and never changed
again; these do not need to be protected by a mutex).
- Make the low water support actually mean something. When a low water
mark is set, add free items to the pool until the low water mark is
reached. When an item allocation causes the number of free items to
drop below the low water mark, make the pool catch up to it. This can
make the pool allocator more useful for several applications (e.g.
pmap `pv entry' management) and more robust for others (e.g. mbuf
and mbuf cluster allocation, so that the pagedaemon can use NFS to clean
pages on diskless systems without completely running dry on buffers to
receive packets in during extreme memory shortages); see the usage
sketch after this list.
- Add a comment where we sleep waiting for more pages for the back-end
page allocator. Specifically, instead of sleeping potentially forever,
perhaps we should just wake up once a second to try allocating a page
again. XXX Revisit this soon.
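The usage sketch referred to above: with pool_setlowat(), a subsystem that
must be able to allocate from interrupt context can keep a reserve primed
(the pool and count here are illustrative):

    /*
     * Keep at least 512 free mbuf clusters on hand: the pool is primed up
     * to the mark, and any pool_get() that drops the free count below it
     * makes the pool catch back up.
     */
    pool_setlowat(&mclpool, 512);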
and system call now just return EFAULT). A complete fix will
presumably have to wait for UBC and/or for vnode locking protocols to
be revamped to allow use of shared locks.
same uid or by root.
This code is from FreeBSD. (Whilst it was originally obtained from OpenBSD,
FreeBSD fixed it to work with multicast. To quote the commit message:
- Don't bother checking for conflicting sockets if we're binding to a
multicast address.
- Don't return an error if we're binding to INADDR_ANY, the conflicting
socket is bound to INADDR_ANY, and the conflicting socket has
SO_REUSEPORT set.
)
directories which aren't under the recipient's root.
Clean up of many error conditions involving descriptor passing, to
eliminate infinite loops, panics, premature garbage collection of
sockets, and descriptor leaks:
- Avoid letting unp_gc() see descriptors with a refcount of zero by
removing them from the socket's queue before releasing them.
- Avoid socket leak in PRU_ABORT (this will also gc descriptors queued
on a not-yet accepted socket when the accepting socket goes away).
- Put in block comment explaining how unp_gc() should work.
- Correctly manage unp_defer count so we don't get stuck in an infinite
loop with nothing to do.
- Don't tie MARK and DEFER bits so closely together.
- Mark descriptors queued on not-yet-accepted sockets as well.
- Don't call sorflush on a non-socket; it doesn't work very well.
- Deal with discard of NULL file pointer.
- Hopefully cause GC to converge faster by only deferring sockets in
unp_mark().
limit into account when checking against the limit; fdp->fd_nfiles may
be greater than the current descriptor limit, and there may be space
in fdp->fd_ofiles beyond the limit. If we say it's available,
unp_externalize will get confused and panic when fdalloc fails.
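A hypothetical helper sketching the check described above (fdavail()-style
logic; the name and exact fields are assumptions, not the committed code):

    static int
    fd_slots_available(struct proc *p)
    {
        struct filedesc *fdp = p->p_fd;
        int lim;

        /* the ceiling is the current descriptor limit, not fd_nfiles */
        lim = (int)p->p_rlimit[RLIMIT_NOFILE].rlim_cur;
        if (lim > maxfiles)
            lim = maxfiles;
        return (fdp->fd_nfiles < lim ? fdp->fd_nfiles : lim);
    }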
deadlock in VOP_FSYNC() if the unreferenced vnode picked for
reclamation happened to be stacked on top of a vnode the process
already had locked. This could happen if the same filesystem was
accessed both through a union mount and directly; it seemed to happen
most frequently when the direct access was through NFS.
Avoid this deadlock by changing vinvalbuf to pass a new FSYNC_RECLAIM
flag bit to VOP_FSYNC() to indicate that a reclaim is in progress and
only a `shallow' fsync is necessary.
Do nothing in *_fsync() in umapfs, nullfs, and unionfs when
FSYNC_RECLAIM is set; the underlying vnodes will shortly be released
in *_reclaim and may be reclaimed (and fsync'ed) later.
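A hedged sketch of the layered-filesystem side of this; the function name,
the LOWERVP() accessor, and the argument layout are illustrative here, not
the committed code:

    int
    layer_fsync(void *v)
    {
        struct vop_fsync_args *ap = v;

        /*
         * A reclaim is in progress: only a shallow fsync is needed.  The
         * underlying vnode will be released in *_reclaim and can be
         * fsync'ed when it is itself reclaimed.
         */
        if (ap->a_flags & FSYNC_RECLAIM)
            return (0);

        /* otherwise pass the fsync through to the underlying vnode */
        return (VOP_FSYNC(LOWERVP(ap->a_vp), ap->a_cred, ap->a_flags, ap->a_p));
    }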
was changes to comments only, but..)
Build vfs_getcwd.c as standard part of kernel.
Add implementation of fchroot(), since two emulations already had it.
Call vn_isunder() in fchdir(), chroot(), and fchroot() to make it harder
to escape chroot().
Add kernel implementation of getcwd() which uses this cache, falling
back to reading the filesystem on a cache miss.
Along for the ride: add new VOP_FSYNC flag FSYNC_RECLAIM indicating
that a reclaim is being done, so only a "shallow" fsync is needed.
some things (e.g. unionfs) may depend on it. It's currently ok
for vnodecovered to be set already; it's not for v_mountedhere in
the vnode, though.
From John Darrow.
XXX should probably just extend VFS_MOUNT to take the vnode pointer as
an argument.
since the lock may be taken again. This was the intention of the CANRECURSE
lock already there, but it didn't work.
Only fill in the vnode<->mountpoint links (mountedhere and vnodecovered)
after VFS_MOUNT has returned successfully. Previously, something called
from VFS_MOUNT could mistake the vnode for one that was already successfully
mounted on.
could be done in one of 2 ways:
* call lk_init with LK_CANRECURSE, resulting in a lock that
always can be used recursively.
* call lockmgr with LK_CANRECURSE, meaning that it's ok if this
lock is already held by us.
Sometimes we need a locking type that says: take this lock now, exclusively,
but while I am holding it, I may go through a code path which could attempt
to get the lock again, and which is unaware that the lock might already
be taken. Implement LK_SETRECURSE for this purpose. Assume that locks and
unlocks come in matching pairs (they should), and check for this 'level'
using SETRECURSE locks.
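A hedged usage sketch of LK_SETRECURSE; the lock variable is illustrative:

    /*
     * Take the lock exclusively, but note that code called while it is
     * held may lock and unlock it again without knowing it is already
     * held; lock/unlock calls must still come in matching pairs.
     */
    lockmgr(&un->un_lock, LK_EXCLUSIVE | LK_SETRECURSE, NULL);

    /* ... code that may itself take and release un->un_lock ... */

    lockmgr(&un->un_lock, LK_RELEASE, NULL);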
to pass down a locked node. Modify union_copyup() to call VOP_CLOSE
on locked nodes.
Also fix a bug in union_copyup() where a lock on the lower vnode would
only be released if VOP_OPEN didn't fail.
* fix the ancient nice(1) bug, where nice +20 processes incorrectly
steal 10 - 20% of the CPU (or even more, depending on load average)
* provide a new schedclk() mechanism driven by a new clock at schedhz, so high
platform hz values don't cause nice +0 processes to look like they are
niced
* change the algorithm slightly, and reorganize the code a lot
* fix percent-CPU calculation bugs, and eliminate some no-op code
=== nice bug === Correctly divide the scheduler queues between niced and
compute-bound processes. The current nice weight of two (sort of, see
`algorithm change' below) neatly divides the USRPRI queues in half; this
should have been used to clip p_estcpu, instead of UCHAR_MAX. Besides
being the wrong amount, clipping an unsigned char to UCHAR_MAX is a no-op,
and it was done after decay_cpu() which can only _reduce_ the value. It
has to be kept <= NICE_WEIGHT * PRIO_MAX - PPQ or processes can
scheduler-penalize themselves onto the same queue as nice +20 processes.
(Or even a higher one.)
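A sketch of the clipping this implies, assuming a kern_synch.c-style helper
macro (the macro name is illustrative):

    /*
     * Clip p_estcpu so a process cannot scheduler-penalize itself onto the
     * same queue as nice +20 processes (or a higher one).
     */
    #define ESTCPULIM(e)    min((e), NICE_WEIGHT * PRIO_MAX - PPQ)

        /* e.g. when charging a tick, instead of clipping to UCHAR_MAX: */
        p->p_estcpu = ESTCPULIM(p->p_estcpu + 1);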
=== New schedclk() mechanism === Some platforms should be cutting down
stathz before hitting the scheduler, since the scheduler algorithm only
works right in the vicinity of 64 Hz. Rather than prescale hz, then scale
back and forth by 4 every time p_estcpu is touched (each occurrence an
abstraction violation), use p_estcpu without scaling and require schedhz
to be generated directly at the right frequency. Use a default stathz (well,
actually, profhz) / 4, so nothing changes unless a platform defines schedhz
and a new clock. Define these for alpha, where hz==1024, and nice was
totally broken.
=== Algorithm change === The nice value used to be added to the
exponentially-decayed scheduler history value p_estcpu, in _addition_ to
being incorporated directly (with greater weight) into the priority calculation.
At first glance, it appears to be a pointless increase of 1/8 the nice
effect (pri = p_estcpu/4 + nice*2), but it's actually at least 3x that
because it will ramp up linearly but be decayed only exponentially, thus
converging to an additional .75 nice for a loadaverage of one. I killed
this, it makes the behavior hard to control, almost impossible to analyze,
and the effect (~nothing at all for the first second, then somewhat increased
niceness after three seconds or more, depending on load average) pointless.
=== Other bugs === hz -> profhz in the p_pctcpu = f(p_cpticks) calculation.
Collect scheduler functionality. Try to put each abstraction in just one
place.
Also:
* Do the boundary check when creating a new region as well.
* If we crossed the boundary, don't just throw away the region; lop off the
beginning and see if we still fit.
SB16 is now fully functional on the Alpha.
arguments passed to accept(), bind(), connect(), getpeername(), getsockname(),
getsockopt(), recvfrom(), sendto() and sendmsg() unsigned, which also eliminates
a few casts.
* Reflect the (now) signedness of msg_iovlen, which necessitates the addition
of a few casts.
- no longer conditionalized
- when traced, charge time to real parent, not debugger
- make it clear for future rototillers that p_estcpu should be moved
to the "copy" region of struct proc.
parents for children's p_estcpu. I think this problem has always been there.
It's particularly noticeable with X because the server builds up non-trivial
CPU, and hence, non-trivial p_estcpu scheduler penalty. The repeatedly
forked children were always starting from scratch and receiving a scheduler
preference.
of processes:
- Don't initialize rlim_max to RLIM_INFINITY. The limits for those should
be maxfiles and maxproc respectively. Programs expect getrlimit to
return reasonable values, so that they can allocate structures (for
example jdk does this).
- Don't initialize rlim_cur to NOFILE and MAXUPRC respectively, but to
min(NOFILE, maxfiles) and min(MAXUPRC, maxproc) respectively.
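A hedged sketch of the initialization described above, in the style of the
limit0 setup in init_main.c (names are illustrative):

    /* descriptor and process limits, capped by the global maxima */
    limit0.pl_rlimit[RLIMIT_NOFILE].rlim_cur =
        (NOFILE > maxfiles) ? maxfiles : NOFILE;
    limit0.pl_rlimit[RLIMIT_NOFILE].rlim_max = maxfiles;
    limit0.pl_rlimit[RLIMIT_NPROC].rlim_cur =
        (MAXUPRC > maxproc) ? maxproc : MAXUPRC;
    limit0.pl_rlimit[RLIMIT_NPROC].rlim_max = maxproc;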
data after the cmsghdr when accessing internalized SCM_RIGHTS messages
(i.e. array of struct file *s). The historic interface does not align
the externalized SCM_RIGHTS messages (i.e. array of ints).
date: 1998/08/01 01:47:24; author: thorpej; state: Exp; lines: +18 -8
Don't call the protocol drain routines if how == M_NOWAIT, which typically
means we're in interrupt context. Since we can be called from a network
hardware interrupt, we could corrupt the protocol queues if we try to drain
them at that time.
The problem has been addressed by letting the drain'able protocols use
a locking scheme to prevent queue corruption.
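A hedged sketch of the check in m_reclaim()-style code (the loop over
struct domain is illustrative):

    struct domain *dp;
    struct protosw *pr;

    /* only walk the drain routines when we may not be in interrupt context */
    if (how == M_WAIT) {
        for (dp = domains; dp != NULL; dp = dp->dom_next)
            for (pr = dp->dom_protosw; pr < dp->dom_protoswNPROTOSW; pr++)
                if (pr->pr_drain != NULL)
                    (*pr->pr_drain)();
    }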
Initialize pool hash table with PR_HASHTABSIZE (i.e., 8) LIST_INIT()s
instead of one memset().
Only check for page != ph->ph_page if PR_PHINPAGE is set (in pool_chk()).
Print pool base pointer when reporting page inconsistency in pool_chk().
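A sketch of the first change above; pr_hashtab follows the pool descriptor
naming but is illustrative here:

    int i;

    /* one empty list per hash bucket, instead of a single memset() */
    for (i = 0; i < PR_HASHTABSIZE; i++)
        LIST_INIT(&pp->pr_hashtab[i]);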
set SS_MORETOCOME as a hint to the lower layer that more data is coming
on the next iteration of the loop. Clear the flag after the PRU_SEND
call.
Suggested by Justin Walker <justin@apple.com> on the freebsd-net
mailing list.
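A hedged sketch of where the hint sits in a sosend()-style transmit loop
(abridged, error handling omitted):

    /* tell the protocol that more data follows on the next iteration */
    if (resid > 0)
        so->so_state |= SS_MORETOCOME;
    error = (*so->so_proto->pr_usrreq)(so, PRU_SEND, top, addr, control, p);
    if (resid > 0)
        so->so_state &= ~SS_MORETOCOME;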
There are two reasons for this:
* We should be able to pass file descriptors without sending any data.
* We could send zero-length iovecs anyway (but we shouldn't have to do this).
Also, msg_iovlen is already a u_int, so delete a bunch of casts.
by Ken Hornstein and myself.
Add flags to struct device, and define one as "active". Devices are
initially active from config_attach(). Their active state may be changed
via config_activate() and config_deactivate().
These new functions assume that the device being manipulated will recursively
perform the action on its children.
Together, config_deactivate() and config_detach() may be used to implement
interrupt-driven device detachment. config_deactivate() will take care of
things that need to be performed at interrupt time, and config_detach()
(which must run in a valid thread context) finishes the job, which may
block.
kthread_create(). Implement kthread_exit() (causes a thread to exit).
Set P_NOCLDWAIT on kernel threads, which will cause any of their children
to be reparented to init(8) (which is already prepared to wait out orphaned
processes).
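A hedged usage sketch, assuming kthread_create() takes an entry point, an
argument, a struct proc pointer, and a name (the exact signature is an
assumption here):

    static struct proc *example_proc;

    /* entry point for the new kernel thread */
    static void
    example_thread(void *arg)
    {

        for (;;) {
            /* ... the thread's periodic work ... */
        }
        /* a thread that finishes would call kthread_exit(0) instead */
    }

    /* from some initialization routine: */
    if (kthread_create(example_thread, NULL, &example_proc, "example"))
        panic("could not create example kernel thread");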
in the future):
- New function, fork_kthread(), takes entry point, argument for entry point,
and comment for new proc. May be called by any context, will fork the
thread from proc0 (requires slight changes to cpu_fork()).
- cpu_set_kpc() now takes a third argument, a void *arg to pass to the
thread entry point. Thread entry point now takes void * instead of
struct proc *.
- Create the pagedaemon and reaper kernel threads using fork_kthread().
DDB_ONPANIC. Lets user ignore breaks but enter DDB on panic. Intended
for machines where debug on panic is useful, but DDB entry is not,
(public-access console, or terminal-servers which send spurious breaks)
Add new ddb hook, console_debugger(), which decides whether or not to
ignore console ddb requests. Console drivers should be updated to call
console_debugger(), not Debugger(), in response to serial-console
break or ddb keyboard sequence.
* Change the usage of B_DONE so that it is only set when a buffer is in sync
with the data on disk.
* If a buffer is being waited for, don't put it on the age queue.
* Make sure to clear B_DONE when pages are stolen from a buffer.
* Make sure to clear B_CACHE after each use.
* If we find a buffer for the block we want with valid data, but it is too
small, panic. (This isn't supposed to happen.)
Fixes potential file corruption problems with clustering.
* Increase the size of sigset_t to accommodate 128 signals -- adding new
versions of sys_setprocmask(), sys_sigaction(), sys_sigpending() and
sys_sigsuspend() to handle the changed arguments.
* Abstract the guts of sys_sigaltstack(), sys_setprocmask(), sys_sigaction(),
sys_sigpending() and sys_sigsuspend() into separate functions, and call them
from all the emulations rather than hard-coding everything. (Avoids using
the stackgap crap for these system calls.)
* Add a new flag (p_checksig) to indicate that a process may have signals
pending and userret() needs to do the full (slow) check.
* Eliminate SAS_ALTSTACK; it's exactly the inverse of SS_DISABLE.
* Correct emulation bugs with restoring SS_ONSTACK.
* Make the signal mask in the sigcontext always use the emulated mask format.
* Store signals internally in sigaction structures, rather than maintaining a
bunch of little sigsets for each SA_* bit.
* Keep track of where we put the signal trampoline, rather than figuring it out
in *_sendsig().
* Issue a warning when a non-emulated sigaction bit is observed.
* Add missing emulated signals, and a native SIGPWR (currently not used).
* Implement the `not reset when caught' semantics for relevant signals.
Note: Only code touched by the i386 port has been modified. Other ports and
emulations need to be updated.
number modulo the given alignment.
To do this the function extent_alloc_subregion() takes an additional `skew'
parameter. For compatibility's sake, this function has been renamed to
extent_alloc_subregion1().
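A hedged usage sketch of a skewed allocation (all arguments are illustrative):

    /*
     * Ask for 0x1000 bytes whose start address is congruent to 0x800
     * modulo 0x2000, anywhere within the extent.
     */
    error = extent_alloc_subregion1(ex, ex->ex_start, ex->ex_end,
        0x1000, 0x2000, 0x800, EX_NOBOUNDARY, EX_NOWAIT, &result);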
processes.
- Create a new data structure, the proclist_desc, which contains a
pointer to a proclist, and eventually, a pointer to the lock for that
proclist. Declare a static array of proclist_descs, proclists[],
consisting of allproc, deadproc, and zombproc.
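A sketch of the structure and array described above (the lock member is the
part still to come):

    struct proclist_desc {
        struct proclist *pd_list;       /* the list itself */
        /* XXX eventually: pointer to the lock protecting this list */
    };

    static struct proclist_desc proclists[] = {
        { &allproc },
        { &deadproc },
        { &zombproc },
        { NULL },
    };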
The only benefit this provides is that we don't use kmem_map to map the memory
used for name cache entries (though, this is a 13 virtual page savings on my
PPro) since name cache entries are never freed (they have their own freelist).
only benefit this provides is that we don't use kmem_map to map the memory
used for vnodes (though, this is a 30 virtual page savings on my PPro)
since vnodes are never freed (they have their own freelist).
* the first one would cause an unnecessary malloc() of iovec storage for
a msg_iovlen of UIO_SMALLIOV although the required amount of memory has
been allocated on the stack.
* the second one would cause a recvmsg() or sendmsg() with a msg_iovlen of
UIO_MAXIOV to fail with EMSGSIZE, which is also a violation of XNS5.
* availability of POSIX Synchronized I/O (kern.synchronized_io),
* maximum number of iovec structures to be used in readv(2) etc. (kern.iov_max)
via sysctl().
* if synchronized I/O file integrity completion of read operations was
requested, set IO_SYNC in the ioflag passed to the read vnode operator.
* if synchronized I/O data integrity completion of write operations was
requested, set IO_DSYNC in the ioflag passed to the write vnode operator.
means we're in interrupt context. Since we can be called from a network
hardware interrupt, we could corrupt the protocol queues if we try to drain
them at that time.
called when devices attach, take two.
Note that it is necessary that mbinit() NOT allocate memory, since it
is called before mb_map is created. This is not a problem with the
pool allocator that is now used for mbufs and mbuf clusters.
The read/write system calls return ssize_t because -1 is used to indicate
error, therefore the transfer size MUST be limited to SSIZE_MAX, otherwise
garbage can be returned to the user.
There is NO change from existing behavior here, only a more precise
definition of what the semantics are, except in the Alpha case, where
the full SSIZE_MAX transfer size can now be realized (ssize_t is 64-bit
on the Alpha).
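A hedged sketch of one way to enforce the limit in the read/write path (the
exact check and errno are illustrative, not necessarily the committed code):

    /* refuse transfers whose byte count cannot be represented in a ssize_t */
    if (auio.uio_resid > SSIZE_MAX) {
        error = EINVAL;
        goto out;
    }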
- If either an alloc or release function is provided, make sure both are
provided, otherwise panic, as this is a fatal error.
- If using the default allocator, default the pool pagesz to PAGE_SIZE,
since that is the granularity of the default allocator's mechanism.
- In the default allocator, use new functions:
uvm_km_alloc_poolpage()/uvm_km_free_poolpage(), or
kmem_alloc_poolpage()/kmem_free_poolpage()
rather than doing it here. These functions may use pmap hooks to
provide alternate methods of mapping pool pages.
- pread() (#173) and pwrite() (#174), which are defined by XPG4.2. System
call numbers match Solaris.
- preadv() (#289) and pwritev() (#290), which are the positional cousins
of readv() and writev(), but not defined by any standard.
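For reference, the corresponding userland prototypes (NetBSD style;
preadv()/pwritev() take an iovec array like readv()/writev()):

    #include <sys/types.h>
    #include <sys/uio.h>
    #include <unistd.h>

    ssize_t pread(int d, void *buf, size_t nbyte, off_t offset);
    ssize_t pwrite(int d, const void *buf, size_t nbyte, off_t offset);
    ssize_t preadv(int d, const struct iovec *iov, int iovcnt, off_t offset);
    ssize_t pwritev(int d, const struct iovec *iov, int iovcnt, off_t offset);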
* we already have the vnode interlock, so vref() should not ask for it again.
* we call VOP_RECLAIM/VOP_INACTIVE(), which shouldn't be duplicated in vrele().
the new proc structure when performing a fork. This makes it much
easier to abort a fork operation and return an error if we run out
of KVA space.
The U-area pages are still wired down in {,u}vm_fork(), as before.
readlink() from type `int' to type `size_t'. This isn't an ABI change, since
the calling convention of our only LP64 platform (the Alpha) already promotes
this argument to a `long'.
This may not be the final action on this matter; readlink() still returns
an `int', which may change in a future revision of the standard.
the kernel's pmap, since proc0 (and others that share its address space)
are kernel-only processes, and should never contain userspace mappings.
This makes it easier to detect errors, like entering user mappings
for kernel processes, in pmap modules, and makes some sense, considering
that kernel processes are really just "thread contexts" for the kernel.
again - the facility required in this context would be a filesystem-specific
super-user determination, which is not available yet. Also, add some
clarification to a comment.
vectors; defer that to vfs_opv_init().
Change the interface to vfs_opv_init() and export it; it now takes a
pointer to an array of vnodeopv_desc *'s to initialize. Allocate
the vnode operations vectors here. Called by vfs_attach().
Implement vfs_opv_free(), which deallocates the vnode operations
vectors. Called by vfs_detach().
Change vfsinit() to build the initial vfs_list by traversing the
vfs_list_initial[] table, and vfs_attach()'ing those file systems.
Also, initialize special vnodeopv_descs (dead, fifo, spec) which
are not associated with any particular file system.
change_owner().
* Change the semantics of chown(), fchown() and lchown(): when requesting a
change of the owner of a file, clear the set-user-id bit; analogous behaviour
for group changes.
* Since the above is a violation of the semantics specified by POSIX and
X/Open, add corresponding compatibility syscalls: __posix_chown(),
__posix_fchown(), __posix_lchown(). (Neither fchown() nor lchown() is
specified by POSIX; the prefix is intended to reflect the semantics.)
* Rename posix_rename() to __posix_rename() to follow the above convention.
(thus causing s_leader to become NULL) by storing the session ID separately
in the session structure. Export the session ID to userspace in the
eproc structure.
Submitted by Tom Proett <proett@nas.nasa.gov>.
UVM was written by chuck cranor <chuck@maria.wustl.edu>, with some
minor portions derived from the old Mach code. i provided some help
getting swap and paging working, and other bug fixes/ideas. chuck
silvers <chuq@chuq.com> also provided some other fixes.
this is the rest of the MI portion changes.
this will be KNF'd shortly. :-)
down "Data modified on freelist" and "muliple free" problems.
The log is activated by the MALLOCLOG option, and the size of the
event ring buffer is controllable via the MALLOCLOGSIZE option (default
is 100000 entries).
From Chris Demetriou, cleaned up a little by me per suggestions in the
e-mail from Chris that contained the code.
called "MACHINE_NEW_NONCONGIG". this is required for UVM, the new VM
system (also written by chuck) that is coming soon. adds new functions:
vm_page_physload() -- tell the VM system about an area of memory.
vm_physseg_find() -- returns index in vm_physmem array that this
address is in.
and several new versions of old functions/macros defined in vm_page.h.
this is the MI portion. sparc, and then later i386 portions to come.
all other ports need to change to this ASAP! (alpha is already being
worked on)
enabled with the LOCAL_CREDS socket option on the listener. Semantics are
similar to BSD/OS's:
- Creds are available with first data on SOCK_STREAM, and with every datagram
on SOCK_DGRAM.
- It is not possible to forge credentials.
Different in that:
- Different credential data structure (ours does not rely on the format
of internal kernel data structures, and does not pass the login name).
- We can pass creds and file descriptors at the same time (this does not
work in BSD/OS).
Luke Mewburn <lukem@netbsd.org> gets credit for inspiring me to implement
this. :-)
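A hedged userland usage sketch: the listener enables the option before
accepting connections; the option level of 0 for PF_LOCAL sockets is an
assumption in this sketch, and error handling is minimal.

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <err.h>

    int on = 1;

    /* ask for sender credentials on sockets accepted from this listener */
    if (setsockopt(listenfd, 0, LOCAL_CREDS, &on, sizeof(on)) == -1)
        err(1, "setsockopt LOCAL_CREDS");
    /*
     * Credentials then arrive as SCM_CREDS control data: with the first
     * data on a SOCK_STREAM connection, and with every datagram on
     * SOCK_DGRAM.
     */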