returning directly. This allows other threads to run, possibly
setting a condition we are waiting on.
Fixes a busyloop condition which could be entered from vfs_unmountall()
where we were waiting for vrele_pending and the vrele thread could
not run since we were hogging the CPU.
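For illustration, a minimal sketch of the idea (hypothetical names, not
the actual rump code): instead of spinning on the condition, the waiter
yields the host CPU between checks so that the thread which will set the
condition actually gets a chance to run.

    #include <sched.h>
    #include <stdbool.h>

    /* hypothetical flag, set by another thread (e.g. the vrele thread) */
    static volatile bool work_done;

    static void
    wait_for_work(void)
    {
        while (!work_done)
            sched_yield();  /* let other threads run instead of spinning */
    }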
some locking and rumpcopy primitives and refactor module building Makefiles
to work with both RUMP and kernel modules. This is the first part of
adding support for regular testing of zfs on NetBSD, to hunt some bugs
and make it stable.
Ok by pooka@.
threads blocking in the kernel automatically exit when the process
exits. However, for the sysproxy case this does not hold.
Typically it's ~harmless, but e.g. in the case of socket binding
followed by poll it gets annoying.
Introduce sysproxy procexit, which wakes up all threads blocking
on a condition when a process's communication socket is closed.
The code is a little different from the regular kernel simply
because in a rump kernel l_mutex is not available at all times
(this is because scheduling happens on every kernel entry and exit,
and that path must be kept lockless for any reasonable performance).
Instead, use gating which makes sure all threads are either out of
the cv code or suspended in a well-known state. Then, wake up the
threads and tell them to get the hell out of our galaxy.
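The gist of the gating can be sketched roughly as follows (hypothetical
names; the real code cannot rely on a single always-available mutex,
which is exactly what makes it trickier): waiters register themselves
before blocking, and procexit closes the gate, wakes everybody up, and
waits until the last waiter has left the cv code.

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cv  = PTHREAD_COND_INITIALIZER;
    static int  nwaiters;   /* threads currently inside the cv code */
    static bool exiting;    /* set when the process's socket is closed */

    /* a thread blocking in the (rump) kernel, waiting for *cond */
    static void
    wait_for(volatile bool *cond)
    {
        pthread_mutex_lock(&mtx);
        nwaiters++;
        while (!*cond && !exiting)
            pthread_cond_wait(&cv, &mtx);
        if (--nwaiters == 0 && exiting)
            pthread_cond_broadcast(&cv);    /* tell procexit we are out */
        pthread_mutex_unlock(&mtx);
    }

    /* called when the process's communication socket is closed */
    static void
    procexit(void)
    {
        pthread_mutex_lock(&mtx);
        exiting = true;
        pthread_cond_broadcast(&cv);        /* kick every blocked waiter */
        while (nwaiters > 0)                /* wait for them to drain */
            pthread_cond_wait(&cv, &mtx);
        pthread_mutex_unlock(&mtx);
    }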
is done in rumpuser for simplicity, since on the kernel side of things
we assume we have only one pointer of space). As a side-effect,
we can no longer know if the current thread is holding on to a
mutex locked without curlwp context (basically all mutexes inited
outside of mutex_init()). The only thing that called rumpuser_mutex_held()
for a non-kmutex was the giant lock. So, instead implement recursive
locking for the giant lock in the rump kernel and get rid of the
now-unused recursive pthread mutex in the hypercall interface.
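The recursive giant lock can be sketched on top of a plain,
non-recursive host mutex like this (hypothetical names): remember the
owner and a recursion count, and let the owner re-enter by just bumping
the count.

    #include <pthread.h>

    static pthread_mutex_t giant = PTHREAD_MUTEX_INITIALIZER;
    static pthread_t giant_owner;
    static int giant_recurse;       /* > 0 iff the lock is held */

    static void
    biglock_enter(void)
    {
        /*
         * Only we can have stored our own thread id in giant_owner,
         * so this unlocked check is safe for the recursive case; any
         * other value simply compares unequal.
         */
        if (giant_recurse > 0 &&
            pthread_equal(giant_owner, pthread_self())) {
            giant_recurse++;
            return;
        }
        pthread_mutex_lock(&giant);
        giant_owner = pthread_self();
        giant_recurse = 1;
    }

    static void
    biglock_exit(void)
    {
        if (--giant_recurse == 0)
            pthread_mutex_unlock(&giant);
    }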
interlock. This is applicable in cases where the actual interlock
is the CPU the currently running thread is scheduled on. Borrowing
the scheduler lock as the mutex mandated by pthread_cond_wait()
does away with the need for an additional mutex. This both optimizes
runtime execution and simplifies code, as the extra lock typically
led to quite some trickery to avoid the dungeon collapsing due
to zaps from the wand of deadlock.
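Roughly, the borrowed-interlock case looks like this (hypothetical
names): the condition lives under the per-CPU scheduler mutex the
waiter already holds, so that same mutex is handed to
pthread_cond_wait() and no second lock enters the picture.

    #include <pthread.h>
    #include <stdbool.h>

    /* hypothetical virtual CPU; its scheduler lock doubles as interlock */
    struct vcpu {
        pthread_mutex_t sched_mtx;
        pthread_cond_t  work_cv;
        bool            work_available;
    };

    /* caller is scheduled on vc and already holds vc->sched_mtx */
    static void
    wait_for_work(struct vcpu *vc)
    {
        while (!vc->work_available)
            pthread_cond_wait(&vc->work_cv, &vc->sched_mtx);
    }

    static void
    post_work(struct vcpu *vc)
    {
        pthread_mutex_lock(&vc->sched_mtx);
        vc->work_available = true;
        pthread_cond_signal(&vc->work_cv);
        pthread_mutex_unlock(&vc->sched_mtx);
    }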
* support bound kernel threads
* bind softint threads to specific virtual cpus
+ remove now-unnecessary locks from softint code
Now, if we only had MI CPU_INFO_FOREACH() .... (hi rmind ;)
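In regular NetBSD kernel terms, a bound thread is simply a kthread
created with a non-NULL struct cpu_info *; the following is an
illustrative sketch of creating a per-CPU softint worker (how rump maps
this onto its virtual CPUs is not shown here).

    #include <sys/param.h>
    #include <sys/cpu.h>
    #include <sys/kthread.h>
    #include <sys/systm.h>

    static void
    softint_worker(void *arg)
    {
        /* ... process soft interrupts queued for this CPU ... */
        kthread_exit(0);
    }

    static void
    create_softint_thread(struct cpu_info *ci)
    {
        int error;

        /* passing ci binds the new thread to that (virtual) CPU */
        error = kthread_create(PRI_SOFTNET, KTHREAD_MPSAFE | KTHREAD_INTR,
            ci, softint_worker, NULL, NULL, "rumpsoftint");
        if (error)
            panic("kthread_create failed: %d", error);
    }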
uncouple it from the timespec layout. Also, change return value
to zero for "timeout didn't expire" and non-zero for "timeout
expired". This decouples the interface from errno assignments.
all-sink and make sure each separate thread in rump has its own
lwp. Happy-go-lucky callers will get scheduled a temporary lwp
on entry, while true lwp connoisseurs may request a stable lwp
for their purposes. Some more love may be required later down the
road, but for now different threads won't be stepping on each other's
toes.
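A minimal sketch of the per-thread lwp arrangement using
thread-specific data (hypothetical names, not the real rump code): a
caller that never set up an lwp is lent a temporary one on entry and
has it taken away on exit, while connoisseurs install a stable lwp
themselves.

    #include <pthread.h>
    #include <stdlib.h>

    struct lwp {
        int l_temporary;        /* lent on entry, released on exit */
    };

    /* created once at startup with pthread_key_create() */
    static pthread_key_t curlwp_key;

    static void
    rump_enter(void)
    {
        struct lwp *l = pthread_getspecific(curlwp_key);

        if (l == NULL) {
            /* happy-go-lucky caller: lend it a temporary lwp */
            l = calloc(1, sizeof(*l));
            l->l_temporary = 1;
            pthread_setspecific(curlwp_key, l);
        }
        /* lwp connoisseurs have already installed a stable lwp here */
    }

    static void
    rump_exit(void)
    {
        struct lwp *l = pthread_getspecific(curlwp_key);

        if (l != NULL && l->l_temporary) {
            pthread_setspecific(curlwp_key, NULL);
            free(l);
        }
    }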
for kernel code which has been written to avoid MP contention by
using cpu-local storage (most prominently, select and pool_cache).
Instead of always assuming rump_cpu, the scheduler must now be run
(and unrun) on all entry points into rump. Likewise, rumpuser
unruns and re-runs the scheduler around each potentially blocking
operation. As an optimization, I modified some locking primitives
to try to get the lock without blocking before releasing the cpu.
Also, ltsleep was modified to assume that it is never called without
the biglock held and made to use the biglock as the sleep interlock.
Otherwise there is just too much drama with deadlocks. If some
kernel code wants to call ltsleep without the biglock, then, *snif*,
it's no longer supported in rump and the code should be modified to use
newstyle locks anyway.
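The try-first optimization can be sketched like this (illustrative
only; rump_schedule()/rump_unschedule() stand for the entry and exit
points mentioned above): attempt the lock without blocking, and only if
that fails give up the virtual CPU, block, and take a CPU again
afterwards.

    #include <pthread.h>

    /* assumed scheduler hooks: acquire and release a virtual CPU */
    void rump_schedule(void);
    void rump_unschedule(void);

    static void
    blocking_mutex_enter(pthread_mutex_t *mtx)
    {
        if (pthread_mutex_trylock(mtx) == 0)
            return;                     /* fast path: never left the CPU */

        rump_unschedule();              /* let someone else have the CPU */
        pthread_mutex_lock(mtx);        /* may now block freely */
        rump_schedule();                /* get a CPU back before returning */
    }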
relative time. This prevents drifting. Also, keep track of time
within userspace, so we do not have to make a syscall to get the
clock value. This is approximately 7 times cheaper, but on the
negative side the resolution is limited to the clock interrupt frequency.
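The scheme can be illustrated roughly as follows (hypothetical names):
a clock-interrupt thread sleeps to absolute deadlines, so rounding
errors do not accumulate, and advances a cached timestamp by a fixed
tick; readers copy the cache instead of making a syscall. Since the
cached value changes only once per tick, that is where the resolution
limit comes from.

    #include <pthread.h>
    #include <time.h>

    #define TICK_NSEC (10 * 1000 * 1000)    /* 100 Hz "clock interrupt" */

    static pthread_mutex_t clkmtx = PTHREAD_MUTEX_INITIALIZER;
    static struct timespec curtime;         /* cached clock value */

    static void *
    clockthread(void *arg)
    {
        struct timespec next;

        (void)arg;
        clock_gettime(CLOCK_REALTIME, &next);
        for (;;) {
            next.tv_nsec += TICK_NSEC;
            if (next.tv_nsec >= 1000000000) {
                next.tv_sec++;
                next.tv_nsec -= 1000000000;
            }
            /* sleep to an absolute deadline to avoid drift */
            clock_nanosleep(CLOCK_REALTIME, TIMER_ABSTIME, &next, NULL);

            pthread_mutex_lock(&clkmtx);
            curtime = next;             /* advance the cached time */
            pthread_mutex_unlock(&clkmtx);
        }
        return NULL;
    }

    static void
    cached_nanotime(struct timespec *ts)
    {
        pthread_mutex_lock(&clkmtx);
        *ts = curtime;                  /* no syscall on this path */
        pthread_mutex_unlock(&clkmtx);
    }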
interrupt thread.
The sleepq implementation required for callouts is horrible, kludged
only for callouts, and generally unacceptable. It needs revisiting,
but I'm not sure yet whether rump or kern_timeout should be improved.
It's almost untested as of now, but committing this will give me some
maneuvering space while letting applications compile.
private non-installed build infrastructure from sys/rump.
breakdown of commit:
* install relevant headers into /usr/include/rump
* build sys/rump/librump/rumpuser and sys/rump/librump/rumpkern
from src/lib and install as librumpuser and librump, respectively
+ this retains the ability to test a librump build with just the
kernel sources at hand
* move sys/rump/fs/lib/libukfs and sys/rump/fs/lib/libp2k to src/lib
for general consumption, they are not kernel-space dwellers anyway
* build and install sys/rump/fs/lib/lib$fs as librumpfs_$fs
* add chapter 3 manual pages for rump, rumpuser, ukfs and p2k
* build and install userspace kernel file system daemons if MKPUFFS=yes
is specified
* retire fsconsole for now, it will make a comeback with an actually
implemented version shortly
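For example, a userspace program using the FFS component would then be
linked roughly like this (illustrative only; the exact set of libraries
depends on the file system and on whether ukfs or p2k is used):

    cc -o fsprog fsprog.c -lukfs -lrumpfs_ffs -lrump -lrumpuser -lpthread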
(or at least until wakeup) instead of immediately waking up.
In other words, fix this after it broke when another piece of the
code was fixed. Ain't programming fun?
locking and makes it possible to run file systems which create
threads. It also makes rump file system behaviour better match
file system behaviour in the kernel.