in the case of process exit. It is necessary to re-flag all LWPs for exit, as
their state might have changed or new LWPs might have been spawned.
Should fix PR/46168 and PR/46402.
assertion failure (and thus a crash in DIAGNOSTIC kernels). Independently
discovered by YAMAMOTO Takashi and Joel Sing.
To avoid this, introduce a cpu_mcontext_validate() function and move all
sanity checks from cpu_setmcontext() there. Also untangle the netbsd32
compat mess slightly and add a cpu_mcontext32_validate() cousin there.
Add an exhaustive atf test case, based partly on code from Joel Sing.
Should finally fix the remaining open part of PR kern/43903.
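For illustration, a minimal sketch of what such a split looks like. The
structure, its fields and the particular checks are invented for the example
and do not mirror any real port's cpu_mcontext_validate():

#include <errno.h>

/* Hypothetical, simplified machine context. */
struct example_mcontext {
        unsigned int mc_cs;             /* code segment selector */
        unsigned int mc_eflags;         /* flags register */
};

/* All sanity checking lives here now... */
static int
example_mcontext_validate(const struct example_mcontext *mcp)
{
        if ((mcp->mc_cs & 3) != 3)              /* must stay in user mode */
                return EINVAL;
        if ((mcp->mc_eflags & 0x200) == 0)      /* interrupts must stay enabled */
                return EINVAL;
        return 0;
}

/* ...so the commit path can no longer trip an assertion on bad input. */
static int
example_setmcontext(struct example_mcontext *dst,
    const struct example_mcontext *mcp)
{
        int error;

        if ((error = example_mcontext_validate(mcp)) != 0)
                return error;
        *dst = *mcp;
        return 0;
}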
- update the linux syscall table for each platform.
- support new-style (NPTL) linux pthreads on all platforms.
clone() with CLONE_THREAD uses 1 process with many LWPs
instead of separate processes (see the userland sketch after this list).
- move the contents of sys__lwp_setprivate() into a new
lwp_setprivate() and use that everywhere.
- update linux_release[] and linux32_release[] to "2.6.18".
- adjust placement of emul fork/exec/exit hooks as needed
and adjust other emul code to match.
- convert all struct emul definitions to use named initializers.
- change the pid allocator to allow multiple pids to refer to the same proc.
- remove a few fields from struct proc that are no longer needed.
- disable the non-functional "vdso" code in linux32/amd64;
glibc works fine without it.
- fix a race in the futex code where we could miss a wakeup after
a requeue operation.
- redo futex locking to be a little more efficient.
- Addresses the issue described in PR/38828.
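For illustration, a small Linux-flavoured userland sketch of the CLONE_THREAD
behaviour noted above (the flag set is only roughly what NPTL passes, and the
program itself is hypothetical): both the creator and the new LWP report the
same PID, because they are now one process with two LWPs.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static int
thread_main(void *arg)
{
        /* Same PID as the creator: one process, two LWPs. */
        printf("child:  pid=%d\n", (int)getpid());
        return 0;
}

int
main(void)
{
        const size_t stksz = 64 * 1024;
        char *stk = malloc(stksz);

        if (stk == NULL)
                return 1;

        /* Roughly the flag set an NPTL thread create uses. */
        int flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND |
            CLONE_THREAD | CLONE_SYSVSEM;

        if (clone(thread_main, stk + stksz, flags, NULL) == -1) {
                perror("clone");
                return 1;
        }
        printf("parent: pid=%d\n", (int)getpid());
        sleep(1);               /* crude: give the new LWP time to run */
        return 0;
}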
- Some simplification in threading and sleepq subsystems.
- Eliminates pmap_collect() and, as a side effect, allows pmap optimisations.
- Eliminates XS_CTL_DATA_ONSTACK in scsipi code.
- Avoids a few scans over the LWP list and thus potentially long holds of proc_lock.
- Cuts ~1.5k lines of code. Reduces amd64 kernel size by ~4k.
- Removes __SWAP_BROKEN cases.
Tested on x86, mips, acorn32 (thanks <mpumford>) and partly tested on
acorn26 (thanks to <bjh21>).
Discussed on <tech-kern>, reviewed by <ad>.
This will be used to support TLS. The MD method must match the ELF TLS spec
for that CPU architecture (if there is a spec).
At this time it is only implemented for i386, where it means setting the
per-thread base address for %gs. Please implement this for your platform!
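As a rough illustration of what the %gs base buys on i386 (this is the
generic IA-32 ELF TLS convention, not NetBSD-specific code), a __thread
variable access in the local-exec model compiles to a %gs-relative load,
which only works once every LWP has its own %gs base:

/* Illustrative only; inspect the generated assembly to see the
 * %gs-relative access. */
__thread int counter;

int
bump(void)
{
        /* Typically becomes something like:
         *      movl    %gs:counter@ntpoff, %eax
         */
        return ++counter;
}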
we no longer need to guard against access from hardware interrupt handlers.
Additionally, if cloning a process with CLONE_SIGHAND, arrange to have the
child process share the parent's lock so that signal state may be kept in
sync. Partially addresses PR kern/37437.
proclist_mutex and proclist_lock into a single adaptive mutex (proc_lock).
Implications:
- Inspecting process state requires thread context, so signals can no longer
be sent from a hardware interrupt handler. Signal activity must be
deferred to a soft interrupt or kthread (see the softint sketch after
this list).
- As the proc state locking is simplified, it's now safe to take exit()
and wait() out from under kernel_lock.
- The system spends less time at IPL_SCHED, and there is less lock activity.
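A rough sketch of the deferral pattern behind the first point, using the
softint(9) API. The driver, its handlers and the way the target process is
tracked are all invented for the example:

#include <sys/intr.h>
#include <sys/proc.h>
#include <sys/signalvar.h>

/* Hypothetical driver state. */
static void *frob_sih;                  /* soft interrupt handle */
static struct proc *frob_owner;         /* process to notify, set elsewhere */

/* Runs with thread context, so taking the adaptive proc_lock is fine. */
static void
frob_softintr(void *arg)
{
        mutex_enter(proc_lock);
        if (frob_owner != NULL)
                psignal(frob_owner, SIGIO);
        mutex_exit(proc_lock);
}

/* The hard interrupt handler may no longer touch proc_lock, so it only
 * schedules the soft interrupt. */
static int
frob_hardintr(void *arg)
{
        softint_schedule(frob_sih);
        return 1;
}

/* At attach time, roughly:
 *      frob_sih = softint_establish(SOFTINT_SERIAL | SOFTINT_MPSAFE,
 *          frob_softintr, NULL);
 */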
existing behaviour: the unsleep method unlocks and wakes the swapper if need
be. If false, the caller is doing a batch operation and will take
care of that later. This is kind of ugly, but it's difficult for the caller
to know which lock to release in some situations.
int foo(struct lwp *l, void *v, register_t *retval)
to:
int foo(struct lwp *l, const struct foo_args *uap, register_t *retval)
Fix up compat code to not write into 'uap' and (in some cases) to actually
pass a correctly formatted 'uap' structure with the right name to the
next routine.
A few 'compat' routines that just call standard ones have been deleted.
All the 'compat' code compiles (along with the kernels required to test
build it).
98% done by automated scripts.
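As an illustration of the before/after shape, with a made-up syscall (only
the calling convention matters here):

/* Before: arguments arrived as an anonymous 'void *' that each handler
 * cast, and that compat code sometimes scribbled on. */
int
sys_frobnicate(struct lwp *l, void *v, register_t *retval)
{
        struct sys_frobnicate_args *uap = v;

        return do_frobnicate(l, SCARG(uap, fd), SCARG(uap, flags), retval);
}

/* After: the argument structure is typed and const, so the compiler now
 * rejects accidental writes into 'uap'. */
int
sys_frobnicate(struct lwp *l, const struct sys_frobnicate_args *uap,
    register_t *retval)
{
        return do_frobnicate(l, SCARG(uap, fd), SCARG(uap, flags), retval);
}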
tech-kern:
- Invert priority space so that zero is the lowest priority. Rearrange
number and type of priority levels into bands. Add new bands like
'kernel real time'.
- Ignore the priority level passed to tsleep. Compute priority for
sleep dynamically.
- For SCHED_4BSD, make priority adjustment per-LWP, not per-process.
by yamt@.
- Introduce SOBJ_SLEEPQ_LIFO, and use for LWPs sleeping via _lwp_park.
libpthread enqueues most waiters in LIFO order to try and wake LWPs that
ran recently, since their working set is more likely to be in cache.
Matching the order of insertion reduces the time spent searching queues
in the kernel.
- Do not boost the priority of LWPs sleeping in _lwp_park, just let them
sleep at their user priority level. LWPs waiting for some I/O event in
the kernel still wait with kernel priority and get woken more quickly.
This needs more evaluation and is to be revisited, but the effect on a
variety of benchmarks is positive.
- When waking LWPs, do not send an IPI to remote CPUs or arrange for the
current LWP to be preempted unless (a) the thread being awoken has kernel
priority and has higher priority than the currently running thread or (b)
the remote CPU is idle.
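A conceptual rendering of that last rule; the types and helpers are invented
stand-ins, not the scheduler's real data structures:

#include <stdbool.h>

struct example_cpu {
        bool    ci_idle;        /* nothing runnable there */
        int     ci_curpri;      /* priority of the LWP running there */
};

struct example_lwp {
        bool    l_kpri;         /* holds a kernel priority */
        int     l_pri;          /* effective priority */
};

/* Does waking 't', assigned to remote CPU 'ci', justify an IPI now? */
static bool
wakeup_needs_ipi(const struct example_cpu *ci, const struct example_lwp *t)
{
        if (ci->ci_idle)
                return true;            /* (b) the remote CPU is idle */
        if (t->l_kpri && t->l_pri > ci->ci_curpri)
                return true;            /* (a) kernel priority, and higher
                                         * than what is running there */
        return false;                   /* otherwise: no IPI, no preemption */
}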
lwp_exit() before suspending. In such a case, the LWP might already be freed
after cv_wait_sig(), so checking the list of LWPs via lwp_find() is necessary.
Possible problem caught by Andrew Doran.
1 microsecond into the future, the thread could enter an untimed sleep.
- Change the signature of _lwp_park() to accept an lwpid_t and second
hint pointer, but do so in a way that remains compatible with older
pthread libraries. This can be used to wake another thread before the
calling thread goes to sleep, saving at least one syscall + involuntary
context switch. This turns out to be a fairly large win on the condvar
benchmarks that I have tried (see the sketch after this list).
- Mark some more syscalls MP safe.
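A rough userland-side illustration of the combined wake-and-park; the
_lwp_park() prototype shown is my reading of the change described above, so
treat it as an assumption rather than a reference:

/* Assumed prototype after the change, roughly:
 *
 *      int _lwp_park(const struct timespec *ts, lwpid_t unpark,
 *          const void *hint, const void *unparkhint);
 *
 * A condvar-style "wake one waiter, then block myself" sequence, where
 * 'target' is the LWP to wake and 'queue' is the sleep queue address
 * used as the hint (both hypothetical). */

/* Before: two syscalls, and the woken LWP may preempt the caller in
 * between, costing an involuntary context switch. */
_lwp_unpark(target, queue);
_lwp_park(NULL, 0, queue, NULL);

/* After: a single syscall both wakes 'target' and parks the caller. */
_lwp_park(NULL, target, queue, queue);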
from doc/BRANCHES:
idle lwp, and some changes depending on it.
1. separate context switching and thread scheduling.
(cf. gmcgarry_ctxsw)
2. implement idle lwp.
3. clean up related MD/MI interfaces.
4. make scheduler(s) modular.
- Better detect simple cycles of threads calling _lwp_wait and return
EDEADLK (see the pthread sketch after this list). Does not handle deeper
cycles like t1 -> t2 -> t3 -> t1.
- If there are multiple threads in _lwp_wait, then make sure that
targeted waits take precedence over waits for any LWP to exit.
- When checking for deadlock, also count the number of zombies currently
in the process as potentially reapable. Whenever a zombie is murdered,
kick all waiters to make them check again for deadlock.
- Add more comments.
Also, while here:
- LOCK_ASSERT -> KASSERT in some places
- lwp_free: change boolean arguments to type 'bool'.
- proc_free: let lwp_free spin waiting for the last LWP to exit, there's
no reason to do it here.
- Don't bother sorting the sleep queues, since user space controls the
order of removal.
- Change setrunnable(t) to lwp_unsleep(t). No functional change from the
perspective of user applications.
- Minor cosmetic changes.
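For the simple-cycle case flagged above, a small userland illustration; it
leans on the fact that pthread_join() sits on top of _lwp_wait() on NetBSD,
so with this change one of the two joins should fail with EDEADLK rather
than hang (which one depends on timing):

#include <pthread.h>
#include <stdio.h>
#include <string.h>

static pthread_t main_thread;

/* The helper thread joins the main thread... */
static void *
join_main(void *arg)
{
        int error = pthread_join(main_thread, NULL);

        if (error != 0)
                printf("helper: join failed: %s\n", strerror(error));
        return NULL;
}

int
main(void)
{
        pthread_t t;
        int error;

        main_thread = pthread_self();
        pthread_create(&t, NULL, join_main, NULL);

        /* ...while the main thread joins the helper: a two-thread cycle. */
        error = pthread_join(t, NULL);
        if (error != 0)
                printf("main: join failed: %s\n", strerror(error));
        return 0;
}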
so it's not worth counting.
- lwp_wakeup: set LW_UNPARKED on the target. Ensures that _lwp_park will
always be awoken even if another system call eats the wakeup, e.g. as a
result of an intervening signal. To deal with this correctly for other
system calls will require a different approach.
- _lwp_unpark, _lwp_unpark_all: use setrunnable if the LWP is not parked
on the same sync queue: (1) it simplifies the code a bit, as there is no
point in doing anything special for this case, (2) it makes it possible
for p_smutex to be replaced by p_mutex, and (3) it restores the guarantee
that the 'hint'
argument really is just a hint.