This will be used to support TLS. The MD method must match the ELF TLS spec
for that CPU architecture (if there is a spec).
At this time it is only implemented for i386, where it means setting the
per-thread base address for %gs. Please implement this for your platform!
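As an illustration of what the i386 case provides: with the per-thread
%gs base installed, the i386 ELF TLS ABI locates the thread control block
through %gs:0 (the first word of the TCB points to itself). A minimal
userland sketch, assuming the base has already been set up:

    /* Illustrative only: fetch the i386 TCB via the %gs segment. */
    static void *
    get_tcb(void)
    {
        void *tcb;

        __asm volatile("movl %%gs:0, %0" : "=r"(tcb));
        return tcb;
    }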
we no longer need to guard against access from hardware interrupt handlers.
Additionally, if cloning a process with CLONE_SIGHAND, arrange to have the
child process share the parent's lock so that signal state may be kept in
sync. Partially addresses PR kern/37437.
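A minimal sketch of the sharing arrangement, assuming a reference-counted
container for the shared signal state (the names are illustrative, not
the kernel's):

    /* On fork with CLONE_SIGHAND, reference the parent's signal state
     * instead of copying it, so both processes serialize on one lock. */
    struct sigstate {
        int refcnt;             /* processes sharing this state */
        /* lock and signal actions would live here */
    };

    static struct sigstate *
    sigstate_share(struct sigstate *parent)
    {
        parent->refcnt++;       /* caller holds the lock in real code */
        return parent;          /* child now shares the parent's lock */
    }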
Merge proclist_mutex and proclist_lock into a single adaptive mutex
(proc_lock).
Implications:
- Inspecting process state requires thread context, so signals can no longer
  be sent from a hardware interrupt handler. Signal activity must be
  deferred to a soft interrupt or kthread (see the sketch after this list).
- As the proc state locking is simplified, it's now safe to take exit()
and wait() out from under kernel_lock.
- The system spends less time at IPL_SCHED, and there is less lock activity.
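For example, a driver that previously posted a signal from its hardware
interrupt handler would now hand the work to a soft interrupt.
softint_establish() and softint_schedule() are the real interfaces; the
surrounding names are made up for the sketch:

    #include <sys/intr.h>

    static void *sig_sih;           /* soft interrupt handle */

    /* Runs in thread context: safe to inspect process state and post
     * the signal under proc_lock. */
    static void
    sig_softintr(void *arg)
    {
        /* mutex_enter(proc_lock); deliver; mutex_exit(proc_lock); */
    }

    /* Hardware interrupt handler: defer, touch no process state. */
    static int
    hard_intr(void *arg)
    {
        softint_schedule(sig_sih);
        return 1;
    }

    static void
    sig_init(void)
    {
        sig_sih = softint_establish(SOFTINT_CLOCK, sig_softintr, NULL);
    }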
existing behaviour: the unsleep method unlocks and wakes the swapper if
need be. If false, the caller is doing a batch operation and will take
care of that later. This is kind of ugly, but it's difficult for the caller
to know which lock to release in some situations.
int foo(struct lwp *l, void *v, register_t *retval)
to:
int foo(struct lwp *l, const struct foo_args *uap, register_t *retval)
Fix up compat code to not write into 'uap' and (in some cases) to actually
pass a correctly formatted 'uap' structure with the right name to the
next routine.
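As a sketch of the pattern, with a hypothetical syscall and fields (SCARG
is the usual argument accessor): the compat routine copies arguments into
a fresh structure instead of writing into the now-const 'uap':

    int
    compat_sys_frob(struct lwp *l, const struct compat_sys_frob_args *uap,
        register_t *retval)
    {
        struct sys_frob_args ua;

        SCARG(&ua, fd) = SCARG(uap, fd);        /* copy; 'uap' is const */
        SCARG(&ua, flags) = SCARG(uap, flags);
        return sys_frob(l, &ua, retval);
    }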
A few 'compat' routines that just call standard ones have been deleted.
All the 'compat' code compiles (along with the kernels required to test
build it).
98% done by automated scripts.
tech-kern:
- Invert priority space so that zero is the lowest priority. Rearrange
number and type of priority levels into bands. Add new bands like
'kernel real time'.
- Ignore the priority level passed to tsleep. Compute priority for
sleep dynamically.
- For SCHED_4BSD, make priority adjustment per-LWP, not per-process; change
  by yamt@.
- Introduce SOBJ_SLEEPQ_LIFO, and use for LWPs sleeping via _lwp_park.
libpthread enqueues most waiters in LIFO order to try to wake LWPs that
ran recently, since their working set is more likely to be in cache.
Matching the order of insertion reduces the time spent searching queues
in the kernel.
- Do not boost the priority of LWPs sleeping in _lwp_park, just let them
sleep at their user priority level. LWPs waiting for some I/O event in
the kernel still wait with kernel priority and get woken more quickly.
This needs more evaluation and is to be revisited, but the effect on a
variety of benchmarks is positive.
- When waking LWPs, do not send an IPI to remote CPUs or arrange for the
current LWP to be preempted unless (a) the thread being awoken has kernel
priority and has higher priority than the currently running thread or (b)
the remote CPU is idle.
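A self-contained sketch of that last decision; the types and fields below
are stand-ins for the real scheduler state, not the kernel's:

    #include <stdbool.h>

    struct cpu    { bool idle; int curpri; };
    struct thread { int pri; bool kpri; struct cpu *cpu; };

    /* Should waking 't' interrupt its CPU (IPI/preemption) right away? */
    static bool
    need_kick(const struct thread *t)
    {
        if (t->cpu->idle)
            return true;                            /* case (b) */
        return t->kpri && t->pri > t->cpu->curpri;  /* case (a) */
    }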
lwp_exit() before suspending. In such a case, the LWP might already have
been freed after cv_wait_sig(), so it is necessary to re-check the list of
LWPs via lwp_find(). Possible problem caught by Andrew Doran.
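The shape of the fix, sketched with approximate names (cv_wait_sig() and
lwp_find() are the real interfaces; the lock and condvar fields are
assumptions and may differ):

    /* After sleeping, 't' may point at freed memory if the target ran
     * lwp_exit(), so look the LWP up again by its ID. */
    error = cv_wait_sig(&p->p_lwpcv, p->p_lock);
    if (error != 0)
        return error;
    if ((t = lwp_find(p, lid)) == NULL)
        return ESRCH;   /* it exited; never touch the old pointer */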
1 microsecond into the future, the thread could enter an untimed sleep.
- Change the signature of _lwp_park() to accept an lwpid_t and second
hint pointer, but do so in a way that remains compatible with older
pthread libraries. This can be used to wake another thread before the
calling thread goes to sleep, saving at least one syscall plus an
involuntary context switch. This turns out to be a fairly large win on the
condvar benchmarks that I have tried (see the sketch after this list).
- Mark some more syscalls MP safe.
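A userland sketch of the combined wake-plus-park; the prototype below is
an assumption reconstructed from the description, not a definitive
interface:

    #include <sys/types.h>
    #include <time.h>

    /* Assumed per the text: wake 'unpark' first, then park the caller. */
    int _lwp_park(const struct timespec *ts, lwpid_t unpark,
        const void *hint, const void *unparkhint);

    /* Hand a wakeup to 'next' and go to sleep in one syscall, saving
     * the separate wakeup call and an involuntary context switch. */
    static int
    wake_and_wait(lwpid_t next, const void *wchan)
    {
        return _lwp_park(NULL, next, wchan, wchan);
    }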
from doc/BRANCHES:
idle lwp, and some changes depending on it.
1. separate context switching and thread scheduling.
(cf. gmcgarry_ctxsw)
2. implement idle lwp.
3. clean up related MD/MI interfaces.
4. make scheduler(s) modular.
- Better detect simple cycles of threads calling _lwp_wait and return
  EDEADLK; see the sketch after this list. Does not handle deeper cycles
  like t1 -> t2 -> t3 -> t1.
- If there are multiple threads in _lwp_wait, then make sure that
targeted waits take precedence over waits for any LWP to exit.
- When checking for deadlock, also count the number of zombies currently
in the process as potentially reapable. Whenever a zombie is murdered,
kick all waiters to make them check again for deadlock.
- Add more comments.
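The simple-cycle check, as a standalone sketch (types and fields are
illustrative):

    #include <errno.h>

    struct thread {
        int id;
        int waiting_for;        /* 0 = not blocked in _lwp_wait */
    };

    /* Catch the two-thread cycle; deeper cycles such as
     * t1 -> t2 -> t3 -> t1 are deliberately not chased. */
    static int
    wait_check(const struct thread *self, const struct thread *target)
    {
        if (target->waiting_for == self->id)
            return EDEADLK;
        return 0;
    }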
Also, while here:
- LOCK_ASSERT -> KASSERT in some places
- lwp_free: change boolean arguments to type 'bool'.
- proc_free: let lwp_free spin waiting for the last LWP to exit, there's
no reason to do it here.
- Don't bother sorting the sleep queues, since user space controls the
order of removal.
- Change setrunnable(t) to lwp_unsleep(t). No functional change from the
perspective of user applications.
- Minor cosmetic changes.
so it's not worth counting.
- lwp_wakeup: set LW_UNPARKED on the target. This ensures that _lwp_park
  will always be awoken even if another system call eats the wakeup, e.g.
  as a result of an intervening signal (see the sketch after this list).
  Dealing with this correctly for other system calls will require a
  different approach.
- _lwp_unpark, _lwp_unpark_all: use setrunnable if the LWP is not parked
  on the same sync queue. This (1) simplifies the code a bit, since there
  is no point in doing anything special for this case; (2) makes it
  possible for p_smutex to be replaced by p_mutex; and (3) restores the
  guarantee that the 'hint' argument really is just a hint.
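A standalone sketch of the LW_UNPARKED handshake mentioned above; the flag
name comes from the text, everything else (including the flag's value) is
illustrative, and locking is elided:

    #include <stdbool.h>

    #define LW_UNPARKED 0x01    /* value arbitrary for this sketch */

    struct lwp_sketch { unsigned flags; };

    /* Waker: record the wakeup even if the target is not parked yet. */
    static void
    unpark(struct lwp_sketch *t)
    {
        t->flags |= LW_UNPARKED;
    }

    /* Parker: consume a pending wakeup instead of sleeping, so a
     * wakeup eaten by an intervening system call or signal is kept. */
    static bool
    park_must_sleep(struct lwp_sketch *l)
    {
        if (l->flags & LW_UNPARKED) {
            l->flags &= ~LW_UNPARKED;
            return false;
        }
        return true;
    }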
with newlock2 merge:
Replace the Mach-derived boolean_t type with the C99 bool type. A
future commit will replace use of TRUE and FALSE with true and false.
P_*/L_* naming convention, and rename the in-kernel flags to avoid
conflict. (P_ -> PK_, L_ -> LW_). Add back the (now unused) LSDEAD
constant.
Restores source compatibility with pre-newlock2 tools like ps or top.
Reviewed by Andrew Doran.