mi_switch(); migration of LSONPROC LWPs is now performed via the idle loop.
Handle/fix the on-CPU case in lwp_migrate(), plus misc. fixes.
Closes PR/38169. The idea of migration via the idle loop is by Andrew Doran.
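To illustrate the mechanism: an LSONPROC LWP is by definition running on
its CPU, so a remote CPU cannot detach it directly; lwp_migrate() only
records the target, and the hand-off completes when the source CPU passes
through its idle loop. A minimal sketch, where sched_dequeue_migrating(),
l_target_cpu and sched_enqueue_on() are hypothetical names, not the
symbols in the tree:

	/*
	 * Hypothetical sketch, called from the idle loop.  Any LWP
	 * marked for migration by lwp_migrate() has context-switched
	 * away by the time this CPU goes idle, so it can now be
	 * moved to its target CPU safely.
	 */
	static void
	idle_migrate(struct cpu_info *ci)
	{
		struct lwp *l;

		while ((l = sched_dequeue_migrating(ci)) != NULL) {
			lwp_lock(l);
			l->l_cpu = l->l_target_cpu;	/* hypothetical field */
			lwp_unlock(l);
			sched_enqueue_on(l->l_cpu, l);	/* hypothetical */
		}
	}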
Merge proclist_mutex and proclist_lock into a single adaptive mutex (proc_lock).
Implications:
- Inspecting process state requires thread context, so signals can no longer
be sent from a hardware interrupt handler. Signal activity must be
deferred to a soft interrupt or kthread (see the sketch after this list).
- As the proc state locking is simplified, it's now safe to take exit()
and wait() out from under kernel_lock.
- The system spends less time at IPL_SCHED, and there is less lock activity.
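A sketch of the first implication, using the softint(9) API; the driver
shape here (hard_intr(), sig_softint(), the static cookie and target) is
illustrative, not code from the tree:

	#include <sys/intr.h>
	#include <sys/proc.h>
	#include <sys/signalvar.h>

	static void *sig_sih;		/* soft interrupt handle */
	static struct proc *sig_target;	/* illustrative target process */

	/* Runs in thread context, so taking proc_lock may sleep: OK. */
	static void
	sig_softint(void *arg)
	{
		mutex_enter(proc_lock);
		psignal(sig_target, SIGIO);
		mutex_exit(proc_lock);
	}

	/* Hardware interrupt: may not take proc_lock, so defer. */
	static int
	hard_intr(void *arg)
	{
		/* ... service the device ... */
		softint_schedule(sig_sih);
		return 1;
	}

	static void
	attach_example(void)
	{
		sig_sih = softint_establish(SOFTINT_CLOCK, sig_softint, NULL);
	}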
mandatory. Remove the 4BSD run queue code. Effects:
- The pluggable scheduler is now only responsible for co-ordinating
timeshared jobs.
- All systems run with per-CPU run queues (a sketch follows this message).
- The 4BSD scheduler gets processor sets / affinity.
- The 4BSD scheduler gets a significant performance boost on some workloads.
Discussed on tech-kern@.
Testing by rmind@ and myself.
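Roughly what the now-mandatory per-CPU run queue looks like; the field
names and queue count here are illustrative, not the exact ones committed:

	#include <sys/queue.h>

	#define	RQ_NQS	32	/* illustrative: one queue per priority group */

	struct runqueue {
		uint32_t	rq_bitmap;	/* bit n set => rq_queue[n] non-empty */
		u_int		rq_count;	/* total runnable LWPs on this CPU */
		TAILQ_HEAD(, lwp) rq_queue[RQ_NQS];
	};

	/* Find the highest-priority non-empty queue without scanning. */
	static inline u_int
	rq_highest(const struct runqueue *rq)
	{
		KASSERT(rq->rq_bitmap != 0);
		return 31 - __builtin_clz(rq->rq_bitmap);
	}

Each CPU owning its own queue (and bitmap) means enqueue/dequeue only
takes that CPU's scheduler lock, rather than a single global run queue lock.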
Which approach to use is still being discussed, but I would like to get
this out of my working tree. If we decide to use a different approach,
there is no problem with revisiting this.
cpu_offline. Use this on amd64/i386 to force an FPU save. As this was
prompted by npxsave_cpu()/fpusave_cpu() not working for a different CPU,
remove the cpu_info argument and adjust npxsave_*()/fpusave_*() to take
a bool indicating whether to save.
OK ad@
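The shape of the change, sketched for i386; ci_fpcurlwp and the save
details are illustrative, and the exact code is MD:

	/*
	 * fpusave_cpu() can only ever save the FPU state of the CPU
	 * it runs on, because the FPU registers live in the CPU
	 * itself.  The cpu_info argument is therefore gone, curcpu()
	 * is implied, and the bool says whether to save the state
	 * or just discard it.
	 */
	void
	fpusave_cpu(bool save)
	{
		struct cpu_info *ci = curcpu();
		struct lwp *l = ci->ci_fpcurlwp;	/* illustrative field */

		if (l == NULL)
			return;
		if (save) {
			clts();		/* allow FPU access */
			/* ... fxsave/fnsave into l's PCB (MD detail) ... */
		}
		stts();			/* mark the FPU state as not loaded */
		ci->ci_fpcurlwp = NULL;
	}

The cpu_offline hook can then run this on the CPU being taken down,
rather than poking at another CPU's FPU from afar.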
Add schedctl(8) - a program to control scheduling of processes and threads.
Notes:
- This is supported only by SCHED_M2;
- The LWP migration mechanism will be revisited;
Proposed on: <tech-kern>. Reviewed by: <ad>.
tech-kern:
- Invert the priority space so that zero is the lowest priority. Rearrange
the number and type of priority levels into bands. Add new bands such as
'kernel real time' (an illustrative band layout follows this list).
- Ignore the priority level passed to tsleep. Compute priority for
sleep dynamically.
- For SCHED_4BSD, make priority adjustment per-LWP, not per-process.
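To make the inversion concrete, an illustrative band layout; the band
names echo the description above, but the boundary values are examples
only, not the ones committed to sys/param.h:

	/*
	 * Example only: zero is the lowest priority and larger
	 * values run first.  The space is split into bands,
	 * including the new 'kernel real time' band.
	 */
	#define	PRI_USER_TS	0	/* timeshared user LWPs (example) */
	#define	PRI_USER_RT	64	/* user real time (example) */
	#define	PRI_KTHREAD	96	/* kernel threads (example) */
	#define	PRI_KERNEL_RT	128	/* kernel real time, new band (example) */
	#define	PRI_SOFTINT	160	/* soft interrupts (example) */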
- Fix inverted logic with r_mcount in M2;
- setrunnable: perform sched_takecpu() when making the LWP runnable;
- setrunnable: l_mutex cannot be spc_mutex here;
This makes cpuctl(8) work with SCHED_M2.
OK by <ad>.
For now these just pass through to the current softintr code.
(The naming is different to allow softint/softintr to co-exist for a
while, which should make the transition easier.)
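A sketch of the pass-through, assuming the existing MD softintr(9) code
underneath; the level-to-IPL mapping shown is illustrative:

	/*
	 * New MI softint(9) entry points implemented on top of the
	 * old softintr(9) names, so both can co-exist during the
	 * transition.
	 */
	void *
	softint_establish(u_int flags, void (*func)(void *), void *arg)
	{
		int ipl;

		switch (flags & SOFTINT_LVLMASK) {	/* illustrative mapping */
		case SOFTINT_NET:	ipl = IPL_SOFTNET;	break;
		case SOFTINT_SERIAL:	ipl = IPL_SOFTSERIAL;	break;
		default:		ipl = IPL_SOFTCLOCK;	break;
		}
		return softintr_establish(ipl, func, arg);
	}

	void
	softint_schedule(void *cookie)
	{
		softintr_schedule(cookie);
	}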
- Make structures CPU-cache friendly, as suggested and explained
by Andrew Doran. A CACHE_LINE_SIZE definition is introduced.
- Use the current CPU if NULL is passed to workqueue_enqueue().
- Implemented an MI CPU index, which can be used as an array index.
Removed the use of linked lists for work queues.
The roundup2() function avoids division, but works only with powers
of 2 (its definition is shown after this message).
Reviewed by: <ad>, <yamt>, <tech-kern>
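For reference, roundup2() trades the division in roundup() for a mask,
which is why the alignment must be a power of 2; this is the standard
definition from sys/sys/param.h:

	/* Round x up to the next multiple of m; m must be a power of 2. */
	#define	roundup2(x, m)	(((x) + (m) - 1) & ~((m) - 1))

	/*
	 * E.g. padding per-CPU work queue entries to a cache line so
	 * that neighbouring entries are not falsely shared:
	 *	roundup2(sizeof(struct workqueue_queue), CACHE_LINE_SIZE)
	 */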
from doc/BRANCHES:
idle lwp, and some changes depending on it.
1. separate context switching and thread scheduling.
(cf. gmcgarry_ctxsw)
2. implement idle lwp (a sketch follows this list).
3. clean up related MD/MI interfaces.
4. make scheduler(s) modular.
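A sketch of item 2: each CPU gets a dedicated idle LWP, so mi_switch()
always has a valid LWP to hand the CPU to, which is what makes item 1
possible. The real code is in kern_idle.c; this is only its shape:

	/* Per-CPU idle LWP body, running at the lowest priority. */
	static void
	idle_loop(void *cookie)
	{
		for (;;) {
			if (sched_curcpu_runnable_p()) {
				/* Something is runnable: give up the CPU. */
				preempt();
			} else {
				/* Nothing to run: let the MD code idle the CPU. */
				cpu_idle();
			}
		}
	}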