Commit Graph

22 Commits

Author SHA1 Message Date
yamt
0436400c70 set LP_RUNNING when starting lwp0 and idle lwps.
add assertions.
2009-07-19 10:11:55 +00:00
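
A minimal model of the flag-and-assert pattern this commit describes: set the running flag when the LWP is started, then assert it wherever a running LWP is required. Apart from the LP_RUNNING name, every identifier below is invented for the example.

#include <assert.h>

#define LP_RUNNING	0x01	/* flag: the LWP is running on a CPU */

struct lwp {			/* heavily simplified stand-in */
	int l_pflag;		/* per-LWP private flags */
};

/* Hypothetical start path for lwp0 and the idle LWPs: mark them
 * running up front. */
static void
lwp_mark_running(struct lwp *l)
{
	l->l_pflag |= LP_RUNNING;
}

int
main(void)
{
	struct lwp idle_lwp = { 0 };

	lwp_mark_running(&idle_lwp);
	/* The commit adds assertions of this shape on paths that must
	 * only ever see a running LWP. */
	assert(idle_lwp.l_pflag & LP_RUNNING);
	return 0;
}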
ad
5b4feac126 idle_loop: explicitly go to spl0() to sidestep potential MD bugs. 2009-06-28 09:25:05 +00:00
ad
62c877a460 Don't call uvm_pageidlezero() if the CPU is marked offline. 2008-06-11 13:42:02 +00:00
ad
cbbf514e2c - vm_page: put listq, pageq into a union alongside a LIST_ENTRY, so we can
use both types of list.

- Make page coloring and idle zero state per-CPU.

- Maintain per-CPU page freelists. When freeing, put pages onto the local
  CPU's lists and the global lists. When allocating, prefer to take pages
  from the local CPU. If none are available take from the global list as
  done now. Proposed on tech-kern@.
2008-06-04 12:45:28 +00:00
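
The allocation policy in the freelist bullet reduces to "look at the local CPU's list first, then the global list". A minimal userland sketch using <sys/queue.h>; every type and helper name here is invented for illustration and is not the uvm_page code.

#include <stddef.h>
#include <sys/queue.h>

struct page {
	LIST_ENTRY(page) p_list;
};
LIST_HEAD(pagelist, page);

static struct pagelist global_freelist =
    LIST_HEAD_INITIALIZER(global_freelist);

struct cpu_info_model {
	struct pagelist ci_freelist;	/* pages freed on this CPU */
};

/* Freeing: put the page on the local CPU's list.  (The real change
 * also keeps the global lists up to date; that bookkeeping is
 * omitted here.) */
static void
page_free(struct cpu_info_model *ci, struct page *pg)
{
	LIST_INSERT_HEAD(&ci->ci_freelist, pg, p_list);
}

/* Allocating: prefer local pages (likely cache-warm, no contention
 * on the global list), then fall back to the global list as before. */
static struct page *
page_alloc(struct cpu_info_model *ci)
{
	struct page *pg = LIST_FIRST(&ci->ci_freelist);

	if (pg == NULL)
		pg = LIST_FIRST(&global_freelist);
	if (pg != NULL)
		LIST_REMOVE(pg, p_list);
	return pg;
}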
rmind
29170d3854 Simplification of running-LWP migration. Removes double-locking in
mi_switch(); migration of LSONPROC LWPs is now performed via the idle loop.
Handles/fixes the on-CPU case in lwp_migrate(), plus miscellaneous fixes.

Closes PR/38169, idea of migration via idle loop by Andrew Doran.
2008-05-29 22:33:27 +00:00
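
Schematically, migration via the idle loop avoids ever holding two run-queue locks at once: mi_switch() merely flags the LWP, and the idle loop finishes the hand-off once the LWP is no longer on the CPU. A sketch with invented names, all locking elided.

#include <stdbool.h>
#include <stddef.h>
#include <sys/queue.h>

struct lwp_model;
struct cpu_model {
	TAILQ_HEAD(, lwp_model) rq;	/* run queue; TAILQ_INIT assumed */
};

struct lwp_model {
	TAILQ_ENTRY(lwp_model) l_runq;
	struct cpu_model *l_target;	/* migrate here when possible */
	bool l_migrating;
};

/* mi_switch() path: only the current CPU's state is touched, so no
 * double-locking.  The LWP keeps running (LSONPROC) for now. */
static void
mark_for_migration(struct lwp_model *l, struct cpu_model *target)
{
	l->l_target = target;
	l->l_migrating = true;
}

/* Idle-loop path: by the time the idle LWP runs, the flagged LWP is
 * no longer on this CPU, so it can be queued remotely with just the
 * target's run-queue lock (elided here). */
static void
idle_migrate(struct lwp_model *l)
{
	if (l->l_migrating) {
		l->l_migrating = false;
		TAILQ_INSERT_TAIL(&l->l_target->rq, l, l_runq);
	}
}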
ad
81fa379a0b PR kern/38707: scheduler-related deadlock during build.sh
- Fix the performance regression introduced by the workaround by making job
  stealing a lot simpler: if the local run queue is empty, let the CPU enter
  the idle loop. In the idle loop, try to steal a job from another CPU's run
  queue. If we succeed, re-enter mi_switch() immediately to dispatch the job.

- When stealing jobs, consider a remote CPU to have one less job in its
  queue if it's currently in the idle loop. It will dispatch the job soon,
  so there's no point sloshing it about.

- Introduce a few event counters to monitor what's happening with the run
  queues.

- Revert the idle CPU bitmap change. It's pointless considering NUMA.
2008-05-27 14:48:52 +00:00
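
The heuristic in the second bullet is small enough to model: when sizing up a remote CPU's run queue, discount it by one job if that CPU is currently in its idle loop, since it will dispatch one job itself momentarily. All names below are invented for the sketch.

#include <stdbool.h>
#include <stddef.h>

#define NCPU_MODEL 4

struct cpu_model {
	unsigned spc_count;	/* jobs on this CPU's run queue */
	bool     spc_idle;	/* currently in the idle loop */
};

static struct cpu_model cpus[NCPU_MODEL];

/* Discounted queue length: an idle CPU will consume one job itself. */
static unsigned
effective_count(const struct cpu_model *ci)
{
	if (ci->spc_idle && ci->spc_count > 0)
		return ci->spc_count - 1;
	return ci->spc_count;
}

/* The idle loop's side of the bargain: pick the remote CPU most worth
 * stealing from, or NULL; on success the caller re-enters mi_switch(). */
static struct cpu_model *
steal_candidate(const struct cpu_model *self)
{
	struct cpu_model *best = NULL;
	unsigned best_n = 1;	/* need at least one surplus job */

	for (int i = 0; i < NCPU_MODEL; i++) {
		struct cpu_model *ci = &cpus[i];
		if (ci != self && effective_count(ci) >= best_n) {
			best_n = effective_count(ci);
			best = ci;
		}
	}
	return best;
}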
ad
25866fbff7 Set cpu_onproc on entry to the idle loop. 2008-05-24 12:59:06 +00:00
yamt
0e18a54641 fix a comment. 2008-04-26 08:09:30 +00:00
yamt
52c2e613a9 idle_loop: unsigned -> uint32_t to be consistent with the rest of the code.
no functional change.
2008-04-26 08:08:27 +00:00
ad
c2deaa264e xc_broadcast: don't try to run cross calls on CPUs that are not yet running. 2008-04-24 13:56:30 +00:00
ad
61a0a96054 Maintain a bitmap of idle CPUs and add idle_pick() to find an idle CPU
and remove it from the bitmap.
2008-04-04 17:21:22 +00:00
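
A compact single-threaded model of the bitmap and its pick-and-clear operation; real code would need atomic operations or a lock, and idle_pick_model() is only a stand-in for the idle_pick() named above.

#include <stdint.h>

static uint32_t idle_cpus;	/* bit n set => CPU n is idle (max 32 here) */

static void
cpu_idle_enter(int cpu)
{
	idle_cpus |= UINT32_C(1) << cpu;	/* atomics/locking elided */
}

static void
cpu_idle_exit(int cpu)
{
	idle_cpus &= ~(UINT32_C(1) << cpu);
}

/* Find an idle CPU and simultaneously remove it from the bitmap,
 * returning -1 if every CPU is busy. */
static int
idle_pick_model(void)
{
	for (int n = 0; n < 32; n++) {
		uint32_t bit = UINT32_C(1) << n;
		if (idle_cpus & bit) {
			idle_cpus &= ~bit;
			return n;
		}
	}
	return -1;
}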
martin
d8788e7fd7 Use the cpu index instead of the machine-dependent, not very expressive
cpuid when naming user-visible kernel entities.
2008-03-10 22:20:14 +00:00
ad
60c1b8843d Make schedstate_percpu::spc_lwplock an externally allocated item. Remove
the hacks in sparc/cpu.c to reinitialize it. This should be in its own
cache line, but that's another change.
2008-02-14 14:26:57 +00:00
yamt
949e16d902 use binuptime for l_stime/l_rtime. 2007-12-22 01:14:53 +00:00
ad
a67091837e Lock curlwp when updating the start time. 2007-11-15 20:12:25 +00:00
ad
dbd3ed7b2a Remove KERNEL_LOCK_ASSERT_LOCKED, KERNEL_LOCK_ASSERT_UNLOCKED since the
kernel_lock functions can be patched out at runtime now. Assertions are
provided by the existing functions and by LOCKDEBUG_BARRIER.
2007-11-13 22:14:34 +00:00
ad
d831186d55 Merge scheduler changes from the vmlocking branch. All discussed on
tech-kern:

- Invert priority space so that zero is the lowest priority. Rearrange
  number and type of priority levels into bands. Add new bands like
  'kernel real time'.
- Ignore the priority level passed to tsleep. Compute priority for
  sleep dynamically.
- For SCHED_4BSD, make priority adjustment per-LWP, not per-process.
2007-11-06 00:42:39 +00:00
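
To illustrate the inverted priority space only: the band names and boundaries below are invented, not the actual NetBSD assignments. The point is that zero is now the lowest priority, bands stack upward, and a larger number always wins.

enum {
	PRI_IDLE_BAND    = 0,	/* idle LWPs: lowest possible */
	PRI_USER_BAND    = 1,	/* 1..63: time-shared user threads */
	PRI_USER_RT_BAND = 64,	/* 64..127: user real time */
	PRI_KERN_BAND    = 128,	/* 128..191: kernel threads */
	PRI_KERN_RT_BAND = 192,	/* 192..255: "kernel real time" */
};

/* With the inversion, every comparison is simply "higher wins". */
static int
pri_preempts(int a, int b)
{
	return a > b;		/* nonzero: a should run instead of b */
}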
ad
36a1712707 Merge run time accounting changes from the vmlocking branch. These make
the LWP "start time" per-thread instead of per-CPU.
2007-10-08 20:06:17 +00:00
ad
b58e305699 Enter mi_switch() from the idle loop if ci_want_resched is set. If there
are no jobs to run it will clear it while under lock. Should fix idle.
2007-10-01 22:14:23 +00:00
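
The protocol reduces to a few lines (all names are stand-ins, locking shrunk to comments): the idle loop checks the flag without the lock, and mi_switch() clears it with the run-queue lock held, so a wakeup that races with the check is either seen on this pass or leaves the flag set for the next one.

#include <stdbool.h>

struct cpu_model {
	volatile bool ci_want_resched;	/* set remotely to request a resched */
};

static void
mi_switch_model(struct cpu_model *ci)
{
	/* lock the run queue ... */
	ci->ci_want_resched = false;	/* cleared under the lock */
	/* ... pick a job if there is one, unlock, switch */
}

static void
idle_loop_model(struct cpu_model *ci)
{
	for (;;) {
		if (ci->ci_want_resched)	/* unlocked check */
			mi_switch_model(ci);
		/* else: architecture-specific "sleep until interrupt" */
	}
}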
ad
744a92f0f8 Don't depend on uvm_extern.h pulling in proc.h. 2007-07-21 19:06:20 +00:00
ad
88ab7da936 Merge some of the less invasive changes from the vmlocking branch:
- kthread, callout, devsw API changes
- select()/poll() improvements
- miscellaneous MT safety improvements
2007-07-09 20:51:58 +00:00
yamt
f03010953f merge yamt-idlelwp branch. asked by core@. some ports still need work.
from doc/BRANCHES:

	idle lwp, and some changes depending on it.

	1. separate context switching and thread scheduling.
	   (cf. gmcgarry_ctxsw)
	2. implement idle lwp.
	3. clean up related MD/MI interfaces.
	4. make scheduler(s) modular.
2007-05-17 14:51:11 +00:00
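
Taken together, the log above gives the per-CPU idle LWP a recognizable shape: drop to spl0(), look for local work, try to steal some, lend spare cycles to page zeroing unless the CPU is offline, otherwise sleep until an interrupt, and dispatch through mi_switch(). A composite sketch in which every helper is a stub invented for illustration.

#include <stdbool.h>

struct cpu_model {
	volatile bool ci_want_resched;	/* remote wakeup hint */
	bool ci_offline;		/* offline CPUs skip page zeroing */
};

/* Stub stand-ins for the behaviors the commits above add. */
static bool runqueue_empty(struct cpu_model *ci) { (void)ci; return true; }
static bool steal_job(struct cpu_model *ci) { (void)ci; return false; }
static void zero_free_pages(void) { }	/* cf. uvm_pageidlezero() */
static void wait_for_interrupt(void) { }
static void mi_switch_model(struct cpu_model *ci)
{
	ci->ci_want_resched = false;	/* cleared under lock in reality */
}

static void
idle_loop_model(struct cpu_model *ci)
{
	/* drop to spl0() first (5b4feac126); locking elided throughout */
	for (;;) {
		if (!ci->ci_want_resched && runqueue_empty(ci) &&
		    !steal_job(ci)) {
			if (!ci->ci_offline)	/* 62c877a460 */
				zero_free_pages();
			wait_for_interrupt();
			continue;
		}
		mi_switch_model(ci);	/* dispatch; clears the flag */
	}
}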