Commit Graph

26 Commits

Author SHA1 Message Date
ad 822f68cc07 If DEBUG is enabled, drop kpreempt_pri to zero. This means that every
wakeup will cause a kernel preemption, simulating massive concurrency.

Proposed on tech-kern@.
2009-03-02 21:17:29 +00:00
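A minimal sketch of the idea in C; kpreempt_pri is the variable named in the
commit, while the default value and surrounding code are illustrative
assumptions, not the actual NetBSD source:

    /*
     * Sketch only.  kpreempt_pri is the priority threshold at which
     * waking an LWP forces a kernel preemption.  Dropping it to zero
     * under DEBUG makes every wakeup preempt, stress-testing the
     * preemption safety of kernel code paths.
     */
    #ifdef DEBUG
    int kpreempt_pri = 0;       /* every wakeup preempts */
    #else
    int kpreempt_pri = 96;      /* hypothetical default threshold */
    #endif
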
rmind 3de401ae19 Make sched_getrq() inline (gcc does not optimize it otherwise); this avoids a function call. 2009-02-17 22:00:14 +00:00
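A sketch of what the change buys, assuming hypothetical types; the real
function maps a priority to its run-queue head:

    /* Sketch; the types and the PRI_COUNT value are illustrative. */
    #define PRI_COUNT 224

    typedef int pri_t;
    typedef struct rqhead { void *first; } rqhead_t;   /* stub queue head */

    struct runqueues { rqhead_t rq[PRI_COUNT]; };

    /*
     * "static inline" lets the compiler expand this trivial lookup at
     * each call site; as a plain function, gcc kept emitting a call.
     */
    static inline rqhead_t *
    sched_getrq(struct runqueues *rq, const pri_t prio)
    {
            return &rq->rq[prio];
    }
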
rmind d1efa8f729 - Avoid calling sched_catchlwp() if the CPUs belong to different processor-sets (sketch below).
- sched_takecpu: check the psid earlier (be stricter).

PR/40419.
2009-01-18 05:07:51 +00:00
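A sketch of the cross-pset guard, with hypothetical field and function names:

    /* Sketch; names are hypothetical. */
    struct cpu { int psid; };           /* psid: processor-set ID */

    static int
    catchlwp_allowed(const struct cpu *from, const struct cpu *to)
    {
            /*
             * Never steal LWPs across processor-sets.  Checking the
             * psid before taking any locks keeps the fast path cheap
             * and the policy strict.
             */
            return from->psid == to->psid;
    }
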
ad 7ad98abc71 - Wrap the C-only contents of sys/cpu.h in _LOCORE (sketch below).
- Add a RESCHED_LAZY flag and use it instead of a bare zero.
2008-12-02 17:57:32 +00:00
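A sketch of the header arrangement: assembly sources define _LOCORE, so C
declarations must be fenced off, and the new flag gives the magic zero a
name. The declarations and values here are illustrative assumptions:

    #ifndef _LOCORE
    struct cpu_info;
    void cpu_need_resched(struct cpu_info *, int);
    #endif /* !_LOCORE */

    /* Reschedule flags; callers previously passed a bare 0. */
    #define RESCHED_LAZY   0x00   /* hypothetical: no IPI, honoured at next AST */
    #define RESCHED_IMMED  0x01   /* hypothetical companion: immediate IPI */
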
rmind 337b081fed - Replace lwp_t::l_sched_info with a union of a pointer and a timeslice (sketch below).
- Change the minimal time-quantum to ~20 ms.
- Accordingly, remove the now-unneeded pool in M2 and the unused sched_lwp_exit().
- Do not increase l_slptime twice for SCHED_4BSD (regression fix).
2008-10-07 09:48:27 +00:00
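A sketch of the union, with hypothetical member names; one word serves both
schedulers, which is what makes the separate M2 pool unnecessary:

    /* Sketch; member names are hypothetical. */
    union l_sched_info {
            void         *si_data;        /* per-scheduler pointer */
            unsigned int  si_timeslice;   /* remaining timeslice, ~20 ms minimum */
    };
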
rmind ae626d791a - Schedule bound threads even if the CPU is offline (sketch below). This might be
revisited later, once a decision is made about what to do with already-bound
threads.
- Do not allow assigning an offline CPU to a processor-set.

Quick fix for PR/39349.
2008-09-30 16:28:45 +00:00
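A sketch of the two rules, with hypothetical flag and field names:

    /* Sketch; all names are hypothetical. */
    struct cpu { int offline; };
    struct lwp { int bound_to_cpu; };

    static int
    cpu_eligible(const struct lwp *l, const struct cpu *ci)
    {
            /* A thread bound to this CPU still runs there even when the
               CPU is offline; unbound work skips offline CPUs. */
            return l->bound_to_cpu || !ci->offline;
    }

    static int
    pset_assign_ok(const struct cpu *ci)
    {
            /* Refuse to place an offline CPU into a processor-set. */
            return !ci->offline;
    }
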
rmind d489642431 sched_migratable: add a KASSERT, since this function can no longer be called
without the lock held. A few cosmetic changes while here.
2008-07-14 01:18:10 +00:00
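A sketch of the assertion, assuming the contract is that the caller holds the
LWP's lock; apart from KASSERT, the names are stand-ins:

    #include <assert.h>
    #define KASSERT(x) assert(x)    /* stand-in for the kernel macro */

    struct lock { int held; };
    struct lwp  { struct lock *l_mutex; };

    static int
    sched_migratable(const struct lwp *l)
    {
            /* Enforce the new contract: callers must hold the LWP lock. */
            KASSERT(l->l_mutex->held);
            /* ... the actual migratability checks would follow ... */
            return 1;
    }
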
christos 1d875fc75f Adjust to separate kcpuset_t and cpuset_t. 2008-06-22 00:06:36 +00:00
rmind 481ae1556f - Add general cpuset macros.
- Use the kcpuset name for kernel-only functions.
- Use cpuid_t to specify CPU IDs.
- Unify all cpuset users.

The API is expected to be stable now (a sketch of its shape follows).
2008-06-16 01:41:20 +00:00
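A sketch of how the unified naming reads; the kcpuset_*/cpuid_t names come
from the commit message, but the implementation below is a toy stand-in,
not the kernel's:

    typedef unsigned int cpuid_t;

    /* Toy fixed-width cpuset standing in for the kernel-only kcpuset_t. */
    typedef struct { unsigned long bits; } kcpuset_t;

    static inline void
    kcpuset_set(kcpuset_t *kcp, cpuid_t ci)
    {
            kcp->bits |= 1UL << ci;
    }

    static inline int
    kcpuset_isset(const kcpuset_t *kcp, cpuid_t ci)
    {
            return (kcp->bits & (1UL << ci)) != 0;
    }
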
christos f30b5785d5 Don't expose struct cpuset; share the l_affinity flag and only allocate it
when we need to. This is not a compatible change, but the syscalls are new
enough that they don't need to be versioned. Approved by rmind.
2008-06-15 20:32:57 +00:00
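A sketch of the opacity pattern: consumers only ever see an incomplete type,
so the layout can change without breaking the ABI. The cpuset_* names follow
the convention above; the layout and semantics are illustrative:

    /* Public header (sketch): the struct is never defined for callers. */
    typedef struct _cpuset cpuset_t;

    cpuset_t *cpuset_create(void);
    void      cpuset_destroy(cpuset_t *);

    /* Private implementation file: the only place the layout exists.
       A thread's affinity set can then be allocated lazily, only when
       affinity is actually used. */
    struct _cpuset { unsigned long bits[4]; };   /* size is illustrative */
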
ad 13cf4bcc55 PR kern/38663 Kernel preemption can't be enabled on x86 because of amd64
FPU handling

Remove ifdef(i386), kernel preemption works on amd64 now.
2008-05-30 12:18:14 +00:00
rmind a68758f8bd sched_idle: initialise 'tci' to NULL; avoids a compiler warning. 2008-05-30 08:31:42 +00:00
rmind 29170d3854 Simplification of running-LWP migration. Removes the double-locking in
mi_switch(); migration for LSONPROC is now performed via the idle loop.
Handles/fixes the on-CPU case in lwp_migrate(), misc.

Closes PR/38169, idea of migration via idle loop by Andrew Doran.
2008-05-29 22:33:27 +00:00
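A sketch of the control flow: rather than locking two run queues inside
mi_switch(), an on-CPU LWP is only tagged with its target, and the move
completes later, one lock at a time. All names except mi_switch()/LSONPROC
are hypothetical:

    /* Sketch; names are hypothetical stand-ins. */
    struct cpu;
    struct lwp {
            int         l_stat;         /* LSONPROC: currently on a CPU */
            struct cpu *l_target_cpu;   /* where the LWP should move */
    };
    #define LSONPROC 7                  /* value is illustrative */

    static void
    migrate_request(struct lwp *l, struct cpu *target)
    {
            if (l->l_stat == LSONPROC) {
                    /* No double-locking here: just tag the LWP; the
                       idle loop finishes the migration later. */
                    l->l_target_cpu = target;
                    return;
            }
            /* ... LWPs in other states can be moved directly ... */
    }
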
ad 5831c8ac63 Pull in sys/evcnt.h. 2008-05-27 22:05:50 +00:00
ad f79b59f700 #ifdef strikes again 2008-05-27 21:36:03 +00:00
ad 4c634c7155 Sigh. The previous change did bad things to MySQL sysbench. Continue stealing
jobs from sched_nextlwp, but also do it in the idle loop. In sched_nextlwp
use trylock; in the idle LWP, try harder.
2008-05-27 19:05:52 +00:00
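A sketch of the two locking modes; mutex_enter()/mutex_tryenter() are real
kernel primitives, the rest of the names are hypothetical:

    #include <stddef.h>

    struct lwp;
    struct kmutex;

    extern void        mutex_enter(struct kmutex *);
    extern int         mutex_tryenter(struct kmutex *);  /* nonzero on success */
    extern struct lwp *steal_locked(struct kmutex *);

    static struct lwp *
    steal_job(struct kmutex *remote_rq_lock, int from_idle)
    {
            if (from_idle) {
                    /* Idle LWP: nothing better to do, so wait. */
                    mutex_enter(remote_rq_lock);
            } else if (!mutex_tryenter(remote_rq_lock)) {
                    /* sched_nextlwp(): never block on a remote run
                       queue; bail out to avoid lock-order trouble. */
                    return NULL;
            }
            return steal_locked(remote_rq_lock);
    }
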
ad 81fa379a0b PR kern/38707 scheduler related deadlock during build.sh
- Fix the performance regression introduced by the workaround by making job
  stealing a lot simpler: if the local run queue is empty, let the CPU enter
  the idle loop. In the idle loop, try to steal a job from another CPU's run
  queue. If we succeed, re-enter mi_switch() immediately to dispatch the job
  (a sketch follows after this list).

- When stealing jobs, consider a remote CPU to have one less job in its
  queue if it's currently in the idle loop. It will dispatch the job soon,
  so there's no point sloshing it about.

- Introduce a few event counters to monitor what's happening with the run
  queues.

- Revert the idle CPU bitmap change. It's pointless considering NUMA.
2008-05-27 14:48:52 +00:00
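A sketch of the simplified flow with the counters attached;
evcnt_attach_dynamic() and EVCNT_TYPE_MISC are the real NetBSD event-counter
API, while the other names are hypothetical:

    #include <sys/evcnt.h>

    struct cpu;
    struct lwp;
    extern int         local_runq_empty(struct cpu *);
    extern struct lwp *try_steal_from_busiest(struct cpu *);
    extern void        mi_switch_again(void);

    static struct evcnt steal_ev;

    void
    runq_counters_init(void)
    {
            /* Visible via vmstat -e, under a (hypothetical) "sched" group. */
            evcnt_attach_dynamic(&steal_ev, EVCNT_TYPE_MISC, NULL,
                "sched", "lwp steals");
    }

    void
    idle_loop_body(struct cpu *ci)
    {
            if (!local_runq_empty(ci))
                    return;
            /* A remote CPU already in its idle loop counts as having one
               job fewer: it will dispatch that job itself shortly. */
            struct lwp *l = try_steal_from_busiest(ci);
            if (l != NULL) {
                    steal_ev.ev_count++;
                    mi_switch_again();   /* dispatch the stolen job now */
            }
    }
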
ad c7615c48c8 PR kern/38707 scheduler related deadlock during build.sh
Fail sched_catchlwp() if mutex_tryenter() on the remote CPU's state fails.
Seems to work around the issue described in this PR.

XXX Stealing jobs from remote CPUs could probably be moved into the idle
loop, making the locking quite a bit simpler.
2008-05-25 23:46:55 +00:00
ad 697d5e2cd4 PR kern/38663 Kernel preemption can't be enabled on x86 because of amd64
FPU handling

Ugly hack until the amd64 fpu handling is working (which should be soon):
enable kernel preemption on i386.
2008-05-21 15:41:03 +00:00
ad ce7cbbfb63 Back out unintentional change. 2008-05-20 19:21:23 +00:00
ad 61270d54f1 If autoloading a module, don't consider the current working directory. 2008-05-20 19:20:38 +00:00
ad 245f0726ac Reduce ifdefs due to MULTIPROCESSOR slightly. 2008-05-19 17:06:02 +00:00
rmind 5f701aa0a3 - Make periodic balancing mandatory (sketch below).
- Fix priority raising in M2 (broken after making runqueues mandatory).
2008-05-19 12:48:54 +00:00
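A sketch of mandatory periodic balancing; callout_init()/callout_reset() are
the real NetBSD callout API, while the remaining names and the period are
hypothetical:

    #include <sys/callout.h>

    extern int hz;                      /* system clock tick rate */

    static struct callout balance_ch;
    static int balance_period;          /* ticks between balancing runs */

    static void
    sched_balance_sketch(void *arg)
    {
            /* ... even out run-queue lengths across CPUs ... */

            /* Re-arm unconditionally: balancing is now mandatory, not
               dependent on a configuration option. */
            callout_reset(&balance_ch, balance_period, sched_balance_sketch, arg);
    }

    static void
    sched_balance_init(void)
    {
            balance_period = hz / 2;    /* hypothetical period: 0.5 s */
            callout_init(&balance_ch, 0);
            callout_reset(&balance_ch, balance_period, sched_balance_sketch, NULL);
    }
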
rmind 5d285c31ff Set the minimal count of LWPs for catching to 1, and the cache-hotness time to ~3 ms 2008-04-30 09:17:12 +00:00
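A sketch of the two tunables; mstohz() is the real NetBSD macro for
converting milliseconds to clock ticks, and the variable names are
assumptions:

    #include <sys/param.h>              /* for mstohz() */

    /* Sketch; variable names are assumptions. */
    static unsigned int min_catch = 1;  /* steal even a lone waiting LWP */
    static unsigned int cacheht_time;   /* how long an LWP stays "cache-hot" */

    static void
    sched_tunables_init(void)
    {
            cacheht_time = mstohz(3);   /* ~3 ms of assumed cache residency */
    }
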
ad ddeba2439c Ignore processes with PK_MARKER set. 2008-04-29 15:51:23 +00:00
rmind 1942fc2548 Split the runqueue management code into a separate file.
OK by <ad>.
2008-04-29 14:35:20 +00:00