Commit Graph

44 Commits

Author SHA1 Message Date
christos
2210ed24e7 provide curthread for dtrace 2015-10-07 00:32:34 +00:00
wiz
87561671c1 defintion -> definition 2014-08-03 19:14:24 +00:00
pooka
4f6fb3bf35 Ensure that the top level sysctl nodes (kern, vfs, net, ...) exist before
the sysctl link sets are processed, and remove redundancy.

Shaves >13kB off of an amd64 GENERIC, not to mention >1k duplicate
lines of code.
2014-02-25 18:30:08 +00:00
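
For context, a minimal sketch of creating one permanent top-level node with
sysctl_createv(9) ahead of link-set processing; the flags and description are
illustrative, not the committed code:

    /* Create the top-level "kern" node once, before the link sets run. */
    sysctl_createv(NULL, 0, NULL, NULL,
        CTLFLAG_PERMANENT,
        CTLTYPE_NODE, "kern", SYSCTL_DESCR("Kernel state"),
        NULL, 0, NULL, 0,
        CTL_KERN, CTL_EOL);
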
rmind
df64447ca6 Remove cpu_queue (and thus eliminate another use of CIRCLEQ) by replacing
its uses with the cpu_infos array.  Extra testing by christos@.
2013-11-24 21:58:38 +00:00
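
A sketch of the pattern this enables: walking every CPU by array index rather
than chasing list links (assuming cpu_infos is indexed by cpu_index() and has
NULL holes for absent CPUs):

    /* Hypothetical per-CPU walk over the cpu_infos array. */
    for (u_int i = 0; i < ncpu; i++) {
        struct cpu_info *ci = cpu_infos[i];
        if (ci == NULL)
            continue;
        /* ... per-CPU work ... */
    }
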
christos
e5d5564a4b remove __unused now that it is used. 2013-10-19 19:22:16 +00:00
martin
8afc72d050 cpu_need_resched(ci, type) might not make use of the type argument - mark
the variable declaration accordingly.
2013-10-19 18:42:05 +00:00
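
On ports whose cpu_need_resched() macro expands without the type argument, the
variable feeding it would otherwise draw a set-but-unused warning; __unused
from <sys/cdefs.h> suppresses it. A schematic example (variable and flag are
illustrative):

    const int type __unused = RESCHED_LAZY;

    cpu_need_resched(ci, type);    /* MD macro may ignore 'type' */
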
yamt
37fc08318c revert rev.1.37 for now.
PR/47634 from Ryo ONODERA.
while I have no idea how this change can break bge,
I don't have the hardware and/or time to investigate right now.
2013-03-12 23:16:31 +00:00
yamt
69f842b1d9 - use scaled calculations for avgcount
- sched_balance: account lwp which is currently running
- sched_balance: skip cpus w/o migratable lwps
2013-03-06 11:25:01 +00:00
christos
a67c3c8971 printflike maintenance. 2013-02-09 00:31:21 +00:00
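
Such maintenance keeps the __printflike(9) annotations in step with the
prototypes so the compiler can type-check format arguments. A generic example
(hypothetical function):

    void sched_log(const char *fmt, ...) __printflike(1, 2);
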
matt
b0d1f89948 Change KASSERT to KASSERTMSG 2012-08-30 02:25:35 +00:00
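
KASSERTMSG(9) behaves like KASSERT(9) but panics with a formatted message,
making failures self-describing. A representative conversion (the condition is
illustrative):

    /* before */
    KASSERT(l->l_cpu == ci);
    /* after */
    KASSERTMSG(l->l_cpu == ci, "lwp %p not on cpu%u", l, cpu_index(ci));
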
para
05f35f5342 change sched_upreempt_pri default to 0, as discussed on tech-kern@;
should improve interactive performance on SMP machines,
as user preemption now happens immediately in the cross-CPU wakeup case
2012-02-23 12:24:05 +00:00
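
A sketch of the gating this tunable controls, assuming the check sits in the
cross-CPU wakeup path (names approximate):

    int sched_upreempt_pri = 0;    /* 0: any user wakeup preempts */

    /* hypothetical wakeup-side check against the target CPU */
    if (pri >= sched_upreempt_pri)
        cpu_need_resched(ci, RESCHED_IMMED);
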
yamt
97e20eb0a1 comments 2011-12-02 12:31:03 +00:00
rmind
501dd321fb Remove LW_AFFINITY flag and fix some bugs in affinity mask handling. 2011-08-07 21:13:05 +00:00
rmind
52b220e91d Add kcpuset(9) - a reworked dynamic CPU set implementation for the kernel.
Suitable for use during early boot.  MD and other implementations
should be replaced with this interface.

Discussed on: tech-kern@
2011-08-07 13:33:01 +00:00
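
A minimal usage sketch of the kcpuset(9) interface (shown against the settled
API; the signatures at the time of this commit may differ slightly):

    kcpuset_t *kcp;

    kcpuset_create(&kcp, true);         /* allocate a zeroed set */
    kcpuset_set(kcp, cpu_index(ci));    /* add one CPU */
    if (kcpuset_isset(kcp, cpu_index(ci))) {
        /* ... CPU is a member ... */
    }
    kcpuset_destroy(kcp);
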
yamt
b1521a3612 remove redundant checks of PK_MARKER. 2010-03-03 00:47:30 +00:00
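
PK_MARKER flags the dummy entries used to hold a position while iterating the
process list, so a walk needs to skip them exactly once, not repeatedly:

    PROCLIST_FOREACH(p, &allproc) {
        if ((p->p_flag & PK_MARKER) != 0)
            continue;
        /* ... examine the real process ... */
    }
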
mrg
efc854cf68 introduce a new function that returns a unique string for each cpu:
char *cpu_name(struct cpu_info *);

and use it when setting up the runq event counters, avoiding an 8 byte
kmem(4) allocation for each cpu.  there are more places the cpuname is
used that can be converted to using this new interface, but that can
and will be done as future work.

as discussed with rmind.
2010-01-13 01:57:17 +00:00
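
A sketch of the consumer side: attaching a run-queue event counter with the
shared name string instead of a per-CPU kmem(4) copy (the counter and field
names are illustrative):

    evcnt_attach_dynamic(&spc->spc_ev_pull, EVCNT_TYPE_MISC, NULL,
        cpu_name(ci), "runqueue pull");
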
rmind
65265dedb7 sched_catchlwp: fix the case when other CPU might see curlwp->l_cpu != curcpu()
while LWP is finishing context switch.  Should fix PR/42539, tested by martin@.
2009-12-30 23:49:59 +00:00
rmind
40cf6f3659 Remove uarea swap-out functionality:
- Addresses the issue described in PR/38828.
- Some simplification in threading and sleepq subsystems.
- Eliminates pmap_collect() and, as a side note, allows pmap optimisations.
- Eliminates XS_CTL_DATA_ONSTACK in scsipi code.
- Avoids a few scans of the LWP list and thus potentially long holds of proc_lock.
- Cuts ~1.5k lines of code.  Reduces amd64 kernel size by ~4k.
- Removes __SWAP_BROKEN cases.

Tested on x86, mips, acorn32 (thanks <mpumford>) and partly tested on
acorn26 (thanks to <bjh21>).

Discussed on <tech-kern>, reviewed by <ad>.
2009-10-21 21:11:57 +00:00
ad
822f68cc07 If DEBUG is enabled, drop kpreempt_pri to zero. This means that every
wakeup will cause a kernel preemption, simulating massive concurrency.

Proposed on tech-kern@.
2009-03-02 21:17:29 +00:00
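
The idea, sketched (symbol names approximate): under DEBUG the preemption
threshold drops to zero, so waking any higher-priority LWP forces a kernel
preemption:

    #ifdef DEBUG
    static pri_t sched_kpreempt_pri = 0;            /* preempt on every wakeup */
    #else
    static pri_t sched_kpreempt_pri = PRI_USER_RT;  /* hypothetical default */
    #endif
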
rmind
3de401ae19 Make sched_getrq() inline (gcc does not optimize it); avoids a function call. 2009-02-17 22:00:14 +00:00
rmind
d1efa8f729 - Avoid calling sched_catchlwp() if CPUs have different processor-sets.
- sched_takecpu: check for psid earlier (be more strict).

PR/40419.
2009-01-18 05:07:51 +00:00
ad
7ad98abc71 - Wrap sys/cpu.h contents in _LOCORE.
- Add a RESCHED_LAZY flag and use it instead of zero.
2008-12-02 17:57:32 +00:00
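
A named flag makes the call sites self-documenting. Schematically (the flag
value is hypothetical):

    #define RESCHED_LAZY    0x01    /* hypothetical value */

    /* before: cpu_need_resched(ci, 0); */
    cpu_need_resched(ci, RESCHED_LAZY);    /* reschedule lazily, no immediate IPI */
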
rmind
337b081fed - Replace lwp_t::l_sched_info with union: pointer and timeslice.
- Change minimal time-quantum to ~20 ms.
- Thus remove unneeded pool in M2, and unused sched_lwp_exit().
- Do not increase l_slptime twice for SCHED_4BSD (regression fix).
2008-10-07 09:48:27 +00:00
rmind
ae626d791a - Schedule bound threads even if the CPU is offline. Might be revisited later,
when a decision is made about what to do with already-bound threads.
- Do not allow assigning an offline CPU to a processor-set.

Quick fix for PR/39349.
2008-09-30 16:28:45 +00:00
rmind
d489642431 sched_migratable: add KASSERT, since this function can no longer be called
without the lock held.  A few cosmetic changes, while here.
2008-07-14 01:18:10 +00:00
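
The new invariant, roughly: callers now hold the LWP's lock, so the function
can assert it instead of tolerating races. A sketch assuming lwp_locked(9):

    static bool
    sched_migratable(const lwp_t *l, struct cpu_info *ci)
    {
        KASSERT(lwp_locked(__UNCONST(l), NULL));

        /* ... affinity and processor-set checks ... */
        return true;    /* placeholder */
    }
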
christos
1d875fc75f Adjust to separate kcpuset_t and cpuset_t. 2008-06-22 00:06:36 +00:00
rmind
481ae1556f - Add general cpuset macros.
- Use kcpuset name for kernel-only functions.
- Use cpuid_t to specify CPU ID.
- Unify all cpuset users.

API is expected to be stable now.
2008-06-16 01:41:20 +00:00
christos
f30b5785d5 Don't expose struct cpuset, share the l_affinity flag and only allocate it
if we need to. This is not a compatible change, but the syscalls are new
enough and they don't need to be versioned. Approved by rmind.
2008-06-15 20:32:57 +00:00
ad
13cf4bcc55 PR kern/38663 Kernel preemption can't be enabled on x86 because of amd64
FPU handling

Remove ifdef(i386), kernel preemption works on amd64 now.
2008-05-30 12:18:14 +00:00
rmind
a68758f8bd sched_idle: initialise 'tci' to NULL, avoids compiler warning. 2008-05-30 08:31:42 +00:00
rmind
29170d3854 Simplification of running-LWP migration. Removes the double-locking in
mi_switch(); migration for LSONPROC is now performed via the idle loop.
Handles/fixes the on-CPU case in lwp_migrate(), misc.

Closes PR/38169, idea of migration via idle loop by Andrew Doran.
2008-05-29 22:33:27 +00:00
ad
5831c8ac63 Pull in sys/evcnt.h. 2008-05-27 22:05:50 +00:00
ad
f79b59f700 #ifdef strikes again 2008-05-27 21:36:03 +00:00
ad
4c634c7155 Sigh. The previous change did bad things to MySQL sysbench. Continue stealing
jobs from sched_nextlwp, but also do it in the idle loop. In sched_nextlwp
use trylock, in the idle LWP try harder.
2008-05-27 19:05:52 +00:00
ad
81fa379a0b PR kern/38707 scheduler related deadlock during build.sh
- Fix the performance regression introduced by the workaround by making job
  stealing a lot simpler: if the local run queue is empty, let the CPU enter
  the idle loop. In the idle loop, try to steal a job from another CPU's run
  queue if we are idle. If we succeed, re-enter mi_switch() immediately to
  dispatch the job.

- When stealing jobs, consider a remote CPU to have one less job in its
  queue if it's currently in the idle loop. It will dispatch the job soon,
  so there's no point sloshing it about.

- Introduce a few event counters to monitor what's happening with the run
  queues.

- Revert the idle CPU bitmap change. It's pointless considering NUMA.
2008-05-27 14:48:52 +00:00
ad
c7615c48c8 PR kern/38707 scheduler related deadlock during build.sh
Fail sched_catchlwp() if mutex_tryenter() on the remote CPU's state fails.
Seems to work around the issue described in this PR.

XXX Stealing jobs from remote CPUs could probably be moved into the idle
loop, making the locking quite a bit simpler.
2008-05-25 23:46:55 +00:00
ad
697d5e2cd4 PR kern/38663 Kernel preemption can't be enabled on x86 because of amd64
FPU handling

Ugly hack until the amd64 fpu handling is working (which should be soon):
enable kernel preemption on i386.
2008-05-21 15:41:03 +00:00
ad
ce7cbbfb63 Back out unintentional change. 2008-05-20 19:21:23 +00:00
ad
61270d54f1 If autoloading a module, don't consider the current working directory. 2008-05-20 19:20:38 +00:00
ad
245f0726ac Reduce ifdefs due to MULTIPROCESSOR slightly. 2008-05-19 17:06:02 +00:00
rmind
5f701aa0a3 - Make periodical balancing mandatory.
- Fix priority raising in M2 (broken after making runqueues mandatory).
2008-05-19 12:48:54 +00:00
rmind
5d285c31ff Set the minimal count of LWPs for catching to 1, and the cache-hotness time to ~3 ms 2008-04-30 09:17:12 +00:00
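
The tunables in question, sketched (names quoted from memory of the run-queue
code):

    static u_int min_catch = 1;    /* steal even a single migratable LWP */
    static u_int cacheht_time;     /* cache-hotness window; ~3 ms, e.g. mstohz(3) */
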
ad
ddeba2439c Ignore processes with PK_MARKER set. 2008-04-29 15:51:23 +00:00
rmind
1942fc2548 Split the runqueue management code into a separate file.
OK by <ad>.
2008-04-29 14:35:20 +00:00