Commit Graph

264 Commits

Author SHA1 Message Date
ad acf9701a7e kpreempt: fix another bug, uintptr_t -> bool truncation. 2009-04-16 21:19:23 +00:00
rmind 523acc7d68 Avoid a few #ifdef KSTACK_CHECK_MAGIC. 2009-04-16 00:17:19 +00:00
yamt f0cdb5ac8d kpreempt: report a failure of cpu_kpreempt_enter; otherwise x86 trap()
loops infinitely.  PR/41202.
2009-04-15 11:44:20 +00:00
rmind f70325ee02 - kpreempt_disabled: constify l.
- A few branch predictions.
- KNF.
2009-03-28 21:43:16 +00:00
ad 2015b56df5 Warn once and no more about backwards monotonic clock. 2009-02-04 21:29:54 +00:00
rmind a487ff992f sched_pstats: add a few checks to catch the problem. OK by <ad>. 2009-01-28 22:59:46 +00:00
ad 74302d0fab Redo previous. Don't count deferrals due to raised IPL. It's not that
meaningful.
2008-12-21 13:26:58 +00:00
ad 82ae73e0b6 Don't increment the 'kpreempt defer: IPL' counter if a preemption is pending
and we try to process it from interrupt context. We can't process it there, and
it will be handled at EOI anyway. This can happen when kernel_lock is released.
2008-12-20 23:06:14 +00:00
ad 24da1f6ca4 PR kern/36183 problem with ptrace and multithreaded processes
Fix the famous "gdb + threads = panic" problem.
Also, fix another revivesa merge botch.
2008-12-13 20:43:38 +00:00
skrll 1041d3756c s/process/LWP/ in comments where appropriate. 2008-11-15 10:54:32 +00:00
smb 2b64d5012d Fix a typo -- a comment started with /m instead of /* .... 2008-10-29 21:35:27 +00:00
skrll f20d7f011d Typo in comment. 2008-10-29 20:18:20 +00:00
wrstuden fc7511b00e Merge wrstuden-revivesa into HEAD. 2008-10-15 06:51:17 +00:00
uwe 4691dacd78 Declare lwp_exit_switchaway() __dead. Add an infinite loop at the end of
lwp_exit_switchaway() to convince gcc that cpu_switchto(NULL, ...) is
really not going to return in that case.  Exposed by gcc4.3.

Reported on tech-kern by Alexander Shishkin.
2008-07-25 00:48:59 +00:00
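In rough form, the pattern is the one below (a sketch, not the actual source; pick_next_lwp() is a hypothetical placeholder for however the next LWP is chosen):

    /*
     * Sketch: declare the function __dead and end it with an infinite
     * loop, so gcc 4.3 accepts that cpu_switchto(NULL, ...) will not
     * return through this frame.
     */
    __dead void
    lwp_exit_switchaway(lwp_t *l)
    {
        lwp_t *next = pick_next_lwp();  /* hypothetical helper */

        /* ... final bookkeeping for the exiting LWP ... */
        (void)cpu_switchto(NULL, next, false);
        for (;;)
            continue;                   /* never reached */
    }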
rmind 73f3b7bb31 Remove outdated comments and the historical CCPU_SHIFT. Make resched_cpu static,
const-ify ccpu.  Note: resched_cpu is not correct and should be revisited.

OK by <ad>.
2008-07-02 19:44:10 +00:00
rmind 61fc86b29b Remove locking of p_stmutex from sched_pstats(), protect l_pctcpu with p_lock,
and make l_cpticks lock-less.  Should fix PR/38296.

Reviewed (slightly different version) by <ad>.
2008-07-02 19:38:37 +00:00
ad 2feabc3836 PR kern/38812 race between lwp_exit_switchaway and exit1/coredump
Move the LWP RUNNING and TIMEINTR flags into the thread-private flag word.
2008-05-31 21:26:01 +00:00
ad e98d2c1016 lwp_exit_switchaway: set l_lwpctl->lc_curcpu = EXITED, not NONE. 2008-05-29 23:29:59 +00:00
rmind 29170d3854 Simplification of running-LWP migration. Removes double-locking in
mi_switch(); migration for LSONPROC is now performed via the idle loop.
Handles/fixes the on-CPU case in lwp_migrate(), misc.

Closes PR/38169, idea of migration via idle loop by Andrew Doran.
2008-05-29 22:33:27 +00:00
ad 208df81d99 Move lwp_exit_switchaway() into kern_synch.c. Instead of always switching
to the idle loop, pick a new LWP from the run queue.
2008-05-27 17:51:17 +00:00
ad 93e0e98369 Take the mutex pointer and waiters count out of sleepq_t: the values can
be or are maintained elsewhere. Now a sleepq_t is just a TAILQ_HEAD.
2008-05-26 12:08:38 +00:00
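After this change the type is essentially just the queue head itself; roughly (a sketch, not the verbatim header):

    /* A sleep queue is now no more than a tail queue of waiting LWPs. */
    #include <sys/queue.h>

    typedef TAILQ_HEAD(sleepq, lwp) sleepq_t;

The mutex pointer now lives with whatever owns the queue, and a waiter count, where needed, can be derived by walking the queue.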
ad 245f0726ac Reduce ifdefs due to MULTIPROCESSOR slightly. 2008-05-19 17:06:02 +00:00
rmind 5f701aa0a3 - Make periodic balancing mandatory.
- Fix priority raising in M2 (broken after making runqueues mandatory).
2008-05-19 12:48:54 +00:00
ad e1df701f0d Avoid unneeded AST faults. 2008-04-30 12:44:27 +00:00
ad 10f791f083 kpreempt: fix a block that should only have compiled as C++... I guess
there is a parsing bug in gcc that let it through.
2008-04-30 00:52:22 +00:00
ad 0329609eb4 Reapply 1.235 which was lost with a subsequent merge. 2008-04-30 00:30:56 +00:00
ad ddeba2439c Ignore processes with PK_MARKER set. 2008-04-29 15:51:23 +00:00
rmind 1942fc2548 Split the runqueue management code into a separate file.
OK by <ad>.
2008-04-29 14:35:20 +00:00
ad 0910800372 Suspended LWPs are no longer created with l_mutex == spc_mutex. Remove
workaround in setrunnable. Fixes PR kern/38222.
2008-04-29 13:56:14 +00:00
ad ca24210d8c EVCNT_TYPE_INTR -> EVCNT_TYPE_MISC 2008-04-28 22:15:47 +00:00
ad b96eb5aec9 Make the preemption switch a __HAVE instead of an option. 2008-04-28 21:17:16 +00:00
martin ce099b4099 Remove clause 3 and 4 from TNF licenses 2008-04-28 20:22:51 +00:00
ad 499f0dfad6 Even if PREEMPTION is defined, disable it by default until any preemption
safety issues have been ironed out. Can be enabled at runtime with sysctl.
2008-04-28 15:38:03 +00:00
ad 4c7ba24481 Add MI code to support in-kernel preemption. Preemption is deferred by
any of the following:

- Holding kernel_lock (indicating that the code is not MT safe).
- Bracketing critical sections with kpreempt_disable/kpreempt_enable.
- Holding the interrupt priority level above IPL_NONE.

Statistics on kernel preemption are reported via event counters, and
where preemption is deferred for some reason, it's also reported via
lockstat. The LWP priority at which preemption is triggered is tuneable
via sysctl.
2008-04-28 15:36:01 +00:00
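Taken together, the deferral conditions amount to a per-LWP/per-CPU predicate along these lines (a minimal sketch; field names such as l_blcnt, l_nopreempt and ci_cpl are illustrative, and the real IPL check is machine-dependent):

    /* Sketch: may the current LWP be preempted in the kernel right now? */
    static bool
    kpreempt_allowed(struct lwp *l)
    {
        if (l->l_blcnt != 0)                /* holds kernel_lock */
            return false;
        if (l->l_nopreempt != 0)            /* kpreempt_disable() section */
            return false;
        if (curcpu()->ci_cpl > IPL_NONE)    /* raised priority level */
            return false;
        return true;
    }

Each failing test above corresponds to one of the deferral event counters mentioned in the message.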
ad 27168d9d58 - Rename crit_enter/crit_exit to kpreempt_disable/kpreempt_enable.
DragonflyBSD uses the crit names for something quite different.
- Add a kpreempt_disabled function for diagnostic assertions.
- Add inline versions of kpreempt_enable/kpreempt_disable for primitives.
- Make some more changes for preemption safety to the x86 pmap.
2008-04-27 11:37:48 +00:00
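The inline pair presumably reduces to a per-LWP nesting count plus a recheck for a deferred preemption on the way out; a hedged sketch of that pattern:

    /* Sketch: nesting count, with a recheck on the outermost enable. */
    static inline void
    kpreempt_disable(void)
    {
        curlwp->l_nopreempt++;
        __insn_barrier();           /* keep the critical section in place */
    }

    static inline void
    kpreempt_enable(void)
    {
        __insn_barrier();
        if (--curlwp->l_nopreempt == 0 && curlwp->l_dopreempt)
            kpreempt(0);            /* service a deferred preemption */
    }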
ad 284c2b9aef Merge proc::p_mutex and proc::p_smutex into a single adaptive mutex, since
we no longer need to guard against access from hardware interrupt handlers.

Additionally, if cloning a process with CLONE_SIGHAND, arrange to have the
child process share the parent's lock so that signal state may be kept in
sync. Partially addresses PR kern/37437.
2008-04-24 18:39:20 +00:00
ad 6d70f903e6 Network protocol interrupts can now block on locks, so merge the globals
proclist_mutex and proclist_lock into a single adaptive mutex (proc_lock).
Implications:

- Inspecting process state requires thread context, so signals can no longer
  be sent from a hardware interrupt handler. Signal activity must be
  deferred to a soft interrupt or kthread.

- As the proc state locking is simplified, it's now safe to take exit()
  and wait() out from under kernel_lock.

- The system spends less time at IPL_SCHED, and there is less lock activity.
2008-04-24 15:35:27 +00:00
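For driver code the first point implies the two-stage pattern sketched below, assuming a hypothetical softc (drv_softc, drv_softintr and the SIGIO delivery are illustrative, not from this commit):

    /* Sketch: defer signal delivery from hard to soft interrupt. */
    struct drv_softc {
        void        *sc_sih;        /* softint handle */
        struct proc *sc_proc;       /* process to notify */
    };

    static void
    drv_softintr(void *arg)         /* thread context */
    {
        struct drv_softc *sc = arg;

        mutex_enter(proc_lock);     /* now a plain adaptive mutex */
        psignal(sc->sc_proc, SIGIO);
        mutex_exit(proc_lock);
    }

    static int
    drv_hardintr(void *arg)         /* hardware interrupt context */
    {
        struct drv_softc *sc = arg;

        softint_schedule(sc->sc_sih);  /* must not touch proc state here */
        return 1;
    }

with sc_sih established once at attach time via softint_establish(SOFTINT_CLOCK, drv_softintr, sc).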
yamt 7ab55e0ff2 sched_print_runqueue: add __printf__ attribute to the 'pr' argument. 2008-04-13 22:54:19 +00:00
yamt 3cd40e9f41 sched_print_runqueue: fix printf formats. 2008-04-13 22:53:31 +00:00
dogcow 7bcb554c5f Since nobody else has fixed it yet: fix the case of GDB && !MULTIPROCESSOR. 2008-04-13 16:22:14 +00:00
ad b60416c0e2 Move the LW_BOUND flag into the thread-private flag word. It can be tested
by other threads/CPUs but that is only done when the LWP is known to be in a
quiescent state (for example, on a run queue).
2008-04-12 17:16:09 +00:00
ad 06e0894e76 Take the run queue management code from the M2 scheduler, and make it
mandatory. Remove the 4BSD run queue code. Effects:

- Pluggable scheduler is only responsible for co-ordinating timeshared jobs.
- All systems run with per-CPU run queues.
- 4BSD scheduler gets processor sets / affinity.
- 4BSD scheduler gets a significant performance boost on some workloads.

Discussed on tech-kern@.
2008-04-12 17:02:08 +00:00
ad 42bc09155e yield: don't drop priority to zero. libpthread doesn't make much use of
this any more, but applications do, and it now pessimizes benchmarks.
2008-04-02 17:38:16 +00:00
ad c42a4d1422 Add a boolean parameter to syncobj_t::sobj_unsleep. If true, we want the
existing behaviour: the unsleep method unlocks and wakes the swapper if
needs be. If false, the caller is doing a batch operation and will take
care of that later. This is kind of ugly, but it's difficult for the caller
to know which lock to release in some situations.
2008-03-17 16:54:51 +00:00
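In outline, the hook becomes something like this (a sketch of the shape, not the verbatim header):

    /* Sketch: the unsleep method gains a "do the wakeup now?" flag. */
    typedef struct syncobj {
        u_int   sobj_flag;
        void  (*sobj_unsleep)(struct lwp *, bool cleanup);
        /* ... changepri, lendpri, owner, ... */
    } syncobj_t;

With cleanup == false, a caller unsleeping a batch of LWPs keeps the lock held and wakes the swapper itself, once, at the end.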
rmind ae5c2ec2bd Work around the case when l_cpu changes to l_target_cpu and causes
locking against oneself. Will be revisited. OK by <ad>.
2008-03-16 23:11:30 +00:00
ad 727b89a296 Add a preemption counter to lwpctl_t, to allow user threads to detect that
they have been preempted.
2008-03-12 11:00:43 +00:00
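User threads can consume such a counter with a plain read/compare; a hedged sketch against the lwpctl(2) interface, assuming the counter field is lc_pctr:

    /* Sketch: detect preemption from user space via lwpctl. */
    #include <stdbool.h>
    #include <lwp.h>

    static bool
    ran_undisturbed(void)
    {
        struct lwpctl *lc;
        int pctr;

        if (_lwp_ctl(LWPCTL_FEATURE_PCTR, &lc) != 0)
            return false;
        pctr = lc->lc_pctr;
        /* ... short region we hope to run without being preempted ... */
        return pctr == lc->lc_pctr; /* unchanged => not preempted */
    }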
ad a108a15f5d Make context switch + syscall counters optionally per-CPU and accumulate
in schedclock() at "about 16 hz".
2008-03-11 02:24:43 +00:00
ad 60c1b8843d Make schedstate_percpu::spc_lwplock an externally allocated item. Remove
the hacks in sparc/cpu.c to reinitialize it. This should be in its own
cache line but that's another change.
2008-02-14 14:26:57 +00:00
rmind 5c71a4d49f Implementation of processor-sets, affinity and POSIX real-time extensions.
Add schedctl(8) - a program to control scheduling of processes and threads.

Notes:
- This is supported only by SCHED_M2;
- The LWP migration mechanism will be revisited;

Proposed on: <tech-kern>. Reviewed by: <ad>.
2008-01-15 03:37:10 +00:00
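From user space the feature surfaces as the pset calls that schedctl(8) is built on; a small hedged usage sketch (standard pset(3) API assumed):

    /* Sketch: create a processor set, give it CPU 1, bind ourselves. */
    #include <sys/pset.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        psetid_t psid;

        if (pset_create(&psid) == -1) {
            perror("pset_create");
            return 1;
        }
        if (pset_assign(psid, 1, NULL) == -1)       /* CPU id 1 */
            perror("pset_assign");
        if (pset_bind(psid, P_PID, getpid(), NULL) == -1)
            perror("pset_bind");
        pset_destroy(psid);
        return 0;
    }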
ad 0664a0459b Start detangling lock.h from intr.h. This is likely to cause short-term
breakage, but the mess of dependencies has been regularly breaking the
build recently anyhow.
2008-01-04 21:17:40 +00:00