Commit Graph

5552 Commits

Author SHA1 Message Date
Pawel Dziepak
683b9bbf07 scheduler: Improve power saving mode, part 2
Consequences of committing & pushing too quickly...
2013-11-20 21:21:31 +01:00
Pawel Dziepak
ecfd444935 scheduler: Improve power saving mode
* Remove the possibility to temporarily disable small task packing.
* When the small task packing target gets overloaded, continue packing
  threads on another core, but avoid migrating the already packed
  ones.

The scheduler still tends to needlessly migrate threads to other cores
when under heavier load, but it is now much better than before. (A rough
sketch of the packing rule follows below.)
2013-11-20 20:52:11 +01:00
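
The packing rule described above can be illustrated with a short sketch.
This is not the scheduler's actual code; CoreEntry, IsOverloaded() and
the function name are made up for illustration:

    // Illustrative sketch: keep packing small tasks on the current target
    // core until it becomes overloaded, then move the packing target, but
    // leave the threads already packed on the old core where they are.
    struct CoreEntry {
        int load;       // current load estimate
        int capacity;   // load limit before the core counts as overloaded
        bool IsOverloaded() const { return load >= capacity; }
    };

    CoreEntry*
    choose_packing_target(CoreEntry* current, CoreEntry* cores, int count)
    {
        if (!current->IsOverloaded())
            return current;             // keep packing on the current core

        // Pick the least loaded non-overloaded core as the new target;
        // no threads are migrated off the old target.
        CoreEntry* next = current;
        for (int i = 0; i < count; i++) {
            if (!cores[i].IsOverloaded()
                    && (next == current || cores[i].load < next->load)) {
                next = &cores[i];
            }
        }
        return next;
    }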
Pawel Dziepak
3eb4224bf6 kernel: Make sure mutex::holder is set to a valid value 2013-11-20 17:53:39 +01:00
Pawel Dziepak
57d5d678f2 x86_64: Fix atomic functions 2013-11-20 17:02:51 +01:00
Pawel Dziepak
c4ac37a35e scheduler: Try to pack IRQs in power saving mode 2013-11-20 12:52:05 +01:00
Pawel Dziepak
9c2e74da04 scheduler: Move mode specific logic to separate files 2013-11-20 09:46:59 +01:00
Pawel Dziepak
e2ff9a2865 scheduler: Rebalance IRQs on overloaded cores 2013-11-18 07:05:35 +01:00
Pawel Dziepak
f14e4567e8 kernel: Use CPU topology to distribute IRQs 2013-11-18 05:37:45 +01:00
Pawel Dziepak
d897a478d7 kernel: Allow reassigning IRQs to logical processors 2013-11-18 04:55:25 +01:00
Pawel Dziepak
955c7edec2 kernel: Measure time spent in interrupt handlers 2013-11-18 01:50:37 +01:00
Pawel Dziepak
6a164daad4 kernel: Track load produced by interrupt handlers 2013-11-18 01:17:44 +01:00
Pawel Dziepak
288a2664a2 scheduler: Remove sSchedulerInternalLock
* pin idle threads to their specific CPUs
* allow scheduler to implement SMP_MSG_RESCHEDULE handler
* scheduler_set_thread_priority() reworked
* at reschedule: enqueue old thread after dequeueing the new one
2013-11-13 05:31:58 +01:00
Pawel Dziepak
72e1b394a4 scheduler: Fix gcc2 build 2013-11-13 00:36:48 +01:00
Pawel Dziepak
5f3a65e578 scheduler: Remove sCorePriorityHeap
sCorePriorityHeap was meant to be a temporary solution anyway. Thread
migration and assignment is now entirely based on core load.
2013-11-13 00:01:02 +01:00
Pawel Dziepak
829f836324 scheduler: Minor cleanup 2013-11-12 04:42:12 +01:00
Pawel Dziepak
8818c942dd scheduler: Add {CPU,Core,Package}Entry constructors 2013-11-12 04:26:32 +01:00
Pawel Dziepak
e1c40769d3 scheduler: Atomically access time and load measurements 2013-11-12 04:23:42 +01:00
Pawel Dziepak
d17b71d6b0 scheduler: Reduce false sharing of per-CPU and per-core data 2013-11-11 21:46:18 +01:00
Pawel Dziepak
a1feba678d kernel/undertaker: Make sure the thread isn't running anymore 2013-11-11 21:04:38 +01:00
Pawel Dziepak
7e1c4534df libroot: Add adaptive mutex implementation 2013-11-08 03:37:30 +01:00
Pawel Dziepak
03fb2d8868 kernel: Remove gSchedulerLock
* Thread::scheduler_lock protects thread state, priority, etc.
* sThreadCreationLock protects thread creation and removal and the list
  of threads in a team.
* Team::signal_lock and Team::time_lock protect the list of threads in
  a team as well.
* Scheduler uses its own internal locking (a sketch of the new
  discipline follows below).
2013-11-08 02:41:26 +01:00
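
A minimal sketch of what the finer-grained locking looks like at a call
site; SpinLocker is the Haiku-style RAII guard, everything else here is
simplified for illustration:

    // Sketch only: the per-thread scheduler_lock replaces the old global
    // gSchedulerLock for changes to thread state and priority.
    struct Thread {
        spinlock scheduler_lock;    // protects state, priority, etc.
        int32 priority;
    };

    void
    set_priority_sketch(Thread* thread, int32 priority)
    {
        SpinLocker locker(thread->scheduler_lock);  // per-thread, not global
        thread->priority = priority;
    }   // lock released when 'locker' goes out of scope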
Pawel Dziepak
72addc62e0 kernel: Introduce Thread::time_lock and Team::time_lock 2013-11-07 22:16:36 +01:00
Pawel Dziepak
3519eb334a kernel: Change Thread::team_lock to rw_spinlock 2013-11-07 04:20:59 +01:00
Pawel Dziepak
defee266db kernel: Add read write spinlock implementation 2013-11-07 04:20:32 +01:00
Pawel Dziepak
20ded5c2eb kernel/posix: Do not use thread_block_locked() 2013-11-07 02:06:42 +01:00
Pawel Dziepak
d3e5752b11 scheduler: Performance mode is actually low latency mode 2013-11-07 01:50:20 +01:00
Pawel Dziepak
83983eaf38 kernel: Remove Thread::alarm 2013-11-07 01:40:02 +01:00
Pawel Dziepak
aa4aca0264 kernel: Protect signal data with Team::signal_lock 2013-11-07 01:32:48 +01:00
Pawel Dziepak
73ad2473e7 Remove remaining unnecessary 'volatile' qualifiers 2013-11-06 00:03:07 +01:00
Pawel Dziepak
273f2f38cd kernel: Improve spinlock implementation
atomic_or() and atomic_and() are not supported by x86 and need to be
emulated using CAS. Use atomic_get_and_set() and atomic_set() instead.
2013-11-05 22:47:18 +01:00
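
To show why the choice of primitive matters, here is a minimal,
self-contained sketch using standard C++11 atomics instead of the
kernel's own API: exchange() plays the role of atomic_get_and_set()
(a single locked XCHG on x86), while a plain store plays the role of
atomic_set():

    #include <atomic>

    struct Spinlock {
        std::atomic<int> fValue{0};

        void Acquire()
        {
            // exchange() maps to one XCHG instruction on x86; an
            // atomic_or()-based acquire would need a CAS loop instead.
            while (fValue.exchange(1, std::memory_order_acquire) != 0) {
                // busy-wait until the holder releases the lock
            }
        }

        void Release()
        {
            // releasing needs only an atomic store
            fValue.store(0, std::memory_order_release);
        }
    };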
Pawel Dziepak
077c84eb27 kernel: atomic_*() functions rework
* No need for the atomically changed variables to be declared as
  volatile.
* Drop support for atomically getting and setting unaligned data.
* Introduce atomic_get_and_set[64](), which works the same way
  atomic_set[64]() used to. atomic_set[64]() does not return the
  previous value anymore (see the sketch below).
2013-11-05 22:32:59 +01:00
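
A short sketch of the changed contract (the exact prototypes are an
assumption based on the description above):

    // Assumed prototypes after the rework:
    //   int32 atomic_get_and_set(int32* value, int32 newValue);
    //   void  atomic_set(int32* value, int32 newValue);
    int32 value = 0;
    int32 old = atomic_get_and_set(&value, 1);  // returns previous value: 0
    atomic_set(&value, 2);                      // no longer returns anything
    int32 now = atomic_get(&value);             // plain atomic read: now == 2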
Pawel Dziepak
e7dba861fd kernel: User{Event, Timer}: Use atomic access where necessary 2013-11-05 20:28:25 +01:00
Pawel Dziepak
f4b088a992 kernel: Protect UserTimers with sUserTimerLock 2013-11-05 05:36:05 +01:00
Pawel Dziepak
4824f7630b kernel: Add sequential lock implementation 2013-11-05 04:16:13 +01:00
Pawel Dziepak
958f6d00aa kernel: Make UserEvent::Fire() work without gSchedulerLock held 2013-11-04 23:53:20 +01:00
Pawel Dziepak
3c819aaa72 kernel: DPC: remove schedulerLocked argument 2013-11-04 23:51:18 +01:00
Pawel Dziepak
11cacd0c13 kernel: Remove thread_block_with_timeout_locked() 2013-11-04 23:45:14 +01:00
Pawel Dziepak
c2763aaffb kernel: Add spinlock for undertaker data 2013-10-31 02:34:09 +01:00
Pawel Dziepak
d8fcc8a825 kernel: Remove B_TIMER_ACQUIRE_SCHEDULER_LOCK flag
The flag's main purpose is to avoid race conditions between the event
handler and cancel_timer(). However, cancel_timer() is safe even without
using gSchedulerLock.

If the event is scheduled to happen on a CPU other than the one that
invokes cancel_timer(), then cancel_timer() either disables the event
before its handler starts executing or waits until the event handler
is done.

If the event is scheduled on the same CPU that calls cancel_timer()
then, since cancel_timer() disables interrupts, the event either has
already been executed before cancel_timer() was called, or it is
already disabled by the time the timer interrupt handler starts running.
2013-10-31 01:49:43 +01:00
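
The two cases reduce to a simplified sketch like the following; this
illustrates the reasoning only, and the queue/handler helpers are
hypothetical, not Haiku's actual cancel_timer():

    // Sketch of the argument above; helper functions are hypothetical.
    bool
    cancel_timer_sketch(timer* event)
    {
        InterruptsLocker locker;    // cancel_timer() disables interrupts

        if (event->cpu == smp_get_current_cpu()) {
            // Same CPU: the timer interrupt cannot preempt us now, so the
            // event either already fired or is still queued and can simply
            // be removed.
            return timer_queue_remove(event);       // hypothetical helper
        }

        // Other CPU: try to disable the event before its handler starts.
        if (timer_queue_remove(event))              // hypothetical helper
            return true;

        // Too late: the handler is already running; wait until it is done.
        while (timer_handler_running(event)) {      // hypothetical check
            // busy-wait; the remote handler finishes shortly
        }
        return false;
    }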
Pawel Dziepak
c8dd9f7780 kernel: Add thread_unblock() and use it where possible 2013-10-30 03:58:36 +01:00
Pawel Dziepak
d70728f54d kernel/lock: Do not use *_locked() functions when not needed 2013-10-30 03:26:13 +01:00
Pawel Dziepak
d54a9e0a41 kernel: Do not use gSchedulerLock when accessing UID and GID
Reads and writes to uid_t and gid_t are atomic anyway. The only real
problem that may happen here is an inconsistent state of the triples
effective_{u, g}id, saved_set_{u, g}id, real_{u, g}id, but the team
locks protect us against that.
2013-10-30 02:57:45 +01:00
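
A sketch of the resulting discipline; the accessors are illustrative
and TeamLocker stands in for the team locks mentioned above:

    // Illustrative only: plain reads need no lock because uid_t/gid_t
    // accesses are atomic; updates that must keep the triple consistent
    // take the team lock.
    uid_t
    team_effective_uid(Team* team)
    {
        return team->effective_uid;     // single-word read, atomic anyway
    }

    void
    team_set_ids(Team* team, uid_t effective, uid_t savedSet, uid_t real)
    {
        TeamLocker locker(team);        // keeps the triple consistent
        team->effective_uid = effective;
        team->saved_set_uid = savedSet;
        team->real_uid = real;
    }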
Pawel Dziepak
1e3cf82d85 scheduler: Manage CPU performance 2013-10-30 00:49:24 +01:00
Pawel Dziepak
22d8248267 kernel: Add support and interface for cpufreq modules 2013-10-30 00:48:07 +01:00
Pawel Dziepak
6d96f462dc scheduler: Use load information to migrate threads 2013-10-28 02:44:46 +01:00
Pawel Dziepak
5e2701a2b5 scheduler: Keep track of the load each thread produces 2013-10-28 01:38:54 +01:00
Pawel Dziepak
dc38e6ca87 scheduler: Use core load to distribute threads 2013-10-28 00:39:16 +01:00
Pawel Dziepak
d80cdf504f scheduler: Keep track of core and logical CPU load 2013-10-27 22:39:56 +01:00
Pawel Dziepak
890ba7415c scheduler: Decide whether to cancel thread penalty 2013-10-27 20:05:20 +01:00
Pawel Dziepak
1df2e75540 scheduler: Increase penalty of waiting threads
The fact that a thread is waiting doesn't mean that it is being nice to
the others. If the thread does wait for a longer time, its penalty will
be cancelled anyway; however, if the thread waits only for a very short
time, do not count that as being nice, since lower priority threads
didn't get much of a chance to run.
2013-10-27 18:14:48 +01:00
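
The policy boils down to a wakeup-time check like this sketch; the
names and the threshold value are illustrative, not the scheduler's
actual code:

    // Illustrative sketch: only a sufficiently long wait counts as being
    // "nice" and cancels the accumulated penalty.
    const bigtime_t kLongWaitThreshold = 10000;     // µs; made-up value

    void
    thread_woke_up_sketch(ThreadData* thread, bigtime_t waitTime)
    {
        if (waitTime >= kLongWaitThreshold) {
            // The thread really waited, so cancelling its penalty is fair.
            thread->CancelPenalty();
        }
        // A very short wait keeps the penalty: lower priority threads had
        // little chance to run in the meantime.
    }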