Commit Graph

27 Commits

Author SHA1 Message Date
Pawel Dziepak
81e04d7b97 kernel: Remove cpu_info::load
This field forces the kernel to track the load of each CPU all the time.
That is not a problem with the current scheduler on multicore systems, but
on single core machines, or with any other future scheduler, this field may
become just an unnecessary burden. It isn't difficult for an application to
compute the CPU load itself when it needs it.
2014-01-03 19:44:57 +01:00
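
Following on that last point, a minimal userland sketch of computing CPU load without the removed field, assuming Haiku's public get_system_info()/get_cpu_info() API and the cpu_info::active_time field from <OS.h>: sample active_time twice and divide the delta by the elapsed wall-clock time.

```cpp
// Sketch: estimate per-CPU load from userland by sampling active_time twice.
// get_system_info(), get_cpu_info() and cpu_info::active_time are Haiku's
// public <OS.h> API; the one-second sampling interval is arbitrary.
#include <OS.h>
#include <stdio.h>
#include <vector>

int main()
{
	system_info sysInfo;
	if (get_system_info(&sysInfo) != B_OK)
		return 1;
	uint32 cpuCount = sysInfo.cpu_count;

	std::vector<cpu_info> before(cpuCount), after(cpuCount);
	bigtime_t startTime = system_time();
	get_cpu_info(0, cpuCount, &before[0]);

	snooze(1000000);	// sample over one second

	bigtime_t endTime = system_time();
	get_cpu_info(0, cpuCount, &after[0]);

	for (uint32 i = 0; i < cpuCount; i++) {
		double load = double(after[i].active_time - before[i].active_time)
			/ double(endTime - startTime);
		printf("cpu %lu: %.1f%%\n", (unsigned long)i, load * 100);
	}
	return 0;
}
```
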
Pawel Dziepak
15a7f2046a kernel: Return CPU load in cpu_info 2013-12-29 22:47:06 +01:00
Pawel Dziepak
1b06228f13 kernel: Propagate scheduler modes to cpu{freq, idle} modules 2013-12-17 23:26:37 +01:00
Pawel Dziepak
2b7ea4cddf kernel: Remove Thread::next_state 2013-11-29 19:31:10 +01:00
Pawel Dziepak
286b341a40 kernel: Merge two occurrences of thread resume code 2013-11-28 14:03:57 +01:00
Pawel Dziepak
03f7d3d1db kernel: Restore logical processor disabling 2013-11-24 22:51:07 +01:00
Pawel Dziepak
308f594e2a kernel, libroot: Make scheduler modes interface public 2013-11-20 23:32:40 +01:00
Pawel Dziepak
9c2e74da04 scheduler: Move mode specific logic to separate files 2013-11-20 09:46:59 +01:00
Pawel Dziepak
288a2664a2 scheduler: Remove sSchedulerInternalLock
* pin idle threads to their specific CPUs
* allow scheduler to implement SMP_MSG_RESCHEDULE handler
* scheduler_set_thread_priority() reworked
* at reschedule: enqueue old thread after dequeueing the new one
2013-11-13 05:31:58 +01:00
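
A hedged pseudocode sketch of the reordered reschedule step from the last point above; the helper names here are illustrative, not the actual Haiku internals.

```cpp
// Illustrative ordering only: dequeue the next thread first, then re-enqueue
// the old one, so a still-ready old thread cannot be re-picked by the very
// reschedule that is switching away from it.
static void
reschedule_sketch(Thread* oldThread)
{
	Thread* nextThread = dequeue_highest_priority_thread();	// hypothetical
	if (oldThread->state == B_THREAD_READY)
		enqueue_in_run_queue(oldThread);	// enqueued after the dequeue
	context_switch(oldThread, nextThread);	// hypothetical
}
```
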
Pawel Dziepak
03fb2d8868 kernel: Remove gSchedulerLock
* Thread::scheduler_lock protects thread state, priority, etc.
* sThreadCreationLock protects thread creation and removal and the list of
  threads in a team.
* Team::signal_lock and Team::time_lock protect the list of threads in a
  team as well.
* Scheduler uses its own internal locking.
2013-11-08 02:41:26 +01:00
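
A hedged sketch of how the split-up locking above might look at a use site; Thread::scheduler_lock is named in the commit message, while SpinLocker and the surrounding code are illustrative assumptions.

```cpp
// Illustrative use of the finer-grained locks: per-thread scheduling state
// is protected by Thread::scheduler_lock instead of a single global lock,
// so unrelated threads no longer contend with each other.
static void
set_thread_priority_sketch(Thread* thread, int32 priority)
{
	SpinLocker locker(thread->scheduler_lock);	// per-thread, not global
	thread->priority = priority;
	// run-queue updates happen under the scheduler's own internal locking
}
```
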
Pawel Dziepak
d3e5752b11 scheduler: Performance mode is actually low latency mode 2013-11-07 01:50:20 +01:00
Pawel Dziepak
978fc08065 scheduler: Remove support for running different schedulers
The simple scheduler behaves exactly the same as the affine scheduler on a
single core. Obviously, the affine scheduler is more complicated and thus
introduces greater overhead, but quite a lot of the multicore logic was
disabled on single core systems in the previous commit.
2013-10-24 02:04:03 +02:00
Pawel Dziepak
cd8d4e39fd kernel: Introduce scheduler modes of operation 2013-10-21 02:17:00 +02:00
Pawel Dziepak
fee8009184 kernel: Add another penalty for CPU bound threads
Each thread has a minimal priority that depends on its static priority.
However, it is still able to starve threads with even lower priority
(e.g. CPU bound threads with a lower static priority). To prevent this,
another penalty is introduced. When the minimal priority is reached, a
penalty of (count mod minimal_priority) is added, where count is the number
of time slices since the thread reached its minimal priority. This prevents
starvation of lower priority threads (since all CPU bound threads may have
their priority temporarily reduced to 1) but preserves the relation between
static priorities: when there are two CPU bound threads, the one with the
higher static priority gets more CPU time.
2013-10-09 20:13:47 +02:00
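
A hedged sketch of the penalty arithmetic described above; the names are illustrative, and `count` is the number of time slices since the thread first hit its minimal priority.

```cpp
// Illustrative computation: once a thread sits at its minimal priority, an
// extra penalty of (count % minimalPriority) is added, so the effective
// priority cycles between minimalPriority and 1. Threads with a higher
// static priority cycle through higher values on average, preserving the
// relation between static priorities.
static int32
effective_priority_sketch(int32 minimalPriority, int32 count)
{
	if (minimalPriority <= 1)
		return 1;	// already at the floor; nothing left to cycle through
	int32 additionalPenalty = count % minimalPriority;
	int32 effective = minimalPriority - additionalPenalty;
	return effective >= 1 ? effective : 1;
}
```
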
Pawel Dziepak
a2bdd2842f kernel: Add scheduler_op for dumping thread data 2013-10-09 02:08:49 +02:00
Ingo Weinhold
24df65921b Merged the signals-merge branch into trunk with the following changes:
* Reorganized the kernel locking related to threads and teams.
* We now discriminate correctly between process and thread signals. Signal
  handlers have been moved to teams. Fixes #5679.
* Implemented real-time signal support, including signal queuing, SA_SIGINFO
  support, sigqueue(), sigwaitinfo(), sigtimedwait(), waitid(), and the addition
  of the real-time signal range. Closes #1935 and #2695.
* Gave SIGBUS a separate signal number. Fixes #6704.
* Implemented <time.h> clock and timer support, and fixed/completed alarm() and
  [set]itimer(). Closes #5682.
* Implemented support for thread cancellation. Closes #5686.
* Moved send_signal() from <signal.h> to <OS.h>. Fixes #7554.
* Lots of smaller, more or less related changes.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@42116 a95241bf-73f2-0310-859d-f6bbb57e9c96
2011-06-12 00:00:23 +00:00
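
Since this merge brought sigqueue(), sigwaitinfo() and the real-time signal range to Haiku, here is a minimal round-trip using only standard POSIX calls; nothing Haiku-specific is assumed.

```cpp
// Queue a real-time signal to ourselves with a payload, then fetch it
// synchronously; this exercises signal queuing and sigwaitinfo().
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int main()
{
	sigset_t set;
	sigemptyset(&set);
	sigaddset(&set, SIGRTMIN);
	sigprocmask(SIG_BLOCK, &set, NULL);	// block it so sigwaitinfo() gets it

	union sigval value;
	value.sival_int = 42;
	if (sigqueue(getpid(), SIGRTMIN, value) != 0) {
		perror("sigqueue");
		return 1;
	}

	siginfo_t info;
	if (sigwaitinfo(&set, &info) == SIGRTMIN)
		printf("got SIGRTMIN, value %d\n", info.si_value.sival_int);
	return 0;
}
```
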
Ingo Weinhold
4535495d80 Merged the signals branch into trunk, with these changes:
* The team and thread kernel structures have been renamed to Team and Thread
  respectively and moved into the new BKernel namespace.
* Several (kernel add-on) sources have been converted from C to C++ since
  private kernel headers are included that are no longer C compatible.

Changes after merging:
* Fixed gcc 2 build (warnings mainly in the scary firewire bus manager).


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@40196 a95241bf-73f2-0310-859d-f6bbb57e9c96
2011-01-10 21:54:38 +00:00
Ingo Weinhold
28d05e026f Make scheduler_reschedule() a no-op until we're ready to start the
scheduler. This avoids the need to use the send_signal_etc() work-around for
resume_thread() during the early kernel initialization. Might fix #5851.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36530 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-04-29 15:10:37 +00:00
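
A hedged sketch of the early-boot guard this implies; the flag name is an assumption.

```cpp
// Illustrative guard: reschedule requests are ignored until the scheduler
// has actually been started, making resume_thread() safe during early
// kernel initialization.
static bool sSchedulerReady = false;	// hypothetical; set once the
										// scheduler is started

void
scheduler_reschedule_sketch()
{
	if (!sSchedulerReady)
		return;	// no-op before the scheduler is running
	// ... normal thread selection and context switch ...
}
```
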
Axel Dörfler
ee0d2be9e4 bonefish+axeld:
* Implemented a slightly more sophisticated version of
  estimate_max_scheduling_latency() that uses a syscall letting the
  scheduler decide.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36170 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-04-11 20:40:58 +00:00
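
estimate_max_scheduling_latency() is part of Haiku's public <OS.h> API; a minimal usage sketch, assuming find_thread(NULL) to name the calling thread:

```cpp
// Ask the kernel (and, per this commit, ultimately the scheduler) for the
// estimated maximum scheduling latency of a thread, in microseconds.
#include <OS.h>
#include <stdio.h>

int main()
{
	bigtime_t latency = estimate_max_scheduling_latency(find_thread(NULL));
	printf("max scheduling latency: %lld us\n", (long long)latency);
	return 0;
}
```
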
Ingo Weinhold
0338371f26 * All scheduler implementations:
- enqueue_in_run_queue() no longer returns whether rescheduling is supposed
    to happen. Instead it sets cpu_ent::invoke_scheduler on the current CPU.
  - reschedule() now handles cpu_ent::invoke_scheduler_if_idle(). No need
    to let all callers do that.
* thread_unblock[_locked]() no longer return whether rescheduling is supposed
  to happen.
* Got rid of the B_INVOKE_SCHEDULER handling. The interrupt hooks really
  can't know, when it makes sense to reschedule or not.
* Introduced scheduler_reschedule_if_necessary[_locked]() functions for
  checking+invoking the scheduler.
* Some semaphore functions (e.g. delete_sem()) invoke the scheduler now, if
  they wake up anything with greater priority.
  I've also tried to add scheduler invocations in the condition variable and
  mutex/rw_lock code, but that actually has a negative impact on performance,
  probably because it causes too much ping-ponging between threads when
  multiple locking primitives are involved.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34657 a95241bf-73f2-0310-859d-f6bbb57e9c96
2009-12-13 21:18:27 +00:00
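
A hedged sketch of the scheduler_reschedule_if_necessary() pattern introduced here; cpu_ent::invoke_scheduler comes from the commit message, get_cpu_struct() is Haiku's accessor for the current CPU's kernel structure, and the rest is illustrative.

```cpp
// Illustrative check-and-invoke helper: enqueueing a thread merely sets
// cpu_ent::invoke_scheduler on the current CPU; callers run this helper at
// a safe point instead of acting on a return value.
static void
scheduler_reschedule_if_necessary_sketch()
{
	cpu_ent* cpu = get_cpu_struct();	// current CPU, interrupts disabled
	if (cpu->invoke_scheduler) {
		cpu->invoke_scheduler = false;
		scheduler_reschedule();	// the scheduler makes the actual decision
	}
}
```
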
Rene Gollent
009ccc2962 anevilyak+mmlr:
* scheduler_enqueue_in_runqueue() now allows the scheduler to return a hint as to whether a reschedule is desirable or not. This is used in a few other places in order to relegate scheduling decisions entirely to the scheduler rather than the priority hacks previously used. There are probably other places in the kernel that could now make use of that information to call reschedule() more intelligently, though.
* Switch over the default scheduler to scheduler_affine().


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@32554 a95241bf-73f2-0310-859d-f6bbb57e9c96
2009-08-21 04:11:40 +00:00
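
A hedged sketch of the caller pattern this hint enables (later replaced by the cpu_ent::invoke_scheduler flag in the 2009-12-13 commit above); the function name follows the commit message, the rest is illustrative.

```cpp
// Illustrative caller: the scheduler returns whether a reschedule is
// desirable, and the caller acts on that hint instead of applying its own
// priority heuristics.
static void
unblock_thread_sketch(Thread* thread)
{
	if (scheduler_enqueue_in_runqueue(thread))
		scheduler_reschedule();	// decision relegated to the scheduler
}
```
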
Ingo Weinhold
227fe7d34a * Scheduler/wait object listener:
- Moved scheduler listening interface to <listeners.h> and added more
    convenient-to-use templatized notification functions.
  - Added a listener mechanism for the wait objects (semaphores, condition
    variables, mutex, rw_lock).
* system profiler:
  - Hopefully fixed locking issues related to notifying the profiler thread
    for good. We still had an inconsistent locking order, since the scheduler
    notification callbacks are invoked with the thread lock held and have to
    acquire the object lock then, while the other callbacks acquired the object
    lock first and as a side effect of ConditionVariable::NotifyOne() acquired
    the thread lock. Now we make sure the object lock is the innermost lock.
  - Track the number of dropped events due to a full buffer.
    _user_system_profiler_next_buffer() returns this count now.
  - When scheduling profiling events are requested, we also listen to wait
    objects and generate the respective profiling events. We send those
    events lazily and cache the info to avoid resending an event for the
    same wait object.
  - When starting profiling, we now generate "thread scheduled" events for
    the already running threads.
  - _user_system_profiler_start(): Check whether the parameters pointer is a
    userland address at all.
  - The system_profiler_team_added event now also contains the team's name.
* Added a sem_get_name_unsafe() returning a semaphore's name. It is "unsafe",
  since the caller has to ensure that the semaphore exists and continues to
  exist as long as the returned name is used.
* Adjusted the "profile" and "scheduling_recorder" according to the system
  profiling changes. The latter now prints the number of dropped events.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@30345 a95241bf-73f2-0310-859d-f6bbb57e9c96
2009-04-23 13:47:52 +00:00
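
A hedged sketch of a listener built on the <listeners.h> interface mentioned above; the hook names are assumptions modeled on the events the commit describes, not necessarily the exact interface.

```cpp
// Illustrative scheduler listener: the kernel calls these hooks on
// scheduling events, letting the system profiler record them.
struct ProfilerListener : SchedulerListener {
	virtual void ThreadEnqueuedInRunQueue(Thread* thread)
	{
		// record a "thread ready" event; per the commit, any lock taken
		// here must be innermost relative to the thread lock
	}

	virtual void ThreadScheduled(Thread* oldThread, Thread* newThread)
	{
		// record a "thread scheduled" event for the profiler buffer
	}
};
```
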
Ingo Weinhold
79257a4ad6 Added a listener mechanism to the scheduler (ATM only for scheduler_simple).
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@30242 a95241bf-73f2-0310-859d-f6bbb57e9c96
2009-04-18 17:24:58 +00:00
Rene Gollent
0296b82ae6 Add several extra scheduler hook functions to allow the scheduler(s) to maintain private housekeeping data on the thread structs. These hooks are called on thread creation/destruction and when prepping a thread for use. Review welcome.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@29721 a95241bf-73f2-0310-859d-f6bbb57e9c96
2009-03-26 00:58:20 +00:00
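
A hedged sketch of what such hooks could look like; the names and signatures here are assumptions based on the commit description.

```cpp
// Illustrative per-thread lifecycle hooks: a scheduler can attach, reset
// and release private housekeeping data on the thread structs.
struct scheduler_thread_hooks {
	void (*on_thread_create)(Thread* thread);	// allocate private data
	void (*on_thread_init)(Thread* thread);		// reset when prepping a thread
	void (*on_thread_destroy)(Thread* thread);	// free private data
};
```
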
Ingo Weinhold
53892c92a0 * Replaced scheduler_remove_from_run_queue() by
scheduler_set_thread_priority(). Setting the thread priority was the
  only situation in which it was used.
* Renamed scheduler.cpp to scheduler_simple.cpp.
* The scheduler functions are no longer called directly. Instead there's
  an operation vector now, which is initialized at kernel init time.
  This allows for picking the most suitable scheduler for the machine
  (e.g. a non-SMP scheduler on a non-SMP machine).


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@28262 a95241bf-73f2-0310-859d-f6bbb57e9c96
2008-10-21 12:37:13 +00:00
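
A hedged sketch of the operation vector described above; the struct layout and instance names are illustrative, following the commit's example of picking a non-SMP scheduler on a non-SMP machine.

```cpp
// Illustrative operation vector: scheduler entry points become function
// pointers chosen once at kernel init instead of direct calls.
struct scheduler_ops_sketch {
	void (*enqueue_in_run_queue)(Thread* thread);
	void (*reschedule)();
	void (*set_thread_priority)(Thread* thread, int32 priority);
};

extern scheduler_ops_sketch gSchedulerSimple;		// hypothetical instances
extern scheduler_ops_sketch gSchedulerSimpleSMP;

static scheduler_ops_sketch* sScheduler;

static void
scheduler_init_sketch(int32 cpuCount)
{
	// pick the most suitable implementation for this machine
	sScheduler = cpuCount > 1 ? &gSchedulerSimpleSMP : &gSchedulerSimple;
	// afterwards: sScheduler->enqueue_in_run_queue(thread); etc.
}
```
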
Ingo Weinhold
020ac56840 * Fixed a bug in the "scheduler" command: the check for when a
thread was unscheduled was incorrect.
* Introduced the _kern_analyze_scheduling() syscall. It requires scheduler
  kernel tracing to be enabled. It uses the tracing entries for a given
  period of time to do an analysis similar to the one the "scheduler"
  debugger command does (i.e. number of runs, run time, latencies,
  preemption times) for each thread. Additionally, the analysis includes,
  for each thread, how long it waited on each locking primitive in total.
* Added kernel tracing for the creation of semaphores and initialization
  of condition variables, mutexes, and rw locks. The enabling macro is
  SCHEDULING_ANALYSIS_TRACING. The only purpose is to provide
  _kern_analyze_scheduling() with more info on the locking primitives
  (the name in particular).


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27304 a95241bf-73f2-0310-859d-f6bbb57e9c96
2008-09-03 15:10:44 +00:00
Axel Dörfler
6cd505cee7 Changed the boot procedure a bit.
Extracted scheduler_init() from start_scheduler() (which is now called scheduler_start()).
Moved scheduler-related function prototypes from thread.h to the new scheduler.h.
Cleanup.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@14518 a95241bf-73f2-0310-859d-f6bbb57e9c96
2005-10-25 16:59:12 +00:00