been removed. That includes CPU disabling and thread pinning, as that becomes
pointless with only one CPU.
* Return a proper reschedule hint on enqueuing a thread, based on the priority
of the current thread vs. the enqueued one (see the sketch after this list).
* Enable dynamic scheduler selection: with one CPU the simple scheduler is
used, otherwise the affine scheduler is selected.
* Removed the scheduler type define, as we now always auto-select it.
* Some cleanup.
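A minimal sketch of the enqueue hint from the first item, assuming a
stripped-down Thread structure and hypothetical helpers (C++):

    // Hypothetical, stripped-down types and helpers -- illustration only.
    struct Thread { int priority; };

    void put_thread_in_run_queue(Thread* thread);
    Thread* get_current_thread();

    // Enqueue the thread and hint whether a reschedule is desirable: only
    // when the new thread outranks the one currently running on this CPU.
    static bool
    simple_enqueue_in_run_queue(Thread* thread)
    {
        put_thread_in_run_queue(thread);
        return thread->priority > get_current_thread()->priority;
    }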
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@32573 a95241bf-73f2-0310-859d-f6bbb57e9c96
* scheduler_enqueue_in_runqueue() now allows the scheduler to return a hint as to whether a reschedule is desirable. This is used in a few other places in order to relegate scheduling decisions entirely to the scheduler rather than to the priority hacks previously used (a caller-side sketch follows this list). There are probably other places in the kernel that could now make use of that information to call reschedule() more intelligently.
* Switch over the default scheduler to scheduler_affine().
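On the caller side this might look roughly like the following sketch;
scheduler_enqueue_in_runqueue() is the function named above, everything else
is assumed:

    struct Thread;

    bool scheduler_enqueue_in_runqueue(Thread* thread);  // returns the hint
    void scheduler_reschedule();

    // Rather than hard-coding priority checks at every call site, simply
    // act on the scheduler's own verdict.
    static void
    make_thread_ready(Thread* thread)
    {
        if (scheduler_enqueue_in_runqueue(thread))
            scheduler_reschedule();
    }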
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@32554 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Keep track of the currently running threads.
* Make use of that info to decide if a thread that becomes ready should preempt
the running thread.
* If we should preempt, we send the target CPU a reschedule message (see the
sketch after this list).
* This preemption strategy makes keeping track of idle CPUs by means of a
bitmap superfluous, so it has been removed.
* Right now only other CPUs are preempted though, not the current one.
* Add missing initialization of the quantum tracking code.
* Do not extend the quantum of the idle thread based on quantum tracking, as
we don't want it to run longer than necessary. Once preemption works
completely, a quantum timer for the idle thread will become unnecessary
anyway.
* Fix the thread stealing code; it missed the last thread in the run queue.
* When stealing, try to steal the highest priority thread that is currently
waiting by taking priorities into account when finding the target run queue.
* Simplify stealing code a bit as well.
* Minor cleanups.
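A sketch of the preemption decision described above, assuming a fixed CPU
count, a per-CPU table of running threads and hypothetical helpers for
enqueueing and for sending the reschedule message:

    #include <cstddef>

    struct Thread { int priority; };

    const int kMaxCPUs = 8;                    // assumed fixed for brevity
    static Thread* sRunningThreads[kMaxCPUs];  // updated on context switch

    void enqueue_in_run_queue(Thread* thread);  // hypothetical helper
    void send_reschedule_message(int cpu);      // hypothetical ICI sender

    static void
    enqueue_and_maybe_preempt(Thread* thread, int currentCPU)
    {
        enqueue_in_run_queue(thread);

        // Find the CPU running the lowest-priority thread. Idle threads
        // have the lowest priority of all, so idle CPUs are found
        // naturally -- which is what makes the idle-CPU bitmap superfluous.
        int targetCPU = -1;
        int lowest = thread->priority;
        for (int i = 0; i < kMaxCPUs; i++) {
            if (sRunningThreads[i] != NULL
                    && sRunningThreads[i]->priority < lowest) {
                lowest = sRunningThreads[i]->priority;
                targetCPU = i;
            }
        }

        // As noted above, only other CPUs are preempted for now.
        if (targetCPU >= 0 && targetCPU != currentCPU)
            send_reschedule_message(targetCPU);
    }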
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@32503 a95241bf-73f2-0310-859d-f6bbb57e9c96
re-insert it at a new place, but by only setting the priority and not the
next_priority field, the thread would actually be enqueued at the same priority
level as before. Didn't cause any real damage, guess it was just an oversight.
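A sketch of the corrected flow; the field names follow the description above,
the helpers are hypothetical:

    struct Thread {
        int priority;
        int next_priority;  // the level actually used when enqueueing
    };

    void remove_from_run_queue(Thread* thread);  // hypothetical helper
    void enqueue_in_run_queue(Thread* thread);   // keys off next_priority

    static void
    set_ready_thread_priority(Thread* thread, int newPriority)
    {
        remove_from_run_queue(thread);
        thread->priority = newPriority;
        // This was the missing assignment: without it the thread was
        // re-inserted at its old priority level.
        thread->next_priority = newPriority;
        enqueue_in_run_queue(thread);
    }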
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@32494 a95241bf-73f2-0310-859d-f6bbb57e9c96
is a syscall iframe.
* User debugger support: Don't call BreakpointManager::PrepareToContinue()
if the thread returns from a syscall. We don't want to skip breakpoints in
that case.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@31223 a95241bf-73f2-0310-859d-f6bbb57e9c96
- Moved the scheduler listening interface to <listeners.h> and added more
convenient-to-use templatized notification functions (see the sketch after
this list).
- Added a listener mechanism for the wait objects (semaphores, condition
variables, mutex, rw_lock).
* system profiler:
- Hopefully fixed the locking issues related to notifying the profiler thread
for good. We still had an inconsistent locking order: the scheduler
notification callbacks are invoked with the thread lock held and then have to
acquire the object lock, while the other callbacks acquired the object lock
first and, as a side effect of ConditionVariable::NotifyOne(), acquired the
thread lock. Now we make sure the object lock is the innermost lock.
- Track the number of dropped events due to a full buffer.
_user_system_profiler_next_buffer() returns this count now.
- When scheduling profiling events are requested, we also listen to wait
objects and generate the respective profiling events. We send those events
lazily and cache the info to avoid resending an event for the same wait
object.
- When starting profiling we now generate "thread scheduled" events for the
already running threads.
- _user_system_profiler_start(): Check whether the parameters pointer is a
userland address at all.
- The system_profiler_team_added event now also contains the team's name.
* Added a sem_get_name_unsafe() returning a semaphore's name. It is "unsafe",
since the caller has to ensure that the semaphore exists and continues to
exist as long as the returned name is used.
* Adjusted the "profile" and "scheduling_recorder" according to the system
profiling changes. The latter now prints the number of dropped events.
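A self-contained sketch of what such templatized notification functions can
look like; this is not Haiku's actual <listeners.h>, and all names are
illustrative:

    #include <list>

    struct Thread;

    struct SchedulerListener {
        virtual ~SchedulerListener() {}
        virtual void ThreadEnqueuedInRunQueue(Thread* thread) = 0;
        virtual void ThreadScheduled(Thread* oldThread,
            Thread* newThread) = 0;
    };

    static std::list<SchedulerListener*> sListeners;

    // One template instead of a hand-written loop per event type: the
    // functor carries the event arguments and invokes the right virtual
    // on each registered listener.
    template<typename Notifier>
    void
    NotifySchedulerListeners(const Notifier& notify)
    {
        for (std::list<SchedulerListener*>::iterator it = sListeners.begin();
                it != sListeners.end(); ++it)
            notify(**it);
    }

    struct ThreadScheduledNotifier {
        Thread* oldThread;
        Thread* newThread;

        ThreadScheduledNotifier(Thread* from, Thread* to)
            : oldThread(from), newThread(to) {}

        void operator()(SchedulerListener& listener) const
            { listener.ThreadScheduled(oldThread, newThread); }
    };

    // Usage: NotifySchedulerListeners(ThreadScheduledNotifier(from, to));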
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@30345 a95241bf-73f2-0310-859d-f6bbb57e9c96
manipulating the queue is a particularly unsuitable place for calling the
listeners, as they wouldn't be allowed to e.g. unblock threads, since that
would screw up the run queue.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@30338 a95241bf-73f2-0310-859d-f6bbb57e9c96
* We started the "main2" thread too late. Since the scheduler was already
started on all CPUs, the idle thread could wait (for a mutex) while spawning
the "main2" thread. This violated the assumption in the scheduler that all
idle threads are always ready or running. We now create the thread while the
kernel still runs single-threaded.
* scheduler_start() is now invoked with interrupts still disabled. We enable
them after the function returns. This prevents scheduler_reschedule() from
potentially being invoked before scheduler_start().
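A sketch of that ordering; the interrupt helpers stand in for the kernel's
actual primitives:

    void scheduler_start();

    int disable_interrupts();            // hypothetical: returns old state
    void restore_interrupts(int state);  // hypothetical

    static void
    start_scheduling()
    {
        int state = disable_interrupts();
        // No timer interrupt can fire in here, so scheduler_reschedule()
        // cannot run before scheduler_start() has completed.
        scheduler_start();
        restore_interrupts(state);
    }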
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@29914 a95241bf-73f2-0310-859d-f6bbb57e9c96
1) We now maintain a runqueue per CPU, rather than a single global shared queue. Idle threads are segregated into their own queue for simplicity.
2) Enqueueing threads is now somewhat more intelligent: if the thread is pinned, it is always enqueued onto the core it is pinned to. Otherwise we enqueue it on whichever CPU it previously ran on, unless it either hasn't run before or that core has been disabled via ProcessController; in those cases we enqueue it on whichever core has been the most idle recently (see the sketch after this list).
3) The above allows various simplifications to thread scheduling: pinned threads and disabled cores are no longer special cases that need to be dealt with. If a CPU has no threads ready, it looks for another CPU to steal a thread from, though that part still needs some tuning, along with enqueueing for load balancing purposes.
The chief aim here is better load balancing and support for soft affinity. At the moment, however, the overall behavior still exhibits some regressions compared to the old scheduler, so it's disabled by default. If you wish to experiment with or debug it, instructions for enabling it can be found in scheduler.cpp. Many thanks to Ingo, Axel and everyone who has helped with code review, advice or testing so far.
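A sketch of the enqueue policy from point 2); the Thread fields and the
helpers are assumptions:

    struct Thread {
        int pinned_to_cpu;  // -1 if the thread is not pinned
        int previous_cpu;   // -1 if the thread has not run yet
    };

    bool cpu_is_disabled(int cpu);  // hypothetical ProcessController state
    int most_idle_cpu();            // hypothetical recent-idleness heuristic

    static int
    choose_target_cpu(Thread* thread)
    {
        // 1. Pinned threads always go to their designated CPU.
        if (thread->pinned_to_cpu >= 0)
            return thread->pinned_to_cpu;

        // 2. Soft affinity: prefer the CPU the thread last ran on, unless
        //    it never ran or that CPU has been disabled.
        if (thread->previous_cpu >= 0
                && !cpu_is_disabled(thread->previous_cpu))
            return thread->previous_cpu;

        // 3. Otherwise pick the CPU that has been the most idle recently.
        return most_idle_cpu();
    }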
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@29643 a95241bf-73f2-0310-859d-f6bbb57e9c96
scheduler_set_thread_priority(). Setting the thread priority was the
only situation in which it was used.
* Renamed scheduler.cpp to scheduler_simple.cpp.
* The scheduler functions are no longer called directly. Instead there's
an operation vector now, which is initialized at kernel init time.
This allows for picking the most suitable scheduler for the machine
(e.g. a non-SMP scheduler on a non-SMP machine).
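A sketch of such an operation vector; the struct layout and all names are
assumptions, not the actual kernel definitions:

    struct Thread;

    struct scheduler_ops {
        void (*enqueue_in_run_queue)(Thread* thread);
        void (*reschedule)();
        void (*set_thread_priority)(Thread* thread, int priority);
    };

    extern scheduler_ops gSimpleSchedulerOps;  // hypothetical
    extern scheduler_ops gSMPSchedulerOps;     // hypothetical

    static scheduler_ops* sScheduler;

    void
    scheduler_init(int cpuCount)
    {
        // Pick the most suitable scheduler for this machine, e.g. a
        // non-SMP scheduler on a non-SMP machine.
        sScheduler = (cpuCount == 1)
            ? &gSimpleSchedulerOps : &gSMPSchedulerOps;
    }

    // Call sites then go through the vector:
    //   sScheduler->enqueue_in_run_queue(thread);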
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@28262 a95241bf-73f2-0310-859d-f6bbb57e9c96
scheduler tracing and scheduler analysis code into separate source
files.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@28253 a95241bf-73f2-0310-859d-f6bbb57e9c96