When the cpufreq module is loaded, we let the scheduler update its policy.
Improve assert report
CoreEntry::GetLoad() could return more than kMaxLoad.
Change-Id: I127f9b3e8062b5996872aae30b4021b9904fa179
Reviewed-on: https://review.haiku-os.org/c/haiku/+/3216
Reviewed-by: Jérôme Duval <jerome.duval@gmail.com>
This field forces the kernel to track each CPU's load all the time. It is
not a problem with the current scheduler on multicore systems, but on
single-core machines, or with any other future scheduler, this field may
become just an unnecessary burden. It isn't difficult for an application
to compute the CPU load by itself when it needs it (see the sketch below).
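
A minimal sketch of such a computation, assuming Haiku's get_cpu_info()
and cpu_info::active_time from <OS.h>: the load is the share of active
time within a sampling interval.

    #include <vector>
    #include <stdio.h>
    #include <OS.h>

    int
    main()
    {
        system_info sysInfo;
        get_system_info(&sysInfo);
        uint32 count = sysInfo.cpu_count;

        // two snapshots of per-CPU active time, one second apart
        std::vector<cpu_info> before(count), after(count);
        bigtime_t start = system_time();
        get_cpu_info(0, count, &before[0]);
        snooze(1000000);
        get_cpu_info(0, count, &after[0]);
        bigtime_t interval = system_time() - start;

        for (uint32 i = 0; i < count; i++) {
            bigtime_t active = after[i].active_time - before[i].active_time;
            printf("cpu %u: %.1f%%\n", (unsigned)i,
                100.0 * active / interval);
        }
        return 0;
    }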
* pin idle threads to their specific CPUs
* allow the scheduler to implement an SMP_MSG_RESCHEDULE handler
* scheduler_set_thread_priority() reworked
* at reschedule: enqueue the old thread after dequeueing the new one
* Thread::scheduler_lock protects thread state, priority, etc.
* sThreadCreationLock protects thread creation and removal and the list of
  threads in a team.
* Team::signal_lock and Team::time_lock protect the list of threads in a
  team as well.
* The scheduler uses its own internal locking (illustrated in the sketch
  below).
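
A hedged illustration of the resulting responsibilities; SpinLocker is
Haiku's RAII spinlock guard from <util/AutoLock.h>, and the helper itself
is hypothetical:

    #include <thread_types.h>
    #include <util/AutoLock.h>

    // Hypothetical helper: thread state/priority are read under the
    // per-thread scheduler lock. Creation/removal would additionally
    // take sThreadCreationLock, and walking a team's thread list
    // Team::signal_lock or Team::time_lock.
    static int32
    get_thread_state(Thread* thread)
    {
        SpinLocker locker(thread->scheduler_lock);
        return thread->state;
    }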
The simple scheduler behaves exactly the same as the affine scheduler with
a single core. Obviously, the affine scheduler is more complicated and thus
introduces greater overhead, but quite a lot of the multicore logic was
disabled on single-core systems in the previous commit.
Each thread has a minimal priority that depends on its static priority.
However, it is still able to starve threads with even lower priority
(e.g. CPU-bound threads with a lower static priority). To prevent this,
another penalty is introduced. When the minimal priority is reached, a
penalty of (count mod minimal_priority) is added, where count is the number
of time slices since the thread reached its minimal priority. This prevents
starvation of lower-priority threads (since all CPU-bound threads may have
their priority temporarily reduced to 1) but preserves the relation between
static priorities: when there are two CPU-bound threads, the one with the
higher static priority gets more CPU time. A sketch of the rule follows.
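
A self-contained sketch of that rule; the minimal-priority formula below
is invented for illustration, only the (count mod minimal_priority)
penalty is from this commit:

    #include <algorithm>

    // hypothetical: derive a thread's minimal priority from its static one
    static int
    minimal_priority(int staticPriority)
    {
        return std::max(staticPriority / 2, 1);
    }

    // penalty: the ordinary CPU-usage penalty; sliceCount: time slices
    // executed since the thread first reached its minimal priority
    static int
    effective_priority(int staticPriority, int penalty, int sliceCount)
    {
        int minimal = minimal_priority(staticPriority);
        int priority = staticPriority - penalty;
        if (priority <= minimal) {
            // Rotate through [1, minimal]: every CPU-bound thread
            // periodically drops to 1, so lower-priority threads get a
            // turn, while a higher static priority still wins on average.
            priority = minimal - sliceCount % minimal;
        }
        return priority;
    }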
* Reorganized the kernel locking related to threads and teams.
* We now discriminate correctly between process and thread signals. Signal
  handlers have been moved to teams. Fixes #5679.
* Implemented real-time signal support, including signal queuing, SA_SIGINFO
  support, sigqueue(), sigwaitinfo(), sigtimedwait(), waitid(), and the
  addition of the real-time signal range (usage sketch after this commit).
  Closes #1935 and #2695.
* Gave SIGBUS a separate signal number. Fixes #6704.
* Implemented <time.h> clock and timer support, and fixed/completed alarm()
  and [set]itimer(). Closes #5682.
* Implemented support for thread cancellation. Closes #5686.
* Moved send_signal() from <signal.h> to <OS.h>. Fixes #7554.
* Lots of smaller, more or less related changes.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@42116 a95241bf-73f2-0310-859d-f6bbb57e9c96
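
A plain-POSIX usage sketch of the queued-signal functionality added here
(nothing Haiku-specific assumed):

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main()
    {
        // block a real-time signal so it can be received synchronously
        sigset_t set;
        sigemptyset(&set);
        sigaddset(&set, SIGRTMIN);
        sigprocmask(SIG_BLOCK, &set, NULL);

        // queue it to ourselves with a payload; unlike classic signals,
        // multiple queued instances are not collapsed into one
        union sigval value;
        value.sival_int = 42;
        sigqueue(getpid(), SIGRTMIN, value);

        // receive it together with its SA_SIGINFO-style payload
        siginfo_t info;
        if (sigwaitinfo(&set, &info) == SIGRTMIN)
            printf("got SIGRTMIN, payload %d\n", info.si_value.sival_int);
        return 0;
    }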
* The team and thread kernel structures have been renamed to Team and Thread
respectively and moved into the new BKernel namespace.
* Several (kernel add-on) sources have been converted from C to C++, since
  they include private kernel headers that are no longer C compatible.
Changes after merging:
* Fixed gcc 2 build (warnings mainly in the scary firewire bus manager).
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@40196 a95241bf-73f2-0310-859d-f6bbb57e9c96
scheduler. This avoids the need to use the send_signal_etc() work-around for
resume_thread() during the early kernel initialization. Might fix #5851.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36530 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Implemented a slightly more sophisticated version of
  estimate_max_scheduling_latency() that uses a syscall to let the
  scheduler decide (usage sketch below).
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36170 a95241bf-73f2-0310-859d-f6bbb57e9c96
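
A usage sketch: the function is declared in <OS.h>, and passing -1 means
the calling thread. A timing-sensitive thread can use the estimate to pad
its deadlines.

    #include <stdio.h>
    #include <OS.h>

    int
    main()
    {
        bigtime_t latency = estimate_max_scheduling_latency(-1);
        printf("worst-case scheduling latency: %lld us\n",
            (long long)latency);

        // e.g. wake up early enough to still meet a deadline
        bigtime_t deadline = system_time() + 100000;
        snooze_until(deadline - latency, B_SYSTEM_TIMEBASE);
        return 0;
    }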
- enqueue_in_run_queue() no longer returns whether rescheduling is supposed
  to happen. Instead it sets cpu_ent::invoke_scheduler on the current CPU.
- reschedule() now handles cpu_ent::invoke_scheduler_if_idle(); no need
  to let all callers do that.
* thread_unblock[_locked]() no longer return whether rescheduling is supposed
to happen.
* Got rid of the B_INVOKE_SCHEDULER handling. The interrupt hooks really
  can't know when it makes sense to reschedule or not.
* Introduced scheduler_reschedule_if_necessary[_locked]() functions for
  checking and invoking the scheduler (sketch after this commit).
* Some semaphore functions (e.g. delete_sem()) invoke the scheduler now, if
they wake up anything with greater priority.
I've also tried to add scheduler invocations in the condition variable and
mutex/rw_lock code, but that actually has a negative impact on performance,
probably because it causes too much ping-ponging between threads when
multiple locking primitives are involved.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34657 a95241bf-73f2-0310-859d-f6bbb57e9c96
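
A hedged sketch of the flag-based pattern this commit moves to; the names
follow the commit, but the body is illustrative rather than the actual
kernel code:

    #include <cpu.h>

    // called with interrupts disabled: instead of every caller
    // propagating a "should reschedule" result, enqueueing marks the
    // current CPU and the check happens in one place
    static void
    reschedule_if_necessary_locked()
    {
        cpu_ent* cpu = get_cpu_struct();
        if (cpu->invoke_scheduler) {
            cpu->invoke_scheduler = false;
            scheduler_reschedule();    // assumed reschedule entry point
        }
    }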
* scheduler_enqueue_in_runqueue() now allows the scheduler to return a hint
  as to whether a reschedule is desirable or not. This is used in a few
  other places in order to relegate scheduling decisions entirely to the
  scheduler rather than the priority hacks previously used (usage sketch
  below). There are probably other places in the kernel that could now make
  use of that information to call reschedule() more intelligently.
* Switched the default scheduler over to scheduler_affine().
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@32554 a95241bf-73f2-0310-859d-f6bbb57e9c96
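
A sketch of a call site acting on the hint, assuming it is returned as a
bool; the wake-up helper is hypothetical:

    #include <kscheduler.h>

    // hypothetical wake-up path: the scheduler decides whether
    // preemption is worthwhile, the caller merely acts on the hint
    static void
    wake_thread(struct thread* thread)
    {
        thread->state = B_THREAD_READY;
        if (scheduler_enqueue_in_runqueue(thread))
            scheduler_reschedule();
    }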
- Moved the scheduler listening interface to <listeners.h> and added
  templatized notification functions that are more convenient to use
  (sketch after this commit).
- Added a listener mechanism for the wait objects (semaphores, condition
variables, mutex, rw_lock).
* system profiler:
- Hopefully fixed the locking issues related to notifying the profiler
  thread for good. We still had an inconsistent locking order: the
  scheduler notification callbacks are invoked with the thread lock held
  and have to acquire the object lock then, while the other callbacks
  acquired the object lock first and, as a side effect of
  ConditionVariable::NotifyOne(), acquired the thread lock. Now we make
  sure the object lock is the innermost lock.
- Track the number of dropped events due to a full buffer.
_user_system_profiler_next_buffer() now returns this count.
- When scheduling profiling events are requested, we also listen to wait
  objects and generate the respective profiling events. We send those
  events lazily and cache the info to avoid resending an event for the
  same wait object.
- When starting profiling, we now generate "thread scheduled" events for
  the already running threads.
- _user_system_profiler_start(): Check whether the parameters pointer is a
userland address at all.
- The system_profiler_team_added event now also contains the team's name.
* Added a sem_get_name_unsafe() returning a semaphore's name. It is "unsafe",
since the caller has to ensure that the semaphore exists and continues to
exist as long as the returned name is used.
* Adjusted "profile" and "scheduling_recorder" according to the system
  profiling changes. The latter now prints the number of dropped events.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@30345 a95241bf-73f2-0310-859d-f6bbb57e9c96
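
A hedged sketch of a scheduler listener; the hook names are assumptions
based on the events described above, with <listeners.h> per the commit:

    #include <listeners.h>

    // counts context switches; hooks run inside the scheduler, so they
    // must be short and must neither allocate nor block
    struct ContextSwitchCounter : SchedulerListener {
        int64 switches;

        ContextSwitchCounter() : switches(0) {}

        virtual void ThreadEnqueuedInRunQueue(struct thread* t) {}
        virtual void ThreadRemovedFromRunQueue(struct thread* t) {}
        virtual void ThreadScheduled(struct thread* oldThread,
            struct thread* newThread) { switches++; }
    };

    // registration/removal (assumed API):
    // scheduler_add_listener(&counter); scheduler_remove_listener(&counter);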
scheduler_set_thread_priority(). Setting the thread priority was the
only situation in which it was used.
* Renamed scheduler.cpp to scheduler_simple.cpp.
* The scheduler functions are no longer called directly. Instead, there is
  now an operation vector, which is initialized at kernel init time.
  This allows for picking the most suitable scheduler for the machine
  (e.g. a non-SMP scheduler on a non-SMP machine); see the sketch after
  this commit.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@28262 a95241bf-73f2-0310-859d-f6bbb57e9c96
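
A hedged sketch of the operation-vector idea; all field and instance
names here are illustrative, not the actual kernel definitions:

    #include <smp.h>

    struct scheduler_ops {
        void (*enqueue_in_run_queue)(struct thread* thread);
        void (*reschedule)(void);
        void (*set_thread_priority)(struct thread* thread, int32 priority);
    };

    extern struct scheduler_ops kSimpleScheduler, kSimpleSMPScheduler;
    static struct scheduler_ops* sScheduler;

    void
    scheduler_init(void)
    {
        // pick the most suitable implementation for this machine,
        // e.g. a non-SMP scheduler when there is only one CPU
        sScheduler = smp_get_num_cpus() > 1
            ? &kSimpleSMPScheduler : &kSimpleScheduler;
    }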
unscheduled was incorrect.
* Introduced the _kern_analyze_scheduling() syscall. It requires scheduler
  kernel tracing to be enabled. It uses the tracing entries for a given
  period of time to do an analysis similar to the one the "scheduler"
  debugger command does (i.e. number of runs, run time, latencies,
  preemption times) for each thread. Additionally, the analysis includes,
  for each thread, the total time it waited on each locking primitive
  (illustrative data shape below).
* Added kernel tracing for the creation of semaphores and initialization
of condition variables, mutexes, and rw locks. The enabling macro is
SCHEDULING_ANALYSIS_TRACING. The only purpose is to provide
_kern_analyze_scheduling() with more info on the locking primitives
(the name in particular).
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27304 a95241bf-73f2-0310-859d-f6bbb57e9c96
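
An illustrative shape for the per-thread wait accounting; std::map is
used purely for exposition, not what the kernel would actually use:

    #include <map>
    #include <utility>
    #include <OS.h>

    // total wait time, keyed by (thread, locking primitive)
    typedef std::pair<thread_id, const void*> WaitKey;
    static std::map<WaitKey, bigtime_t> sTotalWaitTime;

    static void
    account_wait(thread_id thread, const void* waitObject, bigtime_t waited)
    {
        sTotalWaitTime[std::make_pair(thread, waitObject)] += waited;
    }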
Extracted scheduler_init() from start_scheduler() (which is now called
scheduler_start()); a sketch of the split follows.
Moved the scheduler-related function prototypes from thread.h to the new
scheduler.h.
Cleanup.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@14518 a95241bf-73f2-0310-859d-f6bbb57e9c96
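
A hedged sketch of what the split separates; both bodies are illustrative:

    static bool sSchedulerEnabled = false;    // illustrative flag

    // early kernel init: set up run queues, debugger commands, ...
    void
    scheduler_init(void)
    {
    }

    // called once the system is ready to actually switch threads
    void
    scheduler_start(void)
    {
        sSchedulerEnabled = true;
    }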