This field forces the kernel to track the load of each CPU all the time. That
is not a problem with the current scheduler on multicore systems, but on
single core machines or with any other future scheduler this field may
become just an unnecessary burden. It isn't difficult for an application
to compute the CPU load by itself when it needs it.
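A minimal sketch of the arithmetic an application would need; the interface
that actually provides the accumulated active time is deliberately left out
(whatever the system exposes), only the computation is shown:

    #include <cstdint>
    #include <cstdio>

    // Given two samples of the same CPU's accumulated active (busy) time
    // taken intervalUsecs apart, the load is simply the busy fraction of
    // that wall-clock interval.
    static double
    ComputeCPULoad(int64_t activeBefore, int64_t activeAfter,
        int64_t intervalUsecs)
    {
        if (intervalUsecs <= 0)
            return 0.0;
        double load = double(activeAfter - activeBefore) / double(intervalUsecs);
        if (load < 0.0)
            load = 0.0;
        if (load > 1.0)
            load = 1.0;
        return load;
    }

    int
    main()
    {
        // Example: the CPU was busy for 250 ms of a 1 s sampling window.
        printf("load: %.2f\n", ComputeCPULoad(0, 250000, 1000000));
        return 0;
    }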
A bit hackish implementation of a profiler for the scheduler.
Calling SCHEDULER_ENTER_FUNCTION at the beginning of each function isn't nice,
and the use of __PRETTY_FUNCTION__ isn't any better (though both gcc and clang
support it), but it was quick to implement and doesn't lose information
on inlined functions. It's just a tool, not an integral part of the kernel
anyway.
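The general shape of the approach, as a userland sketch rather than the actual
kernel code (the names and the use of the standard library below are made up
for illustration):

    #include <chrono>
    #include <cstdio>
    #include <map>
    #include <string>

    // An RAII guard placed at the top of each function records how much
    // time was spent in it, keyed by the human readable
    // __PRETTY_FUNCTION__ string, so inlined functions still show up
    // under their own names.
    struct FunctionProfile {
        long long calls = 0;
        long long totalNanoseconds = 0;
    };

    static std::map<std::string, FunctionProfile> sProfiles;

    class ProfileGuard {
    public:
        ProfileGuard(const char* function)
            : fFunction(function),
              fStart(std::chrono::steady_clock::now())
        {
        }

        ~ProfileGuard()
        {
            auto elapsed = std::chrono::steady_clock::now() - fStart;
            FunctionProfile& profile = sProfiles[fFunction];
            profile.calls++;
            profile.totalNanoseconds += std::chrono::duration_cast<
                std::chrono::nanoseconds>(elapsed).count();
        }

    private:
        const char* fFunction;
        std::chrono::steady_clock::time_point fStart;
    };

    #define SCHEDULER_ENTER_FUNCTION() \
        ProfileGuard _profileGuard(__PRETTY_FUNCTION__)

    static void
    enqueue_thread()
    {
        SCHEDULER_ENTER_FUNCTION();
        // ... scheduler work ...
    }

    int
    main()
    {
        enqueue_thread();
        for (const auto& entry : sProfiles) {
            printf("%s: %lld call(s), %lld ns\n", entry.first.c_str(),
                entry.second.calls, entry.second.totalNanoseconds);
        }
        return 0;
    }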
Apart from the refactoring, this commit takes the opportunity to remove
unnecessary read locks when choosing a package and a core from the idle lists.
The data structures are accessed in a thread safe way, and it does not really
matter whether the obtained data becomes outdated just after we release the
lock or during our search for the appropriate package/core.
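A simplified illustration of that reasoning, with a fixed array of atomic
per-core load counters standing in for the kernel's actual idle lists:

    #include <atomic>
    #include <cstdint>
    #include <cstdio>

    const int kCoreCount = 4;

    // Stand-in for per-core state: an atomically updated load value.
    static std::atomic<int32_t> sCoreLoad[kCoreCount];

    // Pick the least loaded core without taking any lock. The answer may
    // be stale by the time the caller uses it, but that is no worse than
    // the data going stale right after a read lock is released.
    static int
    choose_core()
    {
        int best = 0;
        int32_t bestLoad = sCoreLoad[0].load(std::memory_order_relaxed);
        for (int i = 1; i < kCoreCount; i++) {
            int32_t load = sCoreLoad[i].load(std::memory_order_relaxed);
            if (load < bestLoad) {
                best = i;
                bestLoad = load;
            }
        }
        return best;
    }

    int
    main()
    {
        sCoreLoad[0] = 70;
        sCoreLoad[1] = 10;
        sCoreLoad[2] = 40;
        sCoreLoad[3] = 90;
        printf("chose core %d\n", choose_core());
        return 0;
    }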
Some SMT implementations (e.g. recent AMD microarchitectures) have
a separate L1d cache for each SMT thread (which AMD decides to call "cores").
This means that we shouldn't move threads to another logical processor too
often, even if it belongs to the same core. We aren't very strict about
this, as that would complicate load balancing, but we try to reduce
unnecessary migrations.
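A rough, hypothetical sketch of the kind of preference this implies (heavily
simplified, not the actual balancing code):

    #include <cstdio>

    struct LogicalCPU {
        int     id;
        int     load;   // arbitrary units
        bool    idle;
    };

    // Prefer the logical CPU the thread last ran on; only migrate to an
    // SMT sibling when the gain clearly outweighs the cost of losing the
    // warm L1d cache.
    static int
    choose_logical_cpu(const LogicalCPU cpus[], int count, int previousCPU,
        int migrationPenalty)
    {
        if (cpus[previousCPU].idle)
            return previousCPU;

        int best = previousCPU;
        for (int i = 0; i < count; i++) {
            if (i == previousCPU)
                continue;
            if (cpus[i].load + migrationPenalty < cpus[best].load)
                best = i;
        }
        return best;
    }

    int
    main()
    {
        LogicalCPU siblings[] = { { 0, 50, false }, { 1, 45, false } };
        // Loads differ by less than the penalty, so the thread stays on
        // logical CPU 0 and keeps its cache warm.
        printf("chose CPU %d\n", choose_logical_cpu(siblings, 2, 0, 20));
        return 0;
    }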
atomic_{get, set}64() are problematic on architectures without a 64 bit
compare and swap.
Also, using a sequential lock instead of atomic access ensures that
reads from cpu_ent::active_time won't require any writes to shared
memory.
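A minimal sketch of the sequential lock technique itself (not the kernel's
primitives), with the 64 bit value split into two 32 bit halves so that no
64 bit atomic operation is needed and readers never write to shared memory:

    #include <atomic>
    #include <cstdint>
    #include <cstdio>

    // The writer makes the counter odd while it updates the halves;
    // readers retry until they observe the same even counter before and
    // after reading.
    static std::atomic<uint32_t> sSequence(0);
    static std::atomic<uint32_t> sActiveTimeLow(0);
    static std::atomic<uint32_t> sActiveTimeHigh(0);

    static void
    set_active_time(uint64_t value)     // single writer assumed
    {
        uint32_t sequence = sSequence.load(std::memory_order_relaxed);
        sSequence.store(sequence + 1, std::memory_order_relaxed);  // odd
        std::atomic_thread_fence(std::memory_order_release);

        sActiveTimeLow.store(uint32_t(value), std::memory_order_relaxed);
        sActiveTimeHigh.store(uint32_t(value >> 32), std::memory_order_relaxed);

        sSequence.store(sequence + 2, std::memory_order_release);  // even
    }

    static uint64_t
    get_active_time()
    {
        uint32_t before, after;
        uint32_t low, high;
        do {
            before = sSequence.load(std::memory_order_acquire);
            low = sActiveTimeLow.load(std::memory_order_relaxed);
            high = sActiveTimeHigh.load(std::memory_order_relaxed);
            std::atomic_thread_fence(std::memory_order_acquire);
            after = sSequence.load(std::memory_order_relaxed);
        } while ((before & 1) != 0 || before != after);

        return (uint64_t(high) << 32) | low;
    }

    int
    main()
    {
        set_active_time(0x1122334455ULL);
        printf("%#llx\n", (unsigned long long)get_active_time());
        return 0;
    }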
The client code is not supposed to change the topology info.
It would also be nice if cpu_topology_node::children were an array of
pointers to const, but that would require several const_casts in the
topology tree generation code, so it's probably not worth it.
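Roughly the trade-off in question, on a made-up, simplified node structure:

    #include <cstdio>

    // The children array remains an array of pointers to non-const nodes
    // so that the tree generation code can fill the tree in without
    // const_casts.
    struct TopologyNode {
        int             id;
        int             childCount;
        TopologyNode**  children;
    };

    int
    main()
    {
        TopologyNode leaf = { 1, 0, nullptr };
        TopologyNode* children[] = { &leaf };
        TopologyNode root = { 0, 1, children };

        // What client code gets to see: a pointer to const, so the node
        // itself cannot be modified through it...
        const TopologyNode* topology = &root;
        printf("root %d has %d child(ren)\n", topology->id,
            topology->childCount);
        // topology->id = 42;           // does not compile

        // ...but because children points at non-const nodes, this still
        // compiles, which is exactly the wrinkle mentioned above.
        topology->children[0]->id = 42;
        return 0;
    }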
Apparently, reading from dr3 is slower than reading from memory
on a cache hit.
Also, depending on the hypervisor configuration, accessing dr3 may cause
a VM exit (and, at least on kvm, it does), which makes it much slower
than a memory access even on a cache miss.
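For comparison, the two ways of fetching such a per-CPU pointer on x86, in
simplified form (gcc/clang inline assembly, not the actual kernel code):

    #include <cstdio>

    struct Thread;

    // Reading dr3 is a privileged instruction, so this variant can only
    // run in ring 0, and a hypervisor may intercept it.
    static inline Thread*
    get_current_thread_dr3()
    {
        Thread* thread;
        asm volatile("mov %%dr3, %0" : "=r"(thread));
        return thread;
    }

    // The alternative: keep the pointer in ordinary memory (per-CPU
    // storage in the real kernel), where fetching it is a plain load
    // that normally hits the cache and never traps.
    static Thread* sCurrentThread;

    static inline Thread*
    get_current_thread_memory()
    {
        return sCurrentThread;
    }

    int
    main()
    {
        // Only the memory-based variant is usable from user space.
        printf("current thread: %p\n",
            static_cast<void*>(get_current_thread_memory()));
        return 0;
    }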