This reverts commit 3fbb24680c.
As I mentioned in #11131, this fix is not correct and merely works around
the problem. The real cause was that arch_debug_call_with_fault_handler
was not working properly, so the fault handler went crazy.
With commit eb92810 that is fixed, so this can be reverted.
If GCC knows what these functions are actually doing, the resulting
code can be optimized better, which is especially noticeable for
invocations of atomic_{or,and}() that ignore the result. Obviously,
everything is also inlined, which improves performance as well.
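A minimal sketch of the idea, assuming GCC's __atomic builtins and
int32_t in place of Haiku's int32; with the definitions visible, a call
that ignores the result can compile down to a single lock-prefixed
instruction instead of a compare-and-exchange loop:

    #include <stdint.h>

    static inline int32_t
    atomic_or(int32_t* value, int32_t operand)
    {
        // Returns the previous value; callers that ignore it let GCC
        // emit a plain "lock or" instead of fetching the old value.
        return __atomic_fetch_or(value, operand, __ATOMIC_SEQ_CST);
    }

    static inline int32_t
    atomic_and(int32_t* value, int32_t operand)
    {
        return __atomic_fetch_and(value, operand, __ATOMIC_SEQ_CST);
    }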
Signed-off-by: Paweł Dziepak <pdziepak@quarnos.org>
When an ARMv7 CPU is detected, immediately turn on the FPU. This allows
us to use vsnprintf in the TRACE call in that function, as our libc is
compiled with floating point support and will trigger a fault if the FPU
is not available.
This lets the boot go further, and crash in mmu_init. Next steps:
* Find why mmu_init is crashing
* Set up some fault handlers, otherwise we call the uboot ones, and
  they are not very helpful. They will also probably not work once the
  mmu is enabled...
This patch makes it possible to inline the rdmsr and wrmsr instructions. The
performance impact shouldn't be significant, since they are used relatively
rarely and wrmsr is usually a serializing instruction, but there is no reason
not to do so.
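A hedged sketch of such inline wrappers using GCC extended asm; the
function names are illustrative, not necessarily the kernel's:

    #include <stdint.h>

    // rdmsr returns the MSR value in edx:eax, selected by ecx.
    static inline uint64_t
    read_msr(uint32_t msr)
    {
        uint32_t low, high;
        asm volatile("rdmsr" : "=a"(low), "=d"(high) : "c"(msr));
        return ((uint64_t)high << 32) | low;
    }

    // wrmsr takes the value in edx:eax and the MSR number in ecx.
    static inline void
    write_msr(uint32_t msr, uint64_t value)
    {
        asm volatile("wrmsr" : : "c"(msr), "a"((uint32_t)value),
            "d"((uint32_t)(value >> 32)));
    }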
The goal of this patch is to amortize the cost of a context switch by making
the compiler aware that the context switch clobbers all registers. Because all
registers need to be saved anyway, there is no additional cost to using
callee-saved registers in the function that does the context switch.
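A hedged sketch of the technique on x86_64, assuming an external
x86_context_switch routine and an arch_thread type (both illustrative);
listing the general-purpose registers as clobbered lets GCC keep nothing
in callee-saved registers across the call:

    static inline void
    context_switch(arch_thread* oldState, arch_thread* newState)
    {
        asm volatile("call x86_context_switch"
            : "+D"(oldState), "+S"(newState)
            :
            : "rax", "rbx", "rcx", "rdx", "r8", "r9", "r10", "r11",
              "r12", "r13", "r14", "r15", "memory", "cc");
        // rsp/rbp are not listed: GCC refuses clobbers of the stack
        // and frame pointers, so the callee must preserve those two.
    }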
Similarly to the previous patch regarding the GDT, this is mostly a rewrite
of the IDT handling code from C to C++. Thanks to constexpr, the IDT is now
entirely generated at compile time.
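A minimal sketch of compile-time gate generation; the packing follows the
x86_64 interrupt-gate format, but the class name and constants are made
up for illustration:

    #include <cstdint>

    struct IDTDescriptor {
        uint32_t fData[4];

        // Pack a 64-bit handler address into an interrupt gate:
        // selector 0x08, present, DPL 0, type 0xe (interrupt gate).
        static constexpr IDTDescriptor Create(uint64_t isrAddress)
        {
            return IDTDescriptor {{
                uint32_t(isrAddress & 0xffff) | (0x08u << 16),
                uint32_t(isrAddress & 0xffff0000u) | 0x8e00u,
                uint32_t(isrAddress >> 32),
                0
            }};
        }
    };

    // Evaluated entirely at compile time.
    constexpr IDTDescriptor kGate
        = IDTDescriptor::Create(0xffffffff80000000ull);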
Virtually no functional change, just rewriting the code from
"C in *.cpp files" to C++. Use of constexpr may be advantageous but
that code is not performance critical anyway.
* Instead of forcing the hash table to use a copy of the key,
  introduce and use the TypeOperation template (sketched below) to
  avoid taking a reference of a reference type (which gcc2 doesn't
  allow).
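A hedged sketch of the TypeOperation idea; the member names are
illustrative. The partial specialization collapses T& so that a "const
reference to the key type" never becomes a reference to a reference:

    template<typename Type>
    struct TypeOperation {
        typedef Type        ValueT;
        typedef const Type& ConstRefT;
    };

    template<typename Type>
    struct TypeOperation<Type&> {
        typedef Type        ValueT;
        typedef Type&       ConstRefT;  // T& stays T&, never T& &
    };

    // A hash table can then take keys as
    // typename TypeOperation<KeyType>::ConstRefT, which is valid even
    // when KeyType is itself a reference type.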
For potential boot volumes with older package states, the respective
item in the boot volume menu now has a sub menu for selecting a state.
The boot loader functionality for this feature is complete -- i.e. the
respective kernel is loaded and the name of the old state is added to
the kernel args -- but kernel packagefs and package daemon support is
still missing.
After load_image() the child thread is suspended and the parent is
expected to resume it later. However, it is possible that the parent
attempts to resume its child after it has been notified that the image
has been loaded, but before the child managed to suspend itself. In such
a case the child would suspend itself after that wake-up attempt and,
consequently, would never be resumed.
To mitigate that problem the flag Thread::going_to_suspend has been
added, which helps synchronize thread suspension and continuation in a
way similar to how "traditional" thread blocking is performed. This
means that the child should behave in the following manner: set its
going_to_suspend flag, notify the parent (i.e. any thread that may want
to resume it), acquire its scheduler_lock, and suspend itself only if
the going_to_suspend flag is still set. The parent should follow this
pattern: clear the going_to_suspend flag of the thread that is about to
be resumed, acquire that thread's scheduler_lock, and enqueue the thread
in a run queue if it is suspended. Both sides are sketched below.
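A hedged sketch of the handshake, with the surrounding kernel API
simplified and notify_parent() a hypothetical placeholder:

    // Child side, after the image has been loaded:
    void
    child_suspend_self(Thread* thread)
    {
        thread->going_to_suspend = true;
        notify_parent(thread);  // hypothetical: wake any resumer

        SpinLocker locker(thread->scheduler_lock);
        if (thread->going_to_suspend) {
            // No resume attempt sneaked in; it is safe to suspend.
            thread->state = B_THREAD_SUSPENDED;
            locker.Unlock();
            scheduler_reschedule();
        }
    }

    // Parent side, resuming the child:
    void
    parent_resume(Thread* thread)
    {
        thread->going_to_suspend = false;  // cancel a pending suspension

        SpinLocker locker(thread->scheduler_lock);
        if (thread->state == B_THREAD_SUSPENDED)
            scheduler_enqueue_in_run_queue(thread);
    }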
Thanks to Oliver for reporting the bug and identifying what causes it.
Most of the actual UserEvent work is done in a DPC, so that we don't have
to care about the limitations of the context in which UserEvent::Fire()
is invoked. This requires appropriate management of the lifetime of
UserEvent instances, to make sure that the DoDPC() method is always
called on a valid object.
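A hedged sketch of that lifetime management, assuming Haiku-style
BReferenceable and DPCCallback/DPCQueue interfaces (details simplified):
Fire() takes a reference that DoDPC() gives back, so the object cannot
go away before the deferred work has run:

    class UserEvent : public BReferenceable, private DPCCallback {
    public:
        void Fire()
        {
            AcquireReference();  // keep *this alive until DoDPC() runs
            fQueue->Add(this);   // fQueue: a DPCQueue, assumed set up
        }

    private:
        virtual void DoDPC(DPCQueue* queue)
        {
            // ... the actual event work, in an unrestricted context ...
            ReleaseReference();  // may delete this
        }

        DPCQueue* fQueue;
    };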
* Add isb just because.
* pdziepak pointed out that ARMv5 and before
had different barrier support.
* pdziepak also mentioned that dsb was too strong
for __sync_synchronize
* On ARMv6 or older, we do a simulated dsb.
* Move __sync_synchronize into thread.c in libroot
  and use the new arch_atomic.h dsb/dmb defines
  (see the sketch after this list).
* Gets arm @bootstrap-raw to end of bootstrap.
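A hedged sketch of what those defines can look like, assuming an
__ARM_ARCH__-style macro (the real arch_atomic.h may differ): real
dsb/dmb instructions on ARMv7, CP15 operations as the "simulated"
barriers on ARMv6, and a dmb rather than the too-strong dsb for
__sync_synchronize:

    #if __ARM_ARCH__ >= 7
    #define dsb() asm volatile("dsb" : : : "memory")
    #define dmb() asm volatile("dmb" : : : "memory")
    #else
    /* ARMv6: barriers are CP15 writes */
    #define dsb() \
        asm volatile("mcr p15, 0, %0, c7, c10, 4" : : "r"(0) : "memory")
    #define dmb() \
        asm volatile("mcr p15, 0, %0, c7, c10, 5" : : "r"(0) : "memory")
    #endif

    void
    __sync_synchronize(void)
    {
        dmb();  // a full dsb would be stronger than the builtin requires
    }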
* Don't assume verdex, as it isn't clear this was
  occurring.
* Make an educated guess on HAIKU_BOOT_PLATFORM
based on provided board (but still allow it to
be overridden)
* Error out if user doesn't populate
HAIKU_BOOT_PLATFORM or enters an unknown board
name.
* You need to add "-sHAIKU_BOOT_BOARD=xxx" to
your jam to build for the proper ARM device.
* Rename beagle to beagleboneblk as per the
documentation.
* Use atomic_get_and_set for return value
* Atomics are no longer volatile
* Add missing arch_cpu_pause stub (sketched below)
* Move arch_cpu_idle to arch_cpu header to match
other architectures
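A hedged sketch of such a stub; no dedicated pause/yield instruction is
assumed, so an empty asm serves as a compiler barrier that keeps spin
loops from being optimized away:

    static inline void
    arch_cpu_pause(void)
    {
        asm volatile("" : : : "memory");
    }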
For non-US keyboards, the extra 102nd/105th key is used to reach \. But
we also need it to report | when shifted (this is the key to the left
of "enter").
This affects only USB keyboards. Thanks to gordoncjp for reporting!
A UserEvent can be fired from scheduler_reschedule(), i.e. while holding
the current thread's scheduler_lock. If the current thread goes to sleep
and during reschedule one of its timers sends a signal to it, then
scheduler_enqueue_in_run_queue() attempts to acquire its scheduler_lock
again, resulting in a deadlock.
There was also a minor issue with both scheduler_reschedule() and
scheduler_enqueue_in_run_queue() acquiring the current CPU's scheduler
mode lock.
* Set max cpu to 1 for PPC until atomic functions are finished
* We have atomic functions inline in the kernel and assembly
code in libroot post-scheduler merge... isn't that a lot of
duplication?
Add boot loader debug menu option "Save syslog from previous session
during boot". If enabled (defaults to true), the previous session's
debug syslog data is copied to a separate buffer and passed to the
kernel, which writes it back to the file /var/log/previous_syslog.
As long as Haiku still boots, this should now be the most convenient way
to retrieve the output from a kernel crash.
The previous implementation, based on the actual load of each core and
the share each thread has in that load, turned out to be very
problematic when balancing load on very heavily loaded systems (i.e.
when threads consuming all available CPU time outnumber the logical
CPUs).
The new approach is to estimate how much load a thread would produce if
it had all CPU time to itself. By summing such load estimates for each
thread assigned to a given core, we get a rank that carries much more
information than the simple actual core load; a sketch follows below.
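A hedged sketch of the ranking; the types and fields are illustrative,
not the scheduler's actual data structures:

    #include <cstdint>

    struct ThreadData {
        int32_t fLoad;       // estimated standalone load, 0..kMaxLoad
        ThreadData* fNext;
    };

    // A core's rank is the sum of its threads' standalone estimates.
    // Unlike the measured core load, it keeps growing past saturation,
    // so oversubscribed cores still compare meaningfully.
    int32_t
    compute_core_rank(ThreadData* threadList)
    {
        int32_t rank = 0;
        for (ThreadData* t = threadList; t != nullptr; t = t->fNext)
            rank += t->fLoad;
        return rank;
    }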
This field forces the kernel to track the load of each CPU all the
time. That is not a problem with the current scheduler on multicore
systems, but on single core machines, or with any other future
scheduler, this field may become just an unnecessary burden. It isn't
difficult for an application to compute the CPU load by itself when it
needs it.