* Add isb just because.
* pdziepak pointed out that ARMv5 and before
had different barrier support.
* pdziepak also mentioned that dsb was too strong
for __sync_synchronize
* On ARMv6 or older, we do a simulated dsb.
* Move __sync_synchronize into thread.c in libroot
  and use the new arch_atomic.h dsb/dmb defines
  (sketched below).
* Gets arm @bootstrap-raw to end of bootstrap.
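Roughly, the dsb/dmb defines and the relocated __sync_synchronize()
could look like this (a sketch only; the actual arch_atomic.h names and
the architecture test are approximations):

    #if __ARM_ARCH__ >= 7
    #define dmb() asm volatile("dmb" : : : "memory")
    #define dsb() asm volatile("dsb" : : : "memory")
    #else
    /* ARMv6: barriers are CP15 c7 operations */
    #define dmb() asm volatile("mcr p15, 0, %0, c7, c10, 5" : : "r"(0) : "memory")
    #define dsb() asm volatile("mcr p15, 0, %0, c7, c10, 4" : : "r"(0) : "memory")
    #endif

    void
    __sync_synchronize(void)
    {
        // a full dsb would be stronger than required here; dmb suffices
        dmb();
    }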
* Don't assume verdex, as it isn't clear this assumption
  was ever intended.
* Make an educated guess on HAIKU_BOOT_PLATFORM
based on provided board (but still allow it to
be overridden)
* Error out if the user doesn't populate
  HAIKU_BOOT_PLATFORM or enters an unknown board
  name.
* You need to add "-sHAIKU_BOOT_BOARD=xxx" to
  your jam invocation to build for the proper ARM
  device (example below).
* Rename beagle to beagleboneblk as per the
documentation.
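For example, to build the bootstrap image for a BeagleBone Black
(board name as renamed above; target as used in this series):

    jam -q -sHAIKU_BOOT_BOARD=beagleboneblk @bootstrap-raw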
* Use atomic_get_and_set for return value
* Atomics are no longer volatile
* Add missing arch_cpu_pause stub (sketch below)
* Move arch_cpu_idle to arch_cpu header to match
other architectures
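The missing stub can be a simple no-op for now (a sketch; the real
header may differ):

    static inline void
    arch_cpu_pause(void)
    {
        // no architecture-specific pause/yield instruction used yet
    }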
For non-US keyboards, the extra 102nd/105th key is used to reach \. But
we also need it to report | when shifted (this is the key to the left
of "enter").
This affects only USB keyboards. Thanks to gordoncjp for reporting!
UserEvent can be fired from scheduler_reschedule(), i.e. while holding
the current thread's scheduler_lock. If the current thread goes to
sleep and during reschedule one of its timers sends a signal to it,
then scheduler_enqueue_in_run_queue() attempts to acquire its
scheduler_lock again, resulting in a deadlock.
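The deadlock cycle, spelled out:

    scheduler_reschedule()                  // holds thread->scheduler_lock
      timer fires, sends a signal to the rescheduling thread
        scheduler_enqueue_in_run_queue()
          acquires thread->scheduler_lock   // already held -> deadlock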
There was also a minor issue with both scheduler_reschedule() and
scheduler_enqueue_in_run_queue() acquiring the current CPU's scheduler
mode lock.
* Set max cpu to 1 for PPC until atomic functions are finished
* We have atomic functions inline in the kernel and assembly
code in libroot post-scheduler merge... isn't that a lot of
duplication?
Add boot loader debug menu option "Save syslog from previous session
during boot". If enabled (defaults to true), the previous session's
debug syslog data is copied to a separate buffer and passed to the
kernel, which writes it back to the file /var/log/previous_syslog.
As long as Haiku still boots, this should now be the most convenient way
to retrieve the output from a kernel crash.
The previous implementation, based on the actual load of each core and
the share each thread has in that load, turned out to be very
problematic when balancing load on very heavily loaded systems (i.e.
more threads consuming all available CPU time than there are logical
CPUs).
The new approach is to estimate how much load a thread would produce
if it had all CPU time to itself. Summing these estimates for all
threads assigned to a given core yields a rank that carries much more
information than the simple actual core load.
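Expressed as code, the ranking boils down to something like this (a
sketch only; type and member names are illustrative, not the actual
scheduler ones):

    // a core's rank: sum of the load each assigned thread would
    // produce if it alone owned a CPU
    int32
    core_load_rank(const CoreEntry* core)
    {
        int32 rank = 0;
        for (int32 i = 0; i < core->threadCount; i++)
            rank += core->threads[i]->estimatedStandaloneLoad;
        return rank;
    }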
This field forces the kernel to track each CPU's load all the time. It
is not a problem with the current scheduler on multicore systems, but
on single core machines, or with any other future scheduler, this
field may become just an unnecessary burden. It isn't difficult for an
application to compute the CPU load by itself when it needs it.
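For instance, a sketch of how an application could compute a CPU's
load itself, assuming get_cpu_info() and cpu_info::active_time from
Haiku's OS.h:

    #include <OS.h>

    // fraction of the interval the given CPU spent busy (0.0 .. 1.0)
    static float
    cpu_load(uint32 cpu, bigtime_t interval)
    {
        cpu_info before, after;
        bigtime_t start = system_time();
        get_cpu_info(cpu, 1, &before);
        snooze(interval);
        get_cpu_info(cpu, 1, &after);
        bigtime_t elapsed = system_time() - start;
        return (float)(after.active_time - before.active_time) / elapsed;
    }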
atomic_{get, set}64() are problematic on architectures without 64 bit
compare and swap.
Also, using sequential lock instead of atomic access ensures that
any reads from cpu_ent::active_time won't require any writes to shared
memory.
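A sequential lock reader retries instead of writing, along these lines
(sketch; the primitive names are assumptions, not necessarily the
kernel's):

    bigtime_t
    get_active_time(cpu_ent* cpu)
    {
        bigtime_t activeTime;
        uint32 count;
        do {
            count = acquire_read_seqlock(&cpu->active_time_lock);
            activeTime = cpu->active_time;
            // retry if a writer was active in the meantime
        } while (!release_read_seqlock(&cpu->active_time_lock, count));
        return activeTime;
    }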
The client code is not supposed to change the topology info.
It would also be nice if cpu_topology_node::children were an array of
pointers to const, but that would require several const_casts in the
topology tree generation code, so it's probably not worth it.
Apparently, reading from dr3 is slower than reading from memory
with cache hit.
Also, depending on the hypervisor configuration, accessing dr3 may
cause a VM exit (and, at least on kvm, it does), which makes it much
slower than a memory access even when there is a cache miss.
Add get_safemode_option_early() and get_safemode_boolean_early() to get
safemode options before the kernel heap has been initialized. They use a
simplified parser.
* VMTranslationMap:
- Add DebugPrintMappingInfo(): Given a virtual address it is supposed
to print the paging structure information for that address. To be
implemented by derived classes.
  - Add DebugGetReverseMappingInfo(): Given a physical address, it is
    supposed to find all virtual addresses mapped to it. To be
    implemented by derived classes (both hooks are sketched after this
    list).
* X86VMTranslationMapPAE: Implement the new methods
DebugPrintMappingInfo() and DebugGetReverseMappingInfo().
* Add KDL command "mapping". It supports both virtual address lookups
and reverse lookups.
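The new hooks look roughly like this (signatures reconstructed from
the description above; the callback type is an assumption):

    class ReverseMappingInfoCallback {
    public:
        virtual ~ReverseMappingInfoCallback() {}
        // called for each virtual address found; return true to abort
        virtual bool HandleVirtualAddress(addr_t virtualAddress) = 0;
    };

    class VMTranslationMap {
    public:
        // print the paging structure info for the virtual address
        virtual bool DebugPrintMappingInfo(addr_t virtualAddress) = 0;
        // find all virtual addresses mapped to the physical address
        virtual bool DebugGetReverseMappingInfo(
            phys_addr_t physicalAddress,
            ReverseMappingInfoCallback& callback) = 0;
    };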
* VMAddressSpace: Add randomizingEnabled property.
* VMUserAddressSpace: Randomize addresses only when randomizingEnabled
property is set.
* create_team_arg(): Check whether the team's environment contains
  "DISABLE_ASLR=1", and set the team's address space property
  randomizingEnabled accordingly in load_image_internal() and
  exec_team().
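So disabling ASLR for a single program is as simple as:

    $ DISABLE_ASLR=1 ./my_program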
* Create new interface for cpuidle modules (similar to the cpufreq
interface)
* Generic cpuidle module is no longer needed
* Fix and update Intel C-State module
It's a browser for the system package content, where entries can be
selected to blacklist them. The selected entries are removed from the
packagefs instance in the boot loader, so that e.g. selected drivers
won't be picked up. The paths are also added to the safe mode driver
settings and will be interpreted when the system packagefs instance is
mounted by the kernel.
* Make Menu and MenuItem polymorphic.
* MenuItem:
- Make SetMarked() virtual, so it can be overridden.
- Add SetSubmenu() and Supermenu().
- Delete the submenu in the destructor.
* Menu:
  - Add Entered()/Exited() hooks. They frame the time the user
    navigates the menu or any of its submenus. The hooks allow
    subclasses to populate their item list dynamically (see the sketch
    after this list).
- Add SortItems().
* Update boot loader menu copyright text to include 2013, now that it
  is almost over. :-)
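A subclass using the new hooks might look like this (a sketch; only
the hook names come from this commit, everything else is
illustrative):

    class PackagesMenu : public Menu {
    public:
        virtual void Entered()
        {
            // build the item list on demand, each time the user
            // enters the menu
            _PopulateItems();
        }

        virtual void Exited()
        {
            // delete the dynamically created items again
            _ClearItems();
        }
    };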
* pin idle threads to their specific CPUs
* allow scheduler to implement SMP_MSG_RESCHEDULE handler
* scheduler_set_thread_priority() reworked
* at reschedule: enqueue old thread after dequeueing the new one
* Thread::scheduler_lock protects thread state, priority, etc.
* sThreadCreationLock protects thread creation and removal and list of
threads in team.
* Team::signal_lock and Team::time_lock protect list of threads in team
as well.
* Scheduler uses its own internal locking.
* The UNMAP command is theoretically much faster, as it can get many block
ranges instead of just a single range.
* Furthermore, the ATA TRIM command matches it much more closely.
* Therefore, fs_trim_data now gets an array of ranges, and we use SCSI UNMAP
to trim.
* Updated BFS code to collect array ranges to fully support the new
fs_trim_data possibilities.
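The reworked structure looks roughly like this (a sketch; the actual
field names may differ slightly):

    typedef struct fs_trim_data {
        uint32 range_count;
        uint64 trimmed_size;   // set on return: bytes actually trimmed
        struct {
            uint64 offset;     // in bytes
            uint64 size;
        } ranges[1];           // range_count entries follow
    } fs_trim_data;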
* No need for the atomically changed variables to be declared as
volatile.
* Drop support for atomically getting and setting unaligned data.
* Introduce atomic_get_and_set[64]() which works the same as
atomic_set[64]() used to. atomic_set[64]() does not return the
previous value anymore.
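In short (sketch of the semantic change):

    int32 value = 0;

    // atomic_set() used to return the previous value; now it doesn't,
    // so use atomic_get_and_set() where the old value is needed
    int32 old = atomic_get_and_set(&value, 1);  // returns 0
    atomic_set(&value, 2);                      // returns void now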
The flag's main purpose is to avoid race conditions between the event
handler and cancel_timer(). However, cancel_timer() is safe even
without using gSchedulerLock.
If the event is scheduled to happen on a CPU other than the one that
invokes cancel_timer(), then cancel_timer() either disables the event
before its handler starts executing or waits until the event handler
is done.
If the event is scheduled on the same CPU that calls cancel_timer()
then, since cancel_timer() disables interrupts, the event has either
already been executed before cancel_timer() runs, or it is already
disabled by the time the timer interrupt handler starts running.
* Replace ports list mutex with R/W-lock.
* Move team port list protection to separate array of mutexes.
Relieve contention on sPortsLock by removing Team::port_list from its
protected items. With this, set_port_owner() only needs to acquire the
sPortsLock for reading.
* Add another hash table holding the ports by name. Used by find_port()
so it doesn't have to iterate over the list anymore.
* Use slab-based memory allocator for port messages. sPortQuotaLock was
acquired on every message send or receive and was thus another point
of contention. The lock is not necessary anymore.
* The lock for the port hashes and Port::lock are no longer acquired
  in a nested fashion, to reduce the chances of blocking other
  threads.
* Make operations concurrency-safe by adding an atomically accessed
  Port::state which provides linearization points for port creation
  and deletion (see the sketch after this list). Both operations are
  now divided into logical and physical parts, the logical part just
  updating the state and the physical part adding/removing the port
  to/from the port hash and team port list.
* set_port_owner() is the only remaining function which still locks
Port::lock and one or two of sTeamListLock[] in a nested fashion.
  Since it needs to move the port from one team list to another and
  change Port::owner, there's no way around that.
* Ports are now reference counted to make accesses to already-deleted
ports safe.
* Should fix #8007.
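The states providing the linearization points can be summarized like
this (a sketch; the actual enumerator names are assumptions):

    // logical lifetime of a port, switched with atomic operations
    enum port_state {
        PORT_STATE_UNUSED,   // not (yet) logically created
        PORT_STATE_ACTIVE,   // logically created, visible to lookups
        PORT_STATE_DELETED   // logically deleted; removal from hash
                             // and team list may still be pending
    };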
The simple scheduler behaves exactly the same as the affine scheduler
with a single core. Obviously, the affine scheduler is more complicated
and thus introduces greater overhead, but quite a lot of the multicore
logic has been disabled on single core systems in the previous commit.
There is a global heap of cores, where the key is the highest priority
of the threads running on that core. Moreover, for each core there is
a heap of the logical processors on that core, where the key is the
priority of the currently running thread.
The per-core heap is used for load balancing among the logical
processors on that core. The global heap is used for the initial
decision of where to put a thread (note that the algorithm that makes
this decision is not complete yet).
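Schematically (all names illustrative):

    struct CPUEntry {
        int32 runningPriority;  // priority of the running thread
    };

    struct CoreEntry {
        // per-core heap: logical processors, keyed by the priority of
        // the thread each one currently runs
        Heap<CPUEntry, int32> cpuHeap;
    };

    // global heap: cores, keyed by the highest priority running on each
    Heap<CoreEntry, int32> gCoreHeap;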
The simple scheduler is used when we do not have to worry about cache
affinity (i.e. a single core with or without SMT, or multicore with
all cache levels shared).
When we replace gSchedulerLock with more fine-grained locking, the
affine scheduler should also be chosen when the logical CPU count is
high (regardless of cache).
In SMP systems the simple scheduler will be used only when all logical
processors share all levels of cache and the number of CPUs is low.
In such systems we do not have to care about cache affinity and
the contention on the lock protecting the shared run queue is low. A
single run queue makes load balancing very simple.
Kernel support for yielding to all (including lower priority) threads
has been removed. POSIX sched_yield() remains unchanged.
If a thread really needs to yield to everyone it can reduce its
priority to the lowest possible and then yield (it will then need to
manually return to its previous priority upon continuing).
Each thread has a minimal priority that depends on its static priority.
However, a thread at its minimal priority is still able to starve
threads with an even lower priority (e.g. CPU bound threads with lower
static priority). To prevent this, another penalty is introduced: when
the minimal priority is reached, a penalty of (count mod
minimal_priority) is added, where count is the number of time slices
since the thread reached its minimal priority. This prevents
starvation of lower priority threads (since all CPU bound threads may
have their priority temporarily reduced to 1) but preserves the
relation between static priorities - when there are two CPU bound
threads, the one with the higher static priority gets more CPU time.
This adds the -mapcs-frame compiler flag for ARM to have "stable"
stack frames, adds support to the kernel for dumping stack crawls,
and initial support for iframes. There's much more functionality
to unlock in KDL, but this already makes debugging a lot more
comfortable.
Since both platforms can boot the same kernel we must accept either
arg, so we make sure they are identical for now.
TODO: use a union or KMessage maybe?
* Mostly useful for virtualization at the moment. Works in QEMU.
* Can be enabled by safemode settings/menu.
* Please note that on real hardware x2APIC normally requires the VT-d
  interrupt remapping feature, which we don't support yet.
- Instead of implicitly registering and unregistering a service
  instance on construction/destruction, DefaultNotificationService
  now exports explicit Register()/Unregister() calls (sketched below),
  which subclasses are expected to call when they're ready.
- Adjust all implementing subclasses. Resolves an issue with deadlocks
when booting a DEBUG=1 build.
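Subclasses now do something along these lines (a sketch; only
Register()/Unregister() come from this change, the subclass is
illustrative):

    status_t
    MyService::Init()
    {
        // register only once the service is actually ready to notify,
        // instead of implicitly in the constructor
        return Register();
    }

    MyService::~MyService()
    {
        Unregister();
    }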
* Add "bool kernel" parameter to vfs_entry_ref_to_path(), so it can be
specified for which I/O context the entry ref shall be translated.
* _user_entry_ref_to_path(): Use the calling team's I/O context instead
  of the kernel's. Fixes the bug that in a chroot the syscall would
  return a path pointing outside the chroot.
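The changed signature, approximately (the surrounding parameters are
reconstructed and may not match exactly):

    status_t vfs_entry_ref_to_path(dev_t device, ino_t inode,
        const char* leaf, bool kernel, char* path, size_t pathLength);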
Currently there are two generators. The fast one is the same one the
scheduler is using. The standard one uses the same algorithm as
libroot's rand(). Should there be a need for a more cryptographically
secure PRNG, MD4 or MD5 might be good candidates.
Placing the commpage and team user data somewhere at the top of the
user accessible virtual address space prevents these areas from
conflicting with ELF images that need to be mapped at an exact address
(in most cases: runtime_loader).
This patch introduces randomization of the commpage position. From now
on the commpage table contains offsets from the beginning of the
commpage to the particular commpage entry. Similarly, the addresses of
symbols in the ELF memory image "commpage" are just offsets from the
beginning of the commpage.
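Resolving an entry therefore becomes base plus offset (a sketch; names
illustrative):

    extern addr_t* sCommpageTable;  // name illustrative

    static addr_t
    commpage_entry_address(addr_t commpageBase, uint32 entryIndex)
    {
        // the table stores offsets from the randomized commpage base,
        // not absolute addresses
        return commpageBase + sCommpageTable[entryIndex];
    }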
This patch also updates KDL so that commpage entries are recognized and shown
correctly in stack trace. An update of Debugger is yet to be done.
Set the execute disable bit for any page that belongs to an area with
neither B_EXECUTE_AREA nor B_KERNEL_EXECUTE_AREA set.
In order to take advantage of the NX bit in 32 bit protected mode, PAE
must be enabled. Thus, from now on it is also enabled when the CPU
supports the NX bit.
vm_page_fault() takes an additional argument which indicates whether
the page fault was caused by an illegal instruction fetch.
The physical memory map area was not included in the kernel virtual
address space range (it was below KERNEL_BASE). This caused problems
if an I/O operation took place on physical memory mapped there (the
bad address error seen in #9547 was occurring in lock_memory_etc()).
Changed KERNEL_BASE and KERNEL_SIZE to cover the area and added a null
area that covers all of it. Also changed X86VMTranslationMap64Bit to
handle large pages in Query(), as the physical map area uses large
pages.
* Added the aforementioned functions.
* create_area_etc() now takes a guard size parameter.
* The thread_info::stack_base/end range now refers to the usable range
only.
Since we're using the multi-part uImage format, we can add the FDT as
a separate "blob" in the uImage, if the used U-Boot version is not
"FDT enabled".
This is used, for example, for our Verdex target. Currently I've got a
local hack in the platform/u-boot/Jamfile; I'm looking into pulling in
the FDT files and a proper Jam setup to do that properly...
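With U-Boot's mkimage the FDT simply becomes another part of the multi
image, along these lines (file names and addresses illustrative):

    mkimage -A arm -O linux -T multi -C none -a 0x81000000 \
        -e 0x81000000 -n "Haiku" -d haiku_loader.bin:verdex.dtb haiku.ub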
This detects everything up to ARMv6 right now. Need to check more
recent ARM ARMs for ARMv7 detection.
The detected details get passed on to the kernel, which can use the
pre-detected info for selecting the right page table format and such.
The removal of Axel's copyright was done after agreement with Axel at
BeGeistert: for files that were copy/pasted from the x86 arch and then
had their implementation fully replaced, removal of the original
copyright holder is allowed, since their actual code is gone ;)
This is to make sure all ARM platforms will benefit from planned work on this
MMU/CPU code. The less code duplicated, the better.
Compile-tested for all supported ARM platforms
This also implements the fault handler correctly now, and cleans up the
exception handling. Seems a lot more stable now, no unexpected panics or
faults happening anymore.
* The only implementation that would accept more than 2 TB was the one in
scsi_disk. But even that one was limited to 63 TB.
* Now there is a new utility function devfs_compute_geometry_size() which
does it correctly for sizes up to 2^64 which should be good enough for
quite some time :-)
* This fixes bug #8992.
* For now let's include the same fields in platform_kernel_args
  as in the OF version.
* This allows linking the kernel.
Later on we should allow supporting more than a single boot platform,
to have a single kernel per arch.
The lowest 4 bits of the MSR serve as a hint to the hardware to
favor performance or energy saving. 0 means a hint preference for
highest performance, while 15 corresponds to the maximum energy
savings. A value of 7 translates into a hint to balance performance
with energy savings.
The default reset value of the MSR is 0. If the BIOS doesn't
initialize the MSR, the hardware will run in the performance state.
This patch initializes the MSR with a value of 7, balancing
performance and energy savings.
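The initialization amounts to something like this (a sketch; the MSR
index is 0x1b0, but the constant name and helpers are approximations
of the x86 kernel code):

    #define IA32_MSR_ENERGY_PERF_BIAS 0x1b0

    uint64 bias = x86_read_msr(IA32_MSR_ENERGY_PERF_BIAS);
    if ((bias & 0xf) == 0) {
        // BIOS left the reset value (0 = highest performance); hint
        // the hardware to balance performance and energy savings
        x86_write_msr(IA32_MSR_ENERGY_PERF_BIAS,
            (bias & ~(uint64)0xf) | 7);
    }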
Signed-off-by: Fredrik Holmqvist <fredrik.holmqvist@gmail.com>
Renamed {32,64}/int.cpp to {32,64}/descriptors.cpp, which now contain
functions for GDT and TSS setup that were previously in arch_cpu.cpp,
as well as the IDT setup code. These get called from the init functions
in arch_cpu.cpp, rather than having a bunch of ifdef'd chunks of code
for 32/64.
Reused the x86 arch_user_debugger.cpp, with a few minor changes to
make the code work for both 32 and 64 bit. Something isn't quite
working right: if a breakpoint is hit, the kernel will hang. Other
than that everything appears to work correctly.