* Fixes sharing semantics, so non-shared semaphores in non-shared
memory do not become shared after a fork (see the sketch after this
list).
* Adds two new system calls: _user_mutex_sem_acquire/release(),
which reuse the user_mutex address-hashed wait mechanism.
* Named semaphores continue to use traditional sem_id semaphores.
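A quick illustration of the fixed semantics (a hypothetical test, not part
of the commit): an unnamed semaphore created with pshared == 0 in ordinary,
non-shared memory must not keep acting as a cross-process semaphore after
fork().

#include <semaphore.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int
main()
{
	sem_t* sem = (sem_t*)malloc(sizeof(sem_t));
	sem_init(sem, 0 /* pshared: process-private */, 0);

	if (fork() == 0) {
		// With the fix the child only modifies its own copy-on-write
		// copy; this post must not be visible to the parent.
		sem_post(sem);
		_exit(0);
	}

	wait(NULL);
	int value = -1;
	sem_getvalue(sem, &value);
	printf("value in parent: %d (expected 0)\n", value);

	sem_destroy(sem);
	free(sem);
	return 0;
}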
If it was already determined that the memory is within the kernel
stack, a simple memcpy is enough.
This allows capturing kernel stack traces in situations where a fault
handler cannot be installed (i.e. where one is already installed).
The concept of an entry point in COFF differs from ELF: in COFF, the
entry point is a "descriptor" (a pointer) to the actual start code.
So we patch the entry point address when calling objcopy.
Now my old Performa 5400/180 actually starts the loader correctly \o/
* We don't change the data cache (and other) settings; it is
interesting to know their state on each platform.
* Not used by default, as it needs to be called after
serial init in U-Boot
* The changes for pi2 support led to the virtual addresses overlapping
with the page table again on the beagle, because the kernel address
space overlaps with the identity-mapped physical RAM. Try to find a
memory range in a way that will work in both cases.
The stack base and end addresses are stored in TLS slots that are
prepared when enabling stack traces and filled in lazily on use for
each thread. This avoids the need to call get_thread_info() to get
these values.
Also simplifies the code somewhat due to proper frame skipping support.
It can be used to get a stack trace of the current thread. Note that
this works by walking frame pointers and will not produce anything
useful if an application is compiled with the frame pointers omitted.
The stack base and end addresses have to be provided as arguments and
are used to check that the frame pointers fall within that range. These
values are thread specific and can be retrieved with get_thread_info().
No other sanity checks (like checking for loops in the linked list) are
done.
This is a simplified rewrite of the stack trace code from the kernel
debugger.
As this code is common to x86 and x86_64 but is not generic across
architectures, I introduced x86_common as a directory to put such
sources.
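A minimal sketch of the frame-pointer walk described above, assuming the
usual x86 frame layout (saved frame pointer at fp[0], return address at
fp[1]); names and bounds checks are illustrative, not the actual
implementation:

#include <stddef.h>
#include <stdint.h>

typedef uintptr_t addr_t;	// stand-in for the kernel's addr_t

static size_t
capture_stack_trace(addr_t framePointer, addr_t stackBase, addr_t stackEnd,
	addr_t* returnAddresses, size_t maxDepth)
{
	size_t depth = 0;
	while (depth < maxDepth && framePointer >= stackBase
		&& framePointer + 2 * sizeof(addr_t) <= stackEnd) {
		addr_t* frame = (addr_t*)framePointer;
		addr_t returnAddress = frame[1];	// return address pushed by the call
		if (returnAddress == 0)
			break;
		returnAddresses[depth++] = returnAddress;
		// follow the saved frame pointer; no loop detection, matching
		// the "no other sanity checks" note above
		framePointer = frame[0];
	}
	return depth;
}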
Extend the get_nearest_symbol_at_address() private runtime_loader
export to include imageName and exactMatch arguments.
The imageName holds the SONAME of the image, if available, which cannot
necessarily be extracted from the image path.
Whether there was an exact match, i.e. whether the symbol together with
its size contains the address, is now returned in exactMatch.
This adds libroot_guarded.so to the HaikuDevel package. It is the same
as libroot_debug with the debug heap swapped out for the guarded heap.
The guarded heap has some useful features that make it desirable to use
while having the disadvantage of a large memory and address space
overhead which make it unusable in some situations. Therefore the
guarded heap cannot simply replace the debug heap but should still be
made available. As the heap init needs to happen even before
environment variables are available, the heap to use cannot be chosen
dynamically.
Exposing them through their own libraries is the next best thing.
When enabled (using heap_debug_dump_allocations_on_exit(true) or
MALLOC_DEBUG=e) this causes a dump of all remaining allocations when
libroot_debug is unloaded. It uses terminate_after to be called as
late as possible.
When combined with alloc stack traces this makes for a nice, if a bit
crude, leak checker. Note that a lot of allocations usually remain
even at that stage due to statically, lazily and globally allocated
stuff from the various system libraries where it isn't necessarily
worth the overhead to free them when the program terminates anyway.
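For illustration, a minimal program using the dump (the function name is
from the commit; the prototype below is assumed, and setting MALLOC_DEBUG=e
achieves the same without any code change):

#include <stdlib.h>

// Assumed prototype; the real declaration lives in libroot_debug's
// private malloc_debug.h header.
extern "C" void heap_debug_dump_allocations_on_exit(bool enabled);

int
main()
{
	heap_debug_dump_allocations_on_exit(true);

	void* leaked = malloc(128);	// never freed: shows up in the exit dump
	(void)leaked;
	return 0;
}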
When configured to do so (using heap_debug_set_stack_trace_depth(depth)
or MALLOC_DEBUG=s<depth>) the guarded heap now captures stack traces on
alloc and free.
A crash due to hitting a guard page or an already freed page now dumps
these stack traces. In the case of use-after-free one can therefore see
both where the allocation was done and where it was freed.
Note that there is a hardcoded maximum stack trace depth of 50 and that
the alloc stack trace takes away space from the free stack trace which
uses up the rest of that maximum.
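For example, a use-after-free like the following (hypothetical snippet)
crashes on the protected page, and with MALLOC_DEBUG=s8 the resulting dump
shows both where the buffer was allocated and where it was freed:

#include <stdlib.h>
#include <string.h>

int
main()
{
	char* buffer = (char*)malloc(64);
	strcpy(buffer, "hello");
	free(buffer);

	// With the guarded heap this write hits an already freed, protected
	// page and faults right here instead of silently corrupting memory.
	buffer[0] = 'X';
	return 0;
}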
The get_stack_trace syscall generates a stack trace using the kernel
debugging facilities and copies the resulting return address array to
the preallocated buffer from userland. It is only possible to get a
stack trace of the current thread.
The lookup_symbol syscall can be used to look up the symbol and image
name corresponding to an address. It can be used to resolve symbols
from a stack trace generated by the get_stack_trace syscall. Only
symbols of the current team can be looked up. Note that this uses
the symbol lookup of the kernel debugger which does not support lookup
of all symbols (static functions are missing for example).
This is meant to be used in situations where more elaborate stack trace
generation, as done in the userland debugging helpers, is not possible
due to constraints.
For it to be available we build malloc_debug in C++11 mode when not
using GCC2. Note that max_align_t is not in the std namespace in GCC4
versions prior to GCC 4.9. The extra "using namespace std" is there to
be forward compatible once we update.
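In other words (sketch): pre-4.9 compilers only provide the global
::max_align_t, later ones also provide std::max_align_t, and unqualified
use together with the using-directive compiles either way.

#include <cstddef>	// ::max_align_t; std::max_align_t from GCC 4.9 on

using namespace std;	// forward compatibility for std::max_align_t

static const size_t kMaxAlignment = alignof(max_align_t);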
These were here for debugging purposes, as this is often a sign of
inconsistencies. However, for USB disks this is a normal occurrence
when someone yanks out the device without unmounting first.
Make sure we log these cases though, as it still helps debugging.
Fix sponsored by http://www.izcorp.com
This allows for something similar to what was implemented in 217f090,
but makes it optional and configurable.
The MALLOC_DEBUG environment variable now can take "a<size>" to set
the default alignment to the specified size. Note that not all
alignments may be supported depending on the heap implementation.
This reverts commit 217f090f9e.
At least for the guarded heap this completely defeats the purpose. If
software requires a certain alignment it should request it using
memalign explicitly instead of assuming it.
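For comparison, requesting the alignment explicitly where it is actually
needed (illustrative):

#include <malloc.h>
#include <stdio.h>

int
main()
{
	// Request 64-byte alignment for this one buffer instead of forcing
	// a global default alignment on every allocation.
	void* buffer = memalign(64, 4096);
	printf("buffer at %p\n", buffer);
	free(buffer);
	return 0;
}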
* based on current glibc sysdeps/nptl/bits/libc-lock.h file.
* include missing headers which were previously included by libc-lock.h.
* This fixes #11182.
* drop my fdt tests
* we have to call fdt parsing code *after* cpu_init (why?)
* pass fdt pointer to all FDT support calls to avoid confusion
once we get into the kernel land
* look for PL011 compatible uart and use it
* Add some safety checks to the serial putc code to avoid NULL
pointer dereferences
* fdt_node_check_compatible returns 0 on success, not 1
* fdt_get_device_reg needs to add the SOC base to the result
* fdt_get_device_reg might need to add the second range cell
instead of reg?
The comparison to decide whether or not to reuse the name buffer when
renaming a rootfs entry was reversed. For renames where the new name
was longer than the old one this resulted in writing beyond the name
buffer and corrupting random kernel memory.
A likely candidate for this to be triggered was when an audio CD was
renamed due to a CDDB lookup, as the placeholder "Audio CD" is quite
short and the actual CD name is usually longer.
Fixes: #10259. Possibly fixes the related #9528 and #9858.
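A sketch of the intended check (names are illustrative, not the actual
rootfs code): the existing name buffer may only be reused when the new
name fits into it.

#include <stdlib.h>
#include <string.h>

// Returns the buffer holding the new name, reusing oldName only when
// the new name is not longer than the old one.
static char*
set_entry_name(char* oldName, const char* newName)
{
	size_t newLength = strlen(newName);
	if (oldName != NULL && newLength <= strlen(oldName)) {
		// reuse is safe: the new name fits into the existing buffer
		memcpy(oldName, newName, newLength + 1);
		return oldName;
	}

	// the new name is longer (the case that used to corrupt memory):
	// a bigger buffer is needed
	char* buffer = (char*)malloc(newLength + 1);
	if (buffer == NULL)
		return NULL;
	memcpy(buffer, newName, newLength + 1);
	free(oldName);
	return buffer;
}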
* Move more code into fdt_support
* We now can query FDT registers based on name or alias
* Return addr_t where it makes sense
* Copyright change ok'ed by mmu_man
* This isn't the best long-term place for this code;
it will likely move to some generic FDT support code.
* We pass a path like "/soc/gpio" and get back the
base physical register address in memory minus
the range offset.
* The existing code set the first available pa and va to
the end of the page directory.
* The arm mmu code was attempting to identity map (va==pa)
the memory, but also wanted memory to be in kernel space.
This allocation method isn't possible on all boards
(including the pi)
* We're adjusting the dynamic RAM to KERNEL_LOAD_BASE
plus the max size of the kernel (which is what most
other platforms do).
* The Raspberry Pi 2 uses a new SoC which differs slightly
from the Raspberry Pi 1.
* Someday these two board targets could go away when we get
FDT support.
* While there was some compatibility between
BCM2708 and BCM2805, keeping it makes the BCM2806 changes
more confusing, and we don't have any valuable BCM2708
targets.
I misread the condition and broke this in 0687a01. Thanks to Axel for
reviewing!
* Refactor the code again to move all the error checking to the top of
the function, to make it easier to read.
The API allows creating driver settings which are not added to the
global list; however, those were left partially uninitialized, and there
was no way to cleanly delete them.
Tag such unattached settings with a ref_count of -1, and have
delete_driver_settings check for this and handle the case correctly.
Note: #10494 comment 2 says the settings for packagefs shouldn't be
added to the kernel driver settings list, which is why I went with this
solution. An alternative would be always using the list and the
reference counting, but I don't know what the consequences are.
Fixes #10494.
* This is not allowed by strdup POSIX specs and GCC may use its builtin
strdup which doesn't check for it.
* also refactor parse_driver_settings_string to create the
settings_handle using settings_new, to reduce code duplication.
Sorry, I can't test all cases when building from Haiku.
Including <new> after the fs shell wrapper makes the compiler fail
because new needs a size_t argument (not an fssh_size_t). But including
it before also fails because it includes C++ typedefs without the fssh
wrapper, leading to conflicts.
Undefining size_t just for the include of <new> isn't very clean, but
seems to work. new gets a size_t argument as it should and the other
typedefs aren't conflicting.
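What the workaround boils down to (a sketch only; the fssh wrapper remaps
size_t via the preprocessor):

// Lift the size_t remapping just around the <new> include, then restore
// it for the code that follows (sketch, not the exact wrapper code).
#undef size_t
#include <new>
#define size_t fssh_size_t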
* Add an fs-shell compatible version of BOpenHashTable in the fs_shell
to keep it working. The header is renamed to KOpenHashTable to avoid a
conflict with the OpenHashTable.h available in private/shared which is
not API compatible.
- When normalizing paths of the preloaded modules to their final mounted
path, remove them from the hash table before updating their path. Otherwise,
the remove would fail due to the hash no longer matching, which in turn
would cause the code in question to introduce an infinite loop in the
hash table's internal linked list due to manually rewriting the next link.
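The general pattern, with a std::unordered_map standing in for the
intrusive hash table (illustrative, not the actual module code): the item
has to leave the table while its key is still the one it was inserted
under.

#include <string>
#include <unordered_map>

struct Module {
	std::string path;
};

typedef std::unordered_map<std::string, Module*> ModuleTable;

static void
normalize_module_path(ModuleTable& table, Module* module,
	const std::string& mountedPath)
{
	// 1. Remove the entry while the key still equals module->path.
	table.erase(module->path);

	// 2. Only now rewrite the path (the hash key).
	module->path = mountedPath;

	// 3. Reinsert under the new key.
	table.emplace(module->path, module);
}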
* offsetof is not allowed on non-POD types so we need to use
offset_of_member (gcc2 accepts offsetof, and C++11 relaxed the
constraints on where it is allowed so it should work there too)
* we have offset_of_member as a workaround until we switch to C++11;
move it from khash (which is soon to be removed) to list.h, which is the
other place where it is used (for this one single call in our whole
codebase)
Also fix a typo in vfs.cpp.
As a result of the refactoring for OpenHashTable, the iterator semantics
have changed a bit, such that the end of the table is no longer signalled
by the iterator returning NULL. This wasn't taken into account during
refactoring, which would lead to various places returning the last item
in the list in the case where no matching item was found, causing e.g.
drivers not to be loaded properly. This fixes the boot hang regressions
introduced in hrev48640.
Could lead to wrongly setting the TYPE_MINUTE flag for an invalid (>59)
number of minutes. Harmless, as that flag is never used.
For completeness, also set the flag for seconds (also never used).
Fixes #11552.
gcc2 was relying on the c99 functions being there, but they are not in
the std namespace.
* Disable the C99 functions and macros in C++ mode
* Redefine them as inline functions in cmath in the std namespace.
Fixes #7396.
I had a KDL when trying to read an audio CD which apparently uses this
as a copy protection scheme.
I don't know if this is the right place to do this; the KDL would happen
further down when the intel partitioning system or bfs would try to
read data from the disk at offset -2048.
While the partitioning system does publish partitions as block
devices and report their size in stat(), the old BeOS-style
drivers have no means of reporting it this way.
So we fall back to ioctl(B_GET_GEOMETRY) to find out the size.
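A sketch of that fallback (not the exact code from the commit):

#include <Drivers.h>
#include <sys/stat.h>
#include <unistd.h>

// Returns the device size, falling back to the geometry for old-style
// drivers that report a size of 0 via stat().
static off_t
get_device_size(int fd)
{
	struct stat st;
	if (fstat(fd, &st) == 0 && st.st_size > 0)
		return st.st_size;

	device_geometry geometry;
	if (ioctl(fd, B_GET_GEOMETRY, &geometry, sizeof(geometry)) == 0) {
		return (off_t)geometry.bytes_per_sector * geometry.sectors_per_track
			* geometry.head_count * geometry.cylinder_count;
	}
	return -1;
}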
This avoids having to copy the strings.
For now we disregard argv[] as it is not remapped before
being used in add_stage2_driver_settings() and is not used
by the linux entry point.
This makes the overo loader panic at the same place as
the beagle xm one now, even though it fails to display
anything with the default RAM size since we allocate
the framebuffer beyond 128MB...
* Always include last caller and lock value on both UP and MP path.
* Change lock value printing to hex format, as 0xdeadbeef is more
obvious than its decimal counterpart.
While the NetBSD entry point is handy, as we can use a single uImage
with all 3 blobs, it bypasses U-Boot's own patching of the FDT since
the blob is not visible to U-Boot, so we won't get the RAM size and
other things through it.
CreateThreadEvent::DoDPC() missed a reference release to balance the
reference acquired before queuing the DPC, resulting in the
CreateThreadEvent objects being leaked.
This also removes the destructor that tried to cancel the DPC. Since
the class is reference counted and only destroyed when the DPC has
run and released the last reference, this didn't make much sense.
The signal to the team/thread is only actually sent in a deferred
procedure. To ensure that the team/thread stays valid between the DPC
being queued and it actually running, we need to acquire a reference.
Fixes#11390, where the DPC was run after the team was already
destroyed.
This introduces InterruptController and HardwareTimer classes to
handle the SoC-specific implementations of timers and interrupt
controllers for the ARM platform.
These could be improved and moved to a more 'generic' level once
we're confident they are 'good enough'.
NOTE: The OMAP timer implementation is fully untested and probably
completely non-functional....
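Roughly the shape of the abstraction (illustrative only; the actual
interfaces may differ), with each supported SoC plugging in its own
subclasses:

#include <stdint.h>

typedef int64_t bigtime_t;	// stand-in for Haiku's bigtime_t

class InterruptController {
public:
	virtual				~InterruptController() {}

	virtual void		EnableInterrupt(int irq) = 0;
	virtual void		DisableInterrupt(int irq) = 0;
	virtual void		HandleInterrupt() = 0;
};

class HardwareTimer {
public:
	virtual				~HardwareTimer() {}

	virtual void		SetTimeout(bigtime_t timeout) = 0;
	virtual void		Clear() = 0;
	virtual bigtime_t	Time() = 0;
};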
If we find an FDT (either from uImage or otherwise) we make sure
we map it after mmu_init() and use kernel_args to pass it to the
kernel (so it is available at all times there).
* On UEFI, pages are allocated top-down; previously,
VM would fail to allocate early pages due to
running into pages allocated at the top and
assuming it had run out of pages to map.
Signed-off-by: Jessica Hamilton <jessica.l.hamilton@gmail.com>
With packagefs potentially opening quite a few packages the default of
256 slots is a bit tight. It's 4096 now, which should be safe for a
while, but we might want to consider resizing the table dynamically and
probably even switching to another algorithm for allocating the slots.
Should fix #11328.
* VMArea::AddWaiterIfWired(): Replace the ignoreRange argument by a
flags argument and introduce (currently only) flag
IGNORE_WRITE_WIRED_RANGES. If specified, ranges wired for writing
are ignored. Ignoring just a single specified range doesn't cut it
in vm_soft_fault(), and there aren't any other users of that feature.
* vm_soft_fault(): When having to unmap a page of a lower cache, this
page cannot be wired for writing. So we can safely ignore all
write-wired ranges, instead of just our own. We even have to do that
in case there's another thread that concurrently tries to write-wire
the same page, since otherwise we'd deadlock waiting for each other.
As Axel pointed out, B_BAD_DATA is not the correct code here. B_BUSY
could be used, but I wanted a code different from the existing one for
"partition already being initialized".
When we encounter a wired page that we'd have to unmap to map our newly
allocated one, we need to get rid of the latter before unlocking
everything and waiting for the wired page. Otherwise we'd leave things
in an inconsistent state (a page from an upper cache shadowing a mapped
page from a lower cache).
... in case we'd need to unmap a page that is wired.
Fixes the immediate issue of #10977. There's a problem remaining (as
discussed in comment 1): If two threads want to wire the same page at
the same time (which led to the assertion being triggered), they will
now deadlock, waiting for each other to remove the pre-registered
VMAreaWiredRange.
The thread that is being [un]scheduled already has its time_lock locked
in {stop|continue}_cpu_timers(). When updating the TeamTimeUserTimer,
the team is asked for its cpu time. Team::CPUTime() then iterates the
threads of the team and locks the time_lock of the thread again.
This workaround passes a possibly locked thread through the relevant
functions so Team::CPUTime() can decide whether a thread it iterates
needs to be locked.
This works around #11032 and its duplicates #11314 and #11344.
when uninitializing a partition or a disk (removing the partition
table), check that all partitions from that table are unmounted, as they
are about to become invalid.
Fixes #8827.
The "2nd" assert that we always ran into was due to bootloader mappings
still being active after VM init. Turns out we missed a call in the
architecture specific code for cleaning this up.
Many thanks to Ingo for spending the time to figure this out!
When a file descriptor is closed between being selected and adding the
select info to its IO context, the select info needs to be cleaned up.
This is done by deselect_select_infos(), which unconditionally also puts
the select_sync associated with the infos. In this special case we do
not yet hold a reference to the select_sync however, so avoid putting
the corresponding sync object.
Fixes #11098, #10763 and #10230.
QEMU was crashing because, when setting the DSS divider, we were
_clearing_ the TV divider, and QEMU did not check for a division by
zero.
This "fixes" the QEMU crash and gets us a working framebuffer on Beagle ;)
* Added VFS helper function check_access_permissions() that combines
several partially correct versions into the one true version (tm); see
the sketch after this list.
* All but BFS (since recently) missed the S_IXOTH for root on directories,
and all but packagefs missed proper group handling.
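A sketch of the combined logic (illustrative; the real helper's handling of
supplementary groups and error codes may differ):

#include <errno.h>
#include <sys/stat.h>
#include <unistd.h>

// Owner, then group, then "other" bits; root always gets read/write and
// gets execute if any x bit is set or the node is a directory.
static int
check_access_permissions_sketch(int accessMode, mode_t mode,
	gid_t nodeGroupID, uid_t nodeUserID)
{
	int permissions;
	uid_t uid = geteuid();

	if (uid == 0) {
		permissions = R_OK | W_OK;
		if ((mode & (S_IXUSR | S_IXGRP | S_IXOTH)) != 0 || S_ISDIR(mode))
			permissions |= X_OK;
	} else if (uid == nodeUserID) {
		permissions = (mode & S_IRWXU) >> 6;
	} else if (getegid() == nodeGroupID) {
		// supplementary groups are omitted in this sketch
		permissions = (mode & S_IRWXG) >> 3;
	} else {
		permissions = mode & S_IRWXO;
	}

	return (accessMode & ~permissions) == 0 ? 0 : EACCES;
}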