This helps when debugging, since when a driver/module causes a crash
while registering with the device manager, you can actually look at
the device manager state ;-)
The previously used method for programming the timer did not take
into account that our timespec is 64 bit while the register we poke
it into is 32 bit. Since the PXA (the SoC in the Verdex target) has a limited
scale of resolution (us, ms, second), we dynamically determine the one
that we can most closely match, and set that.
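A minimal sketch of that selection, with invented names (the actual PXA
timer code differs): pick the finest unit whose scaled value still fits
into the 32-bit match register.

    #include <stdint.h>

    enum timer_scale { SCALE_MICROSECONDS, SCALE_MILLISECONDS, SCALE_SECONDS };

    static timer_scale
    choose_timer_scale(int64_t timeoutMicros, uint32_t& matchValue)
    {
        if (timeoutMicros <= (int64_t)UINT32_MAX) {
            matchValue = (uint32_t)timeoutMicros;
            return SCALE_MICROSECONDS;          // exact resolution fits
        }
        if (timeoutMicros / 1000 <= (int64_t)UINT32_MAX) {
            matchValue = (uint32_t)(timeoutMicros / 1000);
            return SCALE_MILLISECONDS;          // closest coarser unit
        }
        matchValue = (uint32_t)(timeoutMicros / 1000000);
        return SCALE_SECONDS;
    }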
For snooze (for example) to work, however, we also need system_time to work.
The current implementation uses a system timer at microsecond
resolution to keep track of time.
Although the code is far from perfect, committing it now before
it gets lost, since I'm working on the infrastructure code
to properly factor the SoC specific code out of the core
ARM architecture code (so the kernel can support more than
our poor old Verdex QEMU target ;))
The "blobs" in a U-Boot uimage are aligned at 4 bytes, which we
did not take into account. Found this when adding a 3rd blob
containing the Flattened Device Tree for ARM.
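For illustration, the 4-byte alignment amounts to rounding each blob
offset up like this (helper name invented):

    #include <stdint.h>

    static inline uint32_t
    align_to_uimage_blob(uint32_t offset)
    {
        // round up to the next multiple of 4, as U-Boot does between blobs
        return (offset + 3) & ~(uint32_t)3;
    }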
Turns out dd on MacOS does not like '1M' as size descriptor, but
wants '1m'. To prevent us breaking Linux builds (as it does not
accept 1m), just use the actual number of bytes explicitly instead.
Fix the autoconf test case: an attempt to create a file in place of an
already existing symlink. On the error exit, put_vnode() was called
explicitly before returning the error. A second, implicit call to
put_vnode() was issued when the VNodePutter instance referencing the same
vnode was destroyed. At that point the vnode's reference count was already
0, so the corresponding panic was triggered. Great thanks to Ingo for
pointing it out!
Fixes #9140.
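A simplified sketch of the over-release pattern (this is not the actual
kernel class; put_vnode() is only a stand-in declaration here): releasing
the vnode explicitly and then letting the putter's destructor run drops
the same reference twice unless the putter is detached first.

    #include <stddef.h>

    struct vnode;
    void put_vnode(struct vnode* vnode);    // stand-in declaration

    class VNodePutter {
    public:
        explicit VNodePutter(struct vnode* vnode) : fVNode(vnode) {}
        ~VNodePutter() { if (fVNode != NULL) put_vnode(fVNode); }
        void Detach() { fVNode = NULL; }    // give up ownership explicitly
    private:
        struct vnode* fVNode;
    };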
* adding zlib to the kernel unfortunately introduces a cyclic dependency
with respect to the zlib, haiku and haiku_devel packages (AFAICS)
* circumvent this by building kernel_zlib as a static library again,
this time with PIC, such that it can be used by kernel add-ons
This patch fixes one of the compatibility issues mentioned in #3255. It
allows applications to call bind() or connect() passing a sockaddr_un
structure with a pathname that is not null-terminated.
Some systems do not require the pathname in sockaddr_un::sun_path to be
null-terminated; instead, the end of the string is determined by the size
of the structure passed as an argument to bind() or connect().
The standard is a bit vague in this matter but suggests that the path
should be null-terminated and that bind() and connect() should be given
sizeof(sockaddr_un) as the structure size.
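An illustrative caller (not from the patch) that relies on this behavior;
the address length, not a terminator, marks the end of the path:

    #include <stddef.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int
    bind_unix_socket(int fd, const char* path)
    {
        sockaddr_un addr;
        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;

        size_t pathLength = strlen(path);
        if (pathLength > sizeof(addr.sun_path))
            return -1;
        memcpy(addr.sun_path, path, pathLength);
            // intentionally no terminating '\0'

        socklen_t length = offsetof(sockaddr_un, sun_path) + pathLength;
        return bind(fd, (sockaddr*)&addr, length);
    }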
* Both filesystems used to link to a static kernel-zlib, which
was being built with -fno-pic. This doesn't work on x86_64 as the
filesystem add-ons are meant to be relocatable, which requires their
code to be compiled as position independent.
Solve that by moving zlib into the kernel, so any add-on can just use
it from there (packagefs is mandatory, so we can't really do without
zlib anyway).
* the Virtio RNG PCI device has class 0, so it can't be found using the usual
paths. Add 0 to _AlwaysRegisterDynamic() and "busses/virtio" in _GetNextDriverPath()
for non-generic drivers to help find virtio_pci.
* The RNG Virtio device is generic and needs "busses/random" to find virtio_rng.
* Mostly useful for virtualization at the moment. Works in QEmu.
* Can be enabled by safemode settings/menu.
* Please note that x2APIC normally requires use of the VT-d interrupt remapping feature
on real hardware, which we don't support yet.
* fix uninitialized variables in __printf_fphex() in case of architectures
without support for long double - this triggered unreliable results
or crashes when using %La or %la on x86
* activate long double implementation in use for x86_64 for x86, too,
as they share the long double format
(cherry picked from commit d1716b277c)
* fix uninitialized variables in __printf_fphex() in case of architectures
without support for long double - this triggered unreliable results
or crashes when using %La or %la on x86
* activate long double implementation in use for x86_64 for x86, too,
as they share the long double format
* For the comparison cast the character parameter to char as required
by the spec.
* Fix broken handling of strrchr(..., 0). It is supposed to return a
pointer to the end of the string. It did return a pointer to the
start.
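A sketch of the corrected behavior (not the libroot source): the search
character is compared as char, and searching for '\0' yields the
terminating null byte.

    #include <stddef.h>

    char*
    my_strrchr(const char* string, int c)
    {
        const char* found = NULL;
        char ch = (char)c;

        for (;; string++) {
            if (*string == ch)
                found = string;     // remember the last occurrence
            if (*string == '\0')
                break;              // also the match when ch == '\0'
        }
        return (char*)found;
    }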
* All packaging architecture dependent variables do now have a
respective suffix and are set up for each configured packaging
architecture, save for the kernel and boot loader variables, which
are still only set up for the primary architecture.
For convenience TARGET_PACKAGING_ARCH, TARGET_ARCH, TARGET_LIBSUPC++,
and TARGET_LIBSTDC++ are set to the respective values for the primary
packaging architecture by default.
* Introduce a set of MultiArch* rules to help with building targets for
multiple packaging architectures. Generally the respective targets are
(additionally) gristed with the packaging architecture. For libraries
the additional grist is usually omitted for the primary architecture
(e.g. libroot.so and <x86>libroot.so for x86_gcc2/x86 hybrid), so that
Jamfiles for targets built only for the primary architecture don't
need to be changed.
* Add multi-arch build support for all targets needed for the stage 1
cross devel package as well as for libbe (untested).
devfs_io() can't fall back to calling vfs_synchronous_io(), if the
device driver doesn't support handling requests asynchronously. The
presence of the io() hook leads the VFS (do_iterative_fd_io()) to
believe that asynchronous handling is supported and set a
finished-callback on the request which calls the io() hook to start the
next chunk. Thus, instead of iterating through the request in a loop
the iteration happens recursively. For sufficiently fragmented requests
the stack may overflow (ticket #9900).
* Introduce a new vnode operation supports_operation(). It can be called
by the VFS to determine whether a present hook is actually currently
supported for a given vnode.
* devfs: implement the new hook and remove the fallback handling in
devfs_io().
* vfs_request_io.cpp: use the new hook to determine whether the io()
hook is really supported.
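A hedged, self-contained sketch of the intended check (the hook signature
and the operation constant here are simplified assumptions, not the actual
VFS interface):

    #include <stddef.h>

    enum vnode_operation { VNODE_OP_IO };

    struct vnode_ops {
        int  (*io)(void* vnode, void* request);
        bool (*supports_operation)(void* vnode, vnode_operation operation);
    };

    static bool
    can_handle_io_asynchronously(const vnode_ops* ops, void* vnode)
    {
        if (ops->io == NULL)
            return false;
        // Only trust the io() hook if the FS confirms it for this vnode.
        if (ops->supports_operation != NULL
            && !ops->supports_operation(vnode, VNODE_OP_IO))
            return false;
        return true;
    }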
Although syscalls are done through SYSCALL and therefore don't actually
have an interrupt number, set it to 99 (the syscall vector on 32-bit)
in the iframe so that a syscall frame can be identified. Also added
vector/error_code to x86_64_debug_cpu_state for Debugger to use, not
sure why I didn't put them there in the first place.
* Add a VMArea* version of AddArea().
* AddAreaCacheAndLock(): Use the new AddArea() version. This not only
saves the ID hash table lookup, but also fixes a race condition with
delete_area(). delete_area() removes the area from the hash before
removing it from its cache, so iterating through the cache's areas
can turn up an area that no longer is in the hash. In that case we
would fail immediately. The new AddArea() won't fail in this
situation, though.
Fixes #9686: vm_copy_area() could fail for the "commpage" area. That's
an area all teams share, so any team terminating while another one was
fork()ing could trigger it.
When the meta data area couldn't be allocated in any of the supported
(reattachable) places, just use a static allocation. The tracing feature
wouldn't be available at all in such a case.
In debug_cleanup(), if the debug syslog buffer is disabled (the default when
KDEBUG_LEVEL is 0), then a new buffer is allocated with kernel_args_malloc().
This is done after kernel_args addresses have been converted to 64-bit, so
the address the kernel gets will be 32-bit, resulting in the page fault seen
in #9842. Fixed by moving the call to debug_cleanup() to before
convert_kernel_args().
... in case of a team creation error. Once assigned to Team::io_context,
the Team object takes responsibility for the I/O context object and
releases the reference on destruction. load_image_internal() and
fork_team() were thus releasing one reference too many.
Fixes #9851.
* In case the locale backend could not be loaded, these functions (and
their reentrant counterparts) just returned an error. So we reactivate
parts of the BSD-/Olson-implementation in localtime_fading_out.c in
order to use them as fallback.
* Cleanup localtime_fading_out.c (remove a lot of unused cruft).
* all those functions need to return the given wc unchanged in case of
error, not 0
* towctrans() didn't actually look at the requested transformation, but
always acted as if _ISlower was given
* when an executable with a different ABI is being loaded and some
needed image isn't found, don't retry using the standard ABI folders
of the system - those are now considered incompatible
* ReaderImplBase:
- Add virtual CreateCachedHeapReader() which can create a cached
reader based on the given heap reader.
- Rename HeapReader() to RawHeapReader() and add HeapReader() for the
cached heap reader.
- Add DetachHeapReader() to allow clients to remove the heap
reader(s) after deleting the ReaderImplBase object.
* packagefs:
- Add CachedDataReader class, which wraps a given
BAbstractBufferedDataReader and provides caching for it using a
VMCache. The implementation is based on the IOCache implementation.
- Use CachedDataReader to wrap the heap reader. For file data that
means they are cached twice -- in the heap reader cache and in the
file cache -- but due to the heap reader using a VMCache as well,
the pages will be recycled automatically anyway. For attribute data
the cache should be very helpful, since they weren't cached at all
before.
* Add flags parameter to Init() of BPackageReader and friends.
* Introduce flag B_HPKG_READER_DONT_PRINT_VERSION_MISMATCH_MESSAGE and
don't print a version mismatch error when given.
* package extract/list: Use the new flag.
* Add new package haiku_loader.hpkg and move haiku_loader there. The
package is built without compression, so that the stage 1 boot loader
has a chance of loading it.
* Adjust the stage 1 boot loader to load the haiku_loader package and
relocate the boot loader code accordingly.
Instead of handling compression for individual file/attribute data we
do now compress the whole heap where they are stored. This
significantly improves compression ratios. We still divide the
uncompressed data into 64 KiB chunks and use a chunk offset array for
the compressed chunks to allow for quick random access without too much
overhead. The tradeoff is a limited possible compression ratio -- i.e.
we won't be as good as tar.gz (though surprisingly with my test
archives we did better than zip).
The other package file sections (package attributes and TOC) are no
longer compressed individually. Their uncompressed data are simply
pushed onto the heap where the usual compression strategy applies. To
simplify things the repository format has been changed in the same
manner although it doesn't otherwise use the heap, since it only stores
meta data.
Due to the data compression having been exposed in public and private
API, this change touches a lot of code using the package kit, including
packagefs and the boot loader packagefs support. The latter two haven't
been tested yet. Moreover packagefs needs a new kind of cache so we
avoid re-reading the same heap chunk for two different data items it
contains.
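For illustration (names invented), random access into the compressed heap
boils down to mapping an uncompressed offset to a chunk index plus an
offset within that chunk, and decompressing only that chunk:

    #include <stdint.h>

    static const uint64_t kChunkSize = 64 * 1024;

    struct HeapChunkLookup {
        uint64_t chunkIndex;
        uint64_t offsetInChunk;
    };

    static HeapChunkLookup
    locate_chunk(uint64_t uncompressedOffset)
    {
        HeapChunkLookup lookup;
        lookup.chunkIndex = uncompressedOffset / kChunkSize;
        lookup.offsetInChunk = uncompressedOffset % kChunkSize;
        // The chunk offset array then gives the compressed start of
        // chunkIndex; the next entry bounds its compressed size.
        return lookup;
    }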
* When looking for a place for a new area, the size of the area to be
inserted was used instead of the next area's size to check whether we
were already past the upper bound.
* There was an attempt to insert the area even if we were past the
upper bound.
* If at least one image is either B_HAIKU_ABI_GCC_2_ANCIENT or
B_HAIKU_ABI_GCC_2_BEOS, almost all areas are marked as executable.
* B_EXECUTE_AREA and B_STACK_AREA are made public. The former is enforced since
the introduction of DEP and apps need it to correctly set area protection.
The latter is currently needed only to recognize stack areas and fix their
protection in compatibility mode, but may also be useful if an app wants
to use sigaltstack from POSIX API.
* Add optional packages Zlib and Zlib-devel.
* Simplify the build feature section for zlib and also extract the
source package.
* Replace all remaining references to the zlib instance in the tree and
remove it.
Casting the difference of the two off_t values to size_t may truncate
the result. Doing so before the comparison will therefore break it.
Instead cast the size to off_t to get around the signed versus unsigned
integer expression comparison and then cast the result of the comparison
to size_t again. Should fix #9714.
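A hedged illustration of the pattern (names invented); doing the
comparison in off_t and casting only the result avoids the truncation:

    #include <stddef.h>
    #include <sys/types.h>

    static size_t
    bytes_to_copy(off_t offset, off_t end, size_t bufferSize)
    {
        // Wrong: (size_t)(end - offset) may truncate the 64-bit difference
        // before the comparison. Right: compare in off_t first.
        // (Assumes offset <= end.)
        if ((off_t)bufferSize < end - offset)
            return bufferSize;
        return (size_t)(end - offset);
    }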
This reverts commit f7176b0ee5. Citing Ingo:
"off_t is the correct type to use for addressing pages in a cache/file,
which page_num_t should only be used for physical pages." I'll see how to
fix the GCC 4.7 warnings differently :)
* error message: error: cannot bind packed field
'args->kernel_args::platform_args.platform_kernel_args::apm' to 'apm_info&'
* the reason would be that the reference doesn't have alignment information anymore.
* changed the reference to const for read access, and used the long form for setting a field.
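A minimal reproduction of the pattern with simplified structs (the real
kernel_args layout is more involved):

    struct apm_info {
        unsigned short version;
    };

    struct platform_kernel_args {
        apm_info apm;
    } __attribute__((packed));

    void
    example(platform_kernel_args* args)
    {
        // apm_info& apm = args->apm;
        //     // error: cannot bind packed field to 'apm_info&'
        const apm_info& apm = args->apm;    // fine for read access
        (void)apm.version;
        args->apm.version = 0x0102;         // write via the long form
    }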
- Instead of implicitly registering and unregistering a service
instance on construction/destruction, DefaultNotificationService
now exports explicit Register()/Unregister() calls, which subclasses
are expected to call when they're ready.
- Adjust all implementing subclasses. Resolves an issue with deadlocks
when booting a DEBUG=1 build.
* Add "bool kernel" parameter to vfs_entry_ref_to_path(), so it can be
specified for which I/O context the entry ref shall be translated.
* _user_entry_ref_to_path(): Use the calling team's I/O context instead
of the kernel's. Fixes the bug that in a chroot the syscall would
return a path for outside the chroot.
* make runtime_loader a dynamically linked object
* add kernel support for loading user images that need to be relocated
* load runtime_loader at random address
Currently there are two generators. The fast one is the same one the scheduler
is using. The standard one uses the same algorithm as libroot's rand(). Should
there be a need for a more cryptographically secure PRNG, MD4 or MD5 might be
good candidates.
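For reference, a typical rand()-style linear congruential generator looks
like this (whether libroot uses exactly these constants is an assumption,
not verified):

    #include <stdint.h>

    static uint32_t sSeed = 1;

    static int
    lcg_rand()
    {
        // classic LCG step; the high bits have better statistical quality
        sSeed = sSeed * 1103515245 + 12345;
        return (int)((sSeed >> 16) & 0x7fff);
    }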
This address specification is actually not needed since PIC images can be
located anywhere. Only their size is restricted, but that is the compiler and
linker concern. Thanks to Alex Smith for pointing that out.
B_ALREADY_WIRED, which was erroneously passed for the area protection
parameter to map_backing_store(), has the value 7 which implies user
readable and writable. Hence the address ranges around 0xdeadbeef and
0xcccccccc could actually be read and written from anywhere.
On some 64 bit architectures program and library images have to be mapped in
the lower 2 GB of the address space (due to instruction pointer relative
addressing). The address specification B_RANDOMIZED_IMAGE_ADDRESS ensures that
the created area satisfies that requirement.
Placing commpage and team user data somewhere at the top of the user accessible
virtual address space prevents these areas from conflicting with ELF images
that need to be mapped at an exact address (in most cases: runtime_loader).
This patch introduces randomization of the commpage position. From now on the
commpage table contains offsets from the beginning of the commpage to the
particular commpage entries. Similarly, addresses of symbols in the ELF memory
image "commpage" are just offsets from the beginning of the commpage.
This patch also updates KDL so that commpage entries are recognized and shown
correctly in stack trace. An update of Debugger is yet to be done.
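Illustrative only (names invented): with a randomized base, resolving a
commpage entry becomes base plus table offset rather than reading an
absolute address.

    #include <stdint.h>

    static inline void*
    commpage_entry_address(uint8_t* commpageBase, const uint32_t* table,
        int entryIndex)
    {
        // table[entryIndex] is an offset from the commpage's beginning
        return commpageBase + table[entryIndex];
    }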
Set the execute disable bit for any page that belongs to an area with neither
B_EXECUTE_AREA nor B_KERNEL_EXECUTE_AREA set.
In order to take advantage of the NX bit in 32 bit protected mode, PAE must be
enabled. Thus, from now on it is also enabled when the CPU supports the NX bit.
vm_page_fault() takes an additional argument which indicates whether the page fault
was caused by an illegal instruction fetch.
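A hedged sketch of the mapping rule (the protection flag values below are
placeholders for B_EXECUTE_AREA and B_KERNEL_EXECUTE_AREA; only the
top-bit position of the execute-disable flag in PAE entries is standard):

    #include <stdint.h>

    static const uint32_t kUserExecute   = 0x04;    // stand-in flag value
    static const uint32_t kKernelExecute = 0x40;    // stand-in flag value
    static const uint64_t kPteNoExecute  = 1ULL << 63;

    static uint64_t
    apply_no_execute(uint64_t pageTableEntry, uint32_t areaProtection)
    {
        // no execute permission on the area: mark the page non-executable
        if ((areaProtection & (kUserExecute | kKernelExecute)) == 0)
            pageTableEntry |= kPteNoExecute;
        return pageTableEntry;
    }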
x86_userspace_thread_exit() is a stub originally placed at the bottom of
each thread user stack that ensures any thread invokes exit_thread() upon
returning from its main higher level function.
Putting anything that is expected to be executed on a stack causes problems
when implementing data execution prevention. The code of x86_userspace_thread_exit()
is now moved to the commpage, which seems to be a much more appropriate place for it.
When mmap() is invoked without specifying an address hint,
B_RANDOMIZED_ANY_ADDRESS is used.
Otherwise, unless the MAP_FIXED flag is set (which requires mmap() to return an
area positioned exactly at the given address), B_RANDOMIZED_BASE_ADDRESS is used.
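Summarized as a sketch (the enum below redeclares the constants locally,
so their values are placeholders, not Haiku's definitions):

    enum address_spec {
        B_EXACT_ADDRESS,                // placeholder values
        B_RANDOMIZED_ANY_ADDRESS,
        B_RANDOMIZED_BASE_ADDRESS
    };

    static address_spec
    choose_address_specification(bool mapFixed, bool hasAddressHint)
    {
        if (mapFixed)
            return B_EXACT_ADDRESS;             // must map exactly there
        if (!hasAddressHint)
            return B_RANDOMIZED_ANY_ADDRESS;    // no hint: place anywhere
        return B_RANDOMIZED_BASE_ADDRESS;       // hint given: randomize near it
    }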