we have to avoid using ebx unconditionally in asm constraints for
i386, because gcc 3 and 4, and possibly other simplistic compilers
(pcc?), implement PIC by making ebx a fixed-use register and disallowing
its use for anything else. rather than hard-coding knowledge of which
compilers work (at least gcc 5+ and clang), perform a configure test;
this should give us the good codegen on any new compilers we don't yet
know about.
swapping ebx and edx is kept for 1- and 2-arg syscalls because it
avoids having any spills/stack-frame at all in small functions. for
6-arg, if ebx is directly usable, the complex shuffling introduced in
commit c8798ef974d21c338a7d8d874a402978ffc6168e can be avoided, and
ebp can be loaded the same way ebx is in 5-arg syscalls for compilers
that don't support direct use of ebx.
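as a concrete illustration, a minimal sketch of the two 1-argument
paths (BROKEN_EBX_ASM stands in for whatever macro the configure test
ends up defining, and int $128 stands in for the real syscall
instruction sequence; not the exact musl code):

static inline long __syscall1_sketch(long n, long a1)
{
	unsigned long ret;
#ifndef BROKEN_EBX_ASM
	/* compiler is allowed to allocate ebx directly */
	__asm__ __volatile__ ("int $128"
		: "=a"(ret) : "a"(n), "b"(a1) : "memory");
#else
	/* keep ebx out of the constraints: pass the argument in edx and
	 * swap it with ebx around the syscall, so no spill is needed */
	__asm__ __volatile__ ("xchg %%ebx,%%edx ; int $128 ; xchg %%ebx,%%edx"
		: "=a"(ret) : "a"(n), "d"(a1) : "memory");
#endif
	return (long)ret;
}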
commit 22e5bbd0deadcbd767864bd714e890b70e1fe1df inlined the i386
syscall mechanism, but wrongly assumed memory operands to the 5- and
6-argument syscall asm would be esp-based. however, nothing in the
constraints prevented them from being ebx- or ebp-based, and in those
cases, ebx and ebp could be clobbered before use of the memory operand
was complete. in the 6-argument case, this prevented restoration of
the original register values before the end of the asm block, breaking
the asm contract since ebx and ebp are not marked as clobbered. (they
can't be, because lots of compilers don't accept these registers in
constraints or clobbers if PIC or frame pointer is enabled).
doing this right is complicated by the fact that, after a single push,
no operands which might be memory operands are usable. if they are
esp-based, the value of esp has changed, rendering them invalid.
introduce some new dances to load the registers. for the 5-arg case,
push the operand that may be a memory operand first, and after that,
it doesn't matter if the operand is invalid, since we'll just use the
newly pushed value. for the 6-arg case, we need to put both operands
in memory to begin with, as the old non-inline code prior to commit
22e5bbd0deadcbd767864bd714e890b70e1fe1df did, so that there's
only one potentially memory-based operand to the asm. this can then be
saved with a single push, and after that the values can be read off
into the registers they're needed in.
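for the 5-arg case, where ebx cannot appear in constraints, the result
is roughly the following sketch (illustrative, not the exact musl code;
the "g" constraint is what allows a1 to be a register, immediate, or
possibly ebx-/ebp-/esp-based memory operand, and int $128 stands in for
the real syscall instructions):

static inline long __syscall5_sketch(long n, long a1, long a2, long a3,
                                     long a4, long a5)
{
	unsigned long ret;
	/* push a1 while esp is still unchanged, so that an esp-based
	 * memory operand is still valid; then save ebx, load the pushed
	 * copy into ebx, and undo both pushes after the syscall */
	__asm__ __volatile__ (
		"pushl %2 ; push %%ebx ; mov 4(%%esp),%%ebx ; "
		"int $128 ; "
		"pop %%ebx ; add $4,%%esp"
		: "=a"(ret)
		: "a"(n), "g"(a1), "c"(a2), "d"(a3), "S"(a4), "D"(a5)
		: "memory");
	return (long)ret;
}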
there's some size overhead, but still a lot less execution overhead
than the old out-of-line code. doing it better depends on a modern
compiler that lets you use ebx and ebp in asm constraints without
restriction. the failure modes on compilers where this doesn't work
are inconsistent and dangerous (on at least some gcc versions 4.x and
earlier, wrong codegen!), so this is a delicate matter. it can be
addressed later if needed.
commit 788d5e24ca19c6291cebd8d1ad5b5ed6abf42665 exposed the breakage
at build time by removing support for 7-argument syscalls; however,
the external __syscall function previously provided for mips did not
pass a 7th argument from the stack, so the behavior was just silently
broken.
this has been wrong since the beginning of the microblaze port: the
syscall ABI for microblaze does not align 64-bit arguments on even
register boundaries. commit 788d5e24ca19c6291cebd8d1ad5b5ed6abf42665
exposed the problem by introducing references to a nonexistent
__syscall7. the ABI is not documented well anywhere, but I was able to
confirm against both strace source and glibc source that microblaze is
not using the alignment.
per the syscall(2) man page, posix_fadvise, ftruncate, pread, pwrite,
readahead, sync_file_range, and truncate were all affected and either
did not work at all, or only worked by chance, e.g. when the affected
argument slots were all zero.
analogous to commit efda534b212f713fe2b92a62b06e45f656b763ce for
powerpc. commit 587f5a53bc3a68d80b239ba515d583df690a96df moved the
definition of SO_PEERSEC to bits/socket.h for archs where the SO_*
macros differ.
C99 has ways to support fenv access, but compilers don't implement it
and assume nearest rounding mode and no fp status flag access. (gcc has
-frounding-math and then it does not assume nearest rounding mode, but
it still assumes the compiled code itself does not change the mode.
Even if the C99 mechanism were implemented, it would not be ideal: it requires
all code in the library to be compiled with FENV_ACCESS "on" to make it
usable in non-nearest rounding mode, but that limits optimizations more
than necessary.)
The math functions should give reasonable results in all rounding modes
(but the quality may be degraded in non-nearest rounding modes) and the
fp status flag settings should follow the spec, so fenv side-effects are
important and code transformations that break them should be prevented.
Unfortunately compilers don't give any help with this; the best we can
do is to add fp barriers to the code using volatile local variables
(they create a stack frame and undesirable memory accesses to it) or
inline asm (gcc specific, requires target specific fp reg constraints,
often creates unnecessary reg moves and multiple barriers are needed to
express that an operation has side-effects) or extern call (only useful
in tail-call position to avoid stack-frame creation and does not work
with lto).
We assume that in a math function if an operation depends on the input
and the output depends on it, then the operation will be evaluated at
runtime when the function is called, producing all the expected fenv
side-effects (this is not true in case of lto and in case the operation
is evaluated with excess precision that is not rounded away). So fp
barriers are needed (1) to prevent the move of an operation within a
function (in case it may be moved from an unevaluated code path into an
evaluated one or if it may be moved across a fenv access), (2) to force
the evaluation of an operation for its side-effect when it has no input
dependency (it may be constant folded), or (3) to force evaluation when
its output is unused. I believe that fp_barrier and fp_force_eval can
take care of these, and they should not be needed in hot code paths.
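For reference, the volatile-variable flavor of these helpers looks
roughly like this (a sketch of the pattern, not necessarily the exact
definitions; float and long double variants are analogous):

/* force x to be evaluated and its value to pass through a volatile
 * object, preventing constant folding and motion across the barrier */
static inline double fp_barrier(double x)
{
	volatile double y = x;
	return y;
}

/* force evaluation of an expression whose result is otherwise unused,
 * so its fenv side-effects (status flags) are actually produced */
static inline void fp_force_eval(double x)
{
	volatile double y;
	y = x;
}

For example, fp_force_eval(x*x) can be used to raise the appropriate
flags on a path whose result is not otherwise used.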
n32 and n64 ABIs add new argument registers vs o32, so that passing on
the stack is not necessary; it's not clear why the 5- and 6-argument
versions were special-cased to begin with. it seems to have been
pattern-copying from arch/mips (o32).
i've treated the new argument registers like the first 4 in terms of
clobber status (non-clobbered). hopefully this is correct.
the OABI passes these on the stack, using the convention that their
position on the stack is as if the first four arguments (in registers)
also had stack slots. originally this was deemed too awkward to do
inline, falling back to external __syscall, but it's not that bad and
now that external __syscall is being removed, it's necessary.
the inline syscall code is copied directly from powerpc64. the extent
of register clobber specifiers may be excessive on both; if that turns
out to be the case it can be fixed later.
it was never demonstrated to me that this workaround was needed, and it
seems likely that, if there ever was any clang version for which it was
needed, it's old enough to be unusably buggy in other ways. if it
turns out some compilers actually can't do the register allocation
right, we'll need to replace this with inline shuffling code, since
the external __syscall dependency is being removed.
this is the first part of a series of patches intended to make
__syscall fully self-contained in the object file produced using
syscall.h, which will make it possible for crt1 code to perform
syscalls.
the (confusingly named) i386 __vsyscall mechanism, which this commit
removes, was introduced before the presence of a valid thread pointer
was mandatory; back then the thread pointer was set up lazily only if
threads were used. the intent was to be able to perform syscalls using
the kernel's fast entry point in the VDSO, which can use the sysenter
(Intel) or syscall (AMD) instruction instead of int $128, but without
inlining an access to the __syscall global at the point of each
syscall, which would incur a significant size cost from PIC setup
everywhere. the mechanism also shuffled registers/calling convention
around to avoid spills of call-saved registers, and to avoid
allocating ebx or ebp via asm constraints, since there are plenty of
broken-but-supported compiler versions which are incapable of
allocating ebx with -fPIC or ebp with -fno-omit-frame-pointer.
the new mechanism preserves the properties of avoiding spills and
avoiding allocation of ebx/ebp in constraints, but does it inline,
using some fairly simple register shuffling, and uses a field of the
thread structure rather than global data for the vdso-provided syscall
code address.
for now, the external __syscall function is refactored not to use the
old __vsyscall, so that it can be kept; the intent is to remove it too.
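the new call sequence is roughly the following sketch (the field offset
16 and the helper name are illustrative assumptions, not a statement of
the actual layout; the point is that the vdso entry address lives in a
%gs-relative slot of the thread structure rather than in a global):

/* hypothetical offset of the vdso syscall entry pointer within the
 * i386 thread structure, reached through the %gs thread-pointer base */
#define SYSCALL_INSNS "call *%%gs:16"

static inline long __syscall0_sketch(long n)
{
	unsigned long ret;
	__asm__ __volatile__ (SYSCALL_INSNS
		: "=a"(ret) : "a"(n) : "memory");
	return (long)ret;
}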
HWCAP_SB - speculation barrier instruction available; added in linux
commit bd4fb6d270bc423a9a4098108784f7f9254c4e6d
HWCAP_PACA, HWCAP_PACG - pointer authentication instructions available
(address and generic); added in linux commit
7503197562567b57ec14feb3a9d5400ebc56812f
On s390x, POSIX_FADV_DONTNEED and POSIX_FADV_NOREUSE have different
values than on all other architectures that Linux supports.
Handle this difference by wrapping their definitions in
include/fcntl.h in #ifdef, so that arch/s390x/bits/fcntl.h can
override them.
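The shape of the change is roughly as follows (the s390x values shown
are my understanding and should be treated as illustrative):

/* include/fcntl.h (generic definitions): */
#define POSIX_FADV_NORMAL     0
#define POSIX_FADV_RANDOM     1
#define POSIX_FADV_SEQUENTIAL 2
#define POSIX_FADV_WILLNEED   3
#ifndef POSIX_FADV_DONTNEED
#define POSIX_FADV_DONTNEED   4
#define POSIX_FADV_NOREUSE    5
#endif

/* arch/s390x/bits/fcntl.h, included first, overrides the last two: */
#define POSIX_FADV_DONTNEED   6
#define POSIX_FADV_NOREUSE    7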
this will allow the compiler to cache and reuse the result, meaning we
no longer have to take care not to load it more than once for the sake
of archs where the load may be expensive.
depends on commit 1c84c99913bf1cd47b866ed31e665848a0da84a2 for
correctness, since otherwise the compiler could hoist loads during
stage 3 of dynamic linking before the initial thread-pointer setup.
unlike other asm where the baseline ISA is used, these functions are
hot paths and use ISA-level specializations.
call-clobbered vfp registers are saved before calling __tls_get_new,
since there is no guarantee it won't use them. while setjmp/longjmp
have to use hwcap to decide whether the fpu is in use, since
application code could be using vfp registers even if libc was
compiled as pure softfloat, __tls_get_new is part of libc and can be
assumed not to have access to vfp registers if tlsdesc.S does not.
thus it suffices just to check the predefined preprocessor macros. the
check for __ARM_PCS_VFP is redundant; !__SOFTFP__ must always be true
if the target ISA level includes fpu instructions/registers.
in our memory model, all atomics are supposed to be full barriers;
stores are not release-only. this is important because store is used
as an unlock operation in places where it needs to acquire the waiter
count to determine if a futex wake is needed. at least in the
malloc-internal locks, but possibly elsewhere, soft deadlocks from
missing futex wake (breakable by poking the threads to restart the
syscall, e.g. by attaching a tracer) were reported to occur.
once the malloc lock is replaced with Jens Gustedt's new lock
implementation (see commit 47d0bcd4762f223364e5b58d5a381aaa0cbd7c38),
malloc will not be affected by the issue, but it's not clear that
other uses won't be. reducing the strength of the ordering properties
required from a_store would require a thorough analysis of how it's
used.
to fix the problem, I'm removing the powerpc[64]-specific a_store
definition; now, the top-level atomic.h will implement a_store using
a_barrier on both sides of the store.
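concretely, the top-level fallback amounts to roughly this (sketch;
a_barrier is the arch's full memory barrier):

static inline void a_store(volatile int *p, int v)
{
	a_barrier();
	*p = v;
	a_barrier();
}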
it's not clear to me yet whether there might be issues with the other
atomics. it's possible that a_post_llsc needs to be replaced with a
full barrier to guarantee the formal semantics we want, but either way
I think the difference is unlikely to impact the way we use them.
these were overlooked in the declarations overhaul work because they
are not properly declared, and the current framework even allows their
declared types to vary by arch. at some point this should be cleaned
up, but I'm not sure what the right way would be.
this cleans up what had become widespread inline use of "GNU C" style
attributes directly in the source, and lowers the barrier to
increased use of hidden visibility, which will be useful to recovering
some of the efficiency lost when the protected visibility hack was
dropped in commit dc2f368e565c37728b0d620380b849c3a1ddd78f, especially
on archs where the PLT ABI is costly.
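the shorthand is of roughly this shape (illustrative; the exact macro
set and the header it lives in aside):

#define hidden __attribute__((__visibility__("hidden")))

/* usage in an internal declaration (function name hypothetical): */
hidden void __some_internal_func(void);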
if __cp_cancel was reached via __syscall_cp, r12 will necessarily
still contain a GOT pointer (for libc.so or for the static-linked main
program) valid for entering __cancel. however, in the case of async
cancellation, r12 may contain any scratch value; it's not necessarily
even a valid GOT pointer for the code that was interrupted.
unlike in commit 0ec49dab6794166d67fae4764ce7fdea42ea6103 where the
corresponding issue was fixed for powerpc64, there is fundamentally no
way for fdpic code to recompute its GOT pointer. so a new mechanism is
introduced for cancel_handler to write a GOT register value into the
interrupted context on archs where it is needed.
maintainer's note: while musl does not use the linux kernel headers,
it does provide these three sys/* headers which do nothing but include
the corresponding linux/* headers, since the sys/* versions are the
ones documented for application use (and they arguably provide
interfaces that are not linux-specific but common to other unices).
these headers should probably not be provided by libc (rather by a
separate package), but as long as they are, use the bits header
framework as an aid to out-of-tree ports of musl for non-linux systems
that want to implement them in some other way.
sys/ptrace.h is target specific; use bits/ptrace.h to add
target-specific macro definitions.
these macros are kept in the generic sys/ptrace.h even though some
targets don't support them:
PTRACE_GETREGS
PTRACE_SETREGS
PTRACE_GETFPREGS
PTRACE_SETFPREGS
PTRACE_GETFPXREGS
PTRACE_SETFPXREGS
so no macro definition got removed in this patch on any target. only
s390x has a numerically conflicting macro definition (PTRACE_SINGLEBLOCK).
the PT_ aliases follow glibc headers; otherwise the definitions come
from the linux uapi headers, except for ones that are skipped in glibc
and either have no real kernel support (s390x PTRACE_*_AREA), need
special type definitions (mips PTRACE_*_WATCH_*), or are only relevant
for linux 2.4 compatibility (PTRACE_OLDSETOPTIONS).
commit 587f5a53bc3a68d80b239ba515d583df690a96df moved the definition
of SO_PEERSEC to bits/socket.h for archs where the SO_* macros differ
from their standard values, but failed to add copies of the generic
definition for powerpc and powerpc64.
the mode member of struct ipc_perm is specified by POSIX to have type
mode_t, which is uniformly defined as unsigned int. however, Linux
defines it with type __kernel_mode_t, and defines __kernel_mode_t as
unsigned short on some archs. since there is a subsequent padding
field, treating it as a 32-bit unsigned int works on little endian
archs, but the order is backwards on big endian archs with the
erroneous definition.
since multiple archs are affected, remedy the situation with fixup
code in the affected functions (shmctl, semctl, and msgctl) rather
than repeating the same shims in syscall_arch.h for every affected
arch.
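a minimal sketch of the kind of fixup, using shmctl as the example (the
SYSCALL_IPC_BROKEN_MODE marker and the do_shmctl_syscall helper are
illustrative assumptions): on an affected big-endian arch the kernel
treats mode as 16 bits followed by 16 bits of padding, so a 32-bit view
of the field sees the value shifted up by 16.

#include <sys/shm.h>

/* hypothetical raw syscall wrapper */
int do_shmctl_syscall(int id, int cmd, struct shmid_ds *buf);

int shmctl_sketch(int id, int cmd, struct shmid_ds *buf)
{
	int r;
#ifdef SYSCALL_IPC_BROKEN_MODE
	struct shmid_ds tmp;
	if (cmd == IPC_SET) {
		tmp = *buf;
		tmp.shm_perm.mode <<= 16; /* move value into the kernel's 16-bit slot */
		buf = &tmp;
	}
#endif
	r = do_shmctl_syscall(id, cmd, buf);
#ifdef SYSCALL_IPC_BROKEN_MODE
	if (r >= 0 && cmd == IPC_STAT)
		buf->shm_perm.mode >>= 16; /* stat-type results come back shifted */
#endif
	return r;
}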
add pkey_mprotect, pkey_alloc, pkey_free syscall numbers,
new in linux commits 3350eb2ea127978319ced883523d828046af4045
and 9499ec1b5e82321829e1c1510bcc37edc20b6f38
three ABIs are supported: the default with 68881 80-bit fpu format and
results returned in floating point registers, softfloat-only with the
same format, and coldfire fpu with IEEE single/double only. only the
first is tested at all, and only under qemu, which has fpu emulation
bugs.
basic functionality smoke tests have been performed for the most
common arch-specific breakage via libc-test and qemu user-level
emulation. some sysvipc failures remain, but are shared with other big
endian archs and will be fixed separately.
In TLS variant I the TLS is above TP (or above a fixed offset from TP)
but on some targets there is a reserved gap above TP before TLS starts.
This matters for the local-exec tls access model when the offsets of
TLS variables from the TP are hard coded by the linker into the
executable, so the libc must compute these offsets the same way as the
linker. The tls offset of the main module has to be
alignup(GAP_ABOVE_TP, main_tls_align).
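In code the computation is just the following (sketch; GAP_ABOVE_TP is
the per-arch reserved gap and main_tls_align the alignment requirement
of the main module's TLS segment):

#include <stddef.h>

/* round x up to a multiple of a, where a is a power of two */
static size_t alignup(size_t x, size_t a)
{
	return (x + a - 1) & ~(a - 1);
}

/* tls offset of the main module, per the rule above */
size_t main_tls_offset(size_t gap_above_tp, size_t main_tls_align)
{
	return alignup(gap_above_tp, main_tls_align);
}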
If there is no TLS in the main module then the gap can be ignored
since musl does not use it and the tls access models of shared
libraries are not affected.
The previous setup only worked if (tls_align & -GAP_ABOVE_TP) == 0
(i.e. TLS did not require large alignment) because the gap was
treated as a fixed offset from TP. Now the TP points at the end
of the pthread struct (which is aligned) and there is a gap above
it (which may also need alignment).
The fix required changing TP_ADJ and __pthread_self on affected
targets (aarch64, arm and sh) and in the tlsdesc asm the offset to
access the dtv changed too.
in thumb mode, r7 is the ABI frame pointer register, and unless frame
pointer is disabled, gcc insists on treating it as a fixed register,
refusing to spill it to satisfy constraints. unfortunately, r7 is also
used in the syscall ABI for passing the syscall number.
up til now we just treated this as a requirement to disable frame
pointer when generating code as thumb, but it turns out gcc forcibly
enables frame pointer, and the fixed register constraint that goes
with it, for functions which contain VLAs. this produces an
unacceptable arch-specific constraint that (non-arm-specific) source
files making syscalls cannot use VLAs.
as a workaround, avoid r7 register constraints when producing thumb
code and instead save/restore r7 in a temp register as part of the asm
block. at some point we may want/need to support armv6-m/thumb1, so
the asm has been tweaked to be thumb1-compatible while also
near-optimal for thumb2: it allows the temp and/or syscall number to
be in high registers (necessary since r0-r5 may all be used for
syscall args) and in thumb2 mode allows the syscall number to be an
8-bit immediate.
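for a 1-argument syscall the resulting pattern is roughly this sketch
(not the exact musl macro; the temporary holding the saved r7 may be
allocated to a high register, and in thumb2 an immediate-capable
constraint can be used for the number instead of plain "r"):

static inline long __syscall1_thumb_sketch(long n, long a)
{
	register long r0 __asm__("r0") = a;
	long r7_save;
	__asm__ __volatile__ (
		/* save r7, load the syscall number, trap, restore r7 */
		"mov %1,r7 ; mov r7,%2 ; svc 0 ; mov r7,%1"
		: "+r"(r0), "=&r"(r7_save)
		: "r"(n)
		: "memory");
	return r0;
}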
__ARM_ARCH_6ZK__ is a gcc-specific historical typo which may not be
defined by other compilers.
https://gcc.gnu.org/ml/gcc-patches/2015-07/msg02237.html
To avoid unexpected results when building for ARMv6KZ with clang, the
correct form of the macro (ie 6KZ) needs to be tested. The incorrect
form of the macro (ie 6ZK) still needs to be tested for compatibility
with pre-2015 versions of gcc.
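A check of the kind described then looks roughly like this (sketch; the
macro being defined is hypothetical):

/* accept either spelling when deciding whether the target is ARMv6KZ */
#if __ARM_ARCH_6KZ__ || __ARM_ARCH_6ZK__
#define TARGET_IS_ARMV6KZ 1
#endif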
Provide an ARM specific a_ctz_32 helper function for architecture
versions for which it can be implemented efficiently via the "rbit"
instruction (ie all Thumb-2 capable versions of ARM v6 and above).
Update atomic.h to provide a_ctz_l in all cases (atomic_arch.h should
now only provide a_ctz_32 and/or a_ctz_64).
The generic version of a_ctz_32 now takes advantage of a_clz_32 if
available and the generic a_ctz_64 now makes use of a_ctz_32.
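The definitions follow this description; roughly (a sketch, with the
details treated as illustrative rather than authoritative):

#include <stdint.h>

/* arm atomic_arch.h, when rbit is available (Thumb-2 capable v6+): */
#define a_ctz_32 a_ctz_32
static inline int a_ctz_32(uint32_t x)
{
	uint32_t r;
	__asm__ ("rbit %0, %1 ; clz %0, %0" : "=r"(r) : "r"(x));
	return r;
}

/* generic fallback in atomic.h, built on a_clz_32 when the arch
 * provides that instead: */
#ifndef a_ctz_32
#define a_ctz_32 a_ctz_32
static inline int a_ctz_32(uint32_t x)
{
	/* isolate the lowest set bit, then count leading zeros of it */
	return 31 - a_clz_32(x & -x);
}
#endif

/* generic a_ctz_64 in terms of a_ctz_32 */
#ifndef a_ctz_64
#define a_ctz_64 a_ctz_64
static inline int a_ctz_64(uint64_t x)
{
	uint32_t y = x;
	if (!y) return 32 + a_ctz_32(x >> 32);
	return a_ctz_32(y);
}
#endif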