The function is a nop for user mode, so just remove the calls.
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <1426496617-10702-3-git-send-email-famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
New threads always point at the same env, which is incorrect and usually
leads to a crash.
Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
start_exclusive() and end_exclusive() need to be called in pairs. The one
exception is the start_exclusive() in stop_all_tasks(), which is only used
by force_sig(), which ends by aborting; so at present that start_exclusive()
need not be paired.
queue_signal() may call force_sig(), or it may return after killing the pid
(or queueing the signal). If queue_signal() returns, stop_all_tasks() is not
called in time, and the next end_exclusive() is issued without a matching
start_exclusive().
So in arm_kernel_cmpxchg64_helper() for ARM, the end_exclusive() after
queue_signal() needs to be removed. The related commit is "97cc756
linux-user: Implement new ARM 64 bit cmpxchg kernel helper".
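For illustration, a rough sketch of the segv path's shape (the surrounding
helper body is elided and the exact layout is assumed from the description
above):

    start_exclusive();
    /* ... cmpxchg64 emulation; on a fault we fall through to the
     * signal-raising path below ... */
    end_exclusive();                          /* matches the start_exclusive() */
    queue_signal(env, info.si_signo, &info);  /* may return rather than abort */
    end_exclusive();                          /* unmatched once queue_signal() returns; removed */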
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
When support was added for TrustZone to ARM CPU emulation, we failed
to correctly update the support for the linux-user implementation of
the get/set_tls syscalls. This meant that accesses to the TPIDRURO
register via the syscalls were always using the non-secure copy of
the register even if native MRC/MCR accesses were using the secure
register. This inconsistency caused most binaries to segfault on startup
if the CPU type was explicitly set to one of the TZ-enabled ones like
cortex-a15. (The default "any" CPU doesn't have TZ enabled and so is
not affected.)
Use access_secure_reg() to determine whether we should be using
the secure or the nonsecure copy of TPIDRURO when emulating these
syscalls.
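A minimal sketch of the resulting selection logic; the banked field names
below are assumptions for illustration, not the actual CPUARMState layout:

    static uint32_t get_tpidruro(CPUARMState *env)
    {
        /* Return the copy a native MRC access would see: the secure bank
         * when access_secure_reg() says we are in the secure world, the
         * non-secure bank otherwise. */
        if (access_secure_reg(env)) {
            return env->cp15.tpidruro_s;   /* assumed field name */
        }
        return env->cp15.tpidruro_ns;      /* assumed field name */
    }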
Signed-off-by: Mikhail Ilyin <m.ilin@samsung.com>
Message-id: 1426505198-2411-1-git-send-email-m.ilin@samsung.com
[PMM: rewrote commit message to more clearly explain the issue
and its consequences.]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This reverts commit b8a173b25c, reversing
changes made to 5de090464f.
(I applied this pull request when I should not have done so, and
am now immediately reverting it.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This was the only caller of cpu_init() that was not checking for NULL
yet.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
In main.c, all SIG* should be TARGET_SIG*, since the relevant functions
(queue_signal() and gdb_handlesig()) expect TARGET_SIG*.
The corresponding vi command is "1,$ s/\<SIG/TARGET_SIG/g".
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
The function end_exclusive() isn't used on all targets; mark it as
such to avoid a clang warning.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
The start_exclusive() infrastructure is used on all target
architectures, even if only to do the "stop all CPUs before
dumping core" in force_sig(), so be consistent and call
cpu_exec_start/end in the main loop of every target.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
In the m68k cpu_loop() use get_user_u16 to read the immediate for
the simcall rather than lduw, to bring it into line with how other
archs do it and to remove another user of the ldl family of functions.
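Roughly (the address of the immediate, env->pc + 2 below, is illustrative):

    uint16_t nr;
    /* get_user_u16() goes through the guest-address accessors and lets us
     * detect a fault, instead of dereferencing the address like lduw(). */
    if (get_user_u16(nr, env->pc + 2)) {
        /* handle the fault (e.g. deliver SIGSEGV) */
    }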
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1421334118-3287-8-git-send-email-peter.maydell@linaro.org
The TARGET_HAS_ICE #define is intended to indicate whether a target-*
guest CPU implementation supports breakpoint handling. However,
all our guest CPUs have that support (the only two which do not
define TARGET_HAS_ICE are unicore32 and openrisc, and in both those
cases the bp support is present and the lack of the #define is just
a bug). So remove the #define entirely: all new guest CPU support
should include breakpoint handling as part of the basic implementation.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1420484960-32365-1-git-send-email-peter.maydell@linaro.org
Replace the 20Kc original MIPS64 ISA processor used for 64-bit user
emulation with the 5KEf processor that implements the MIPS64r2 ISA,
complementing the choice of the 24Kf processor for 32-bit emulation.
Signed-off-by: Maciej W. Rozycki <macro@codesourcery.com>
Reviewed-by: Leon Alrae <leon.alrae@imgtec.com>
Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
When EL3 is running in AArch32 (or ARMv7 with Security Extensions)
FCSEIDR, CONTEXTIDR, TPIDRURW, TPIDRURO and TPIDRPRW have a secure
and a non-secure instance.
Signed-off-by: Fabian Aggeler <aggelerf@ethz.ch>
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1416242878-876-25-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
On AArch64 the si_addr field of siginfo_t is truncated to 32 bits
because the fault address passes through a uint32_t variable.
Follow Peter's suggestion and drop the uint32_t variable,
since it is only used once in the AArch64 loop.
Reported-by: Amanieu d'Antras <amanieu@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
This patch introduces the -seed command line option and the
QEMU_RAND_SEED environment variable for setting the random seed, which
is used for the AT_RANDOM ELF aux entry.
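A hedged sketch of the idea (the PRNG and names are purely illustrative;
AT_RANDOM points at 16 bytes of random data in the ELF aux vector):

    #include <stdint.h>
    #include <stdlib.h>

    /* Fill the 16 bytes that AT_RANDOM will point at; deterministic when
     * a seed is supplied via -seed or QEMU_RAND_SEED. */
    static void fill_at_random(uint8_t bytes[16], unsigned int seed)
    {
        srand(seed);
        for (int i = 0; i < 16; i++) {
            bytes[i] = rand() & 0xff;
        }
    }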
Signed-off-by: Magnus Reftel <reftel@spotify.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
The current implementation of watchpoints requires that they
have a power of 2 length which is not greater than TARGET_PAGE_SIZE
and that their address is a multiple of their length. Watchpoints
on ARM don't fit these restrictions, so change the implementation so that
these restrictions can be relaxed.
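A minimal sketch of the relaxed hit test, assuming a watchpoint is now
described by an arbitrary (address, length) range rather than an aligned
power-of-two region:

    #include <stdbool.h>
    #include <stdint.h>

    /* True if the access [addr, addr + len) overlaps the watchpoint
     * [wp_addr, wp_addr + wp_len); no alignment or power-of-two
     * assumptions are made about either range. */
    static bool watchpoint_overlaps(uint64_t wp_addr, uint64_t wp_len,
                                    uint64_t addr, uint64_t len)
    {
        return addr < wp_addr + wp_len && wp_addr < addr + len;
    }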
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Currently the syscall instruction is buggy in x86_64 user mode: EIP is
updated only after do_syscall(), which is too late for clone(). clone()
creates the child thread at env->EIP (the address of the syscall
instruction), so the child thread enters do_syscall() again; that is not
expected, and sometimes it is fatal.
User-mode syscall instruction emulation does not use the MSRs, so its
behaviour should match INT 0x80. INT 0x80 updates EIP in do_interrupt();
do the same for syscall for consistency.
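A hedged sketch of the effect at the cpu_loop() level (the actual patch may
do the update elsewhere, e.g. in do_interrupt(); the do_syscall() argument
list is reproduced from memory and may not be exact; 2 is the length of the
0x0F 0x05 syscall opcode):

    case EXCP_SYSCALL:
        /* Advance EIP past the syscall instruction *before* do_syscall(),
         * as the INT 0x80 path already does, so a clone()d child resumes
         * after the instruction instead of re-executing it. */
        env->eip += 2;
        env->regs[R_EAX] = do_syscall(env, env->regs[R_EAX],
                                      env->regs[R_EDI], env->regs[R_ESI],
                                      env->regs[R_EDX], env->regs[10],
                                      env->regs[8], env->regs[9], 0, 0);
        break;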
Signed-off-by: Jincheng Miao <jmiao@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
The default, 970fx, doesn't support MSR_LE. So even though we set LE in
ppc_cpu_reset, it gets cleared again in hreg_store_msr. Error out if a
user-selected cpu model doesn't support LE.
Signed-off-by: Richard Henderson <rth@twiddle.net>
[agraf: switch to POWER7 as default for BE and LE]
Signed-off-by: Alexander Graf <agraf@suse.de>
OABI ARM used a software interrupt (0xef9f0001) for breakpoints.
Since 2005 gdb has used the break instruction (0xe7f001f0) for EABI.
Apparently Steel Bank Common Lisp still uses the swi instruction.
This is the kernel implementation:
http://lxr.free-electrons.com/source/arch/arm/kernel/traps.c#L598
Signed-off-by: Hunter Laux <hunterlaux@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
As a "utility", it only supported ppc, and in a way that other
tcg backends provided directly in tcg-target.h. Removing this
disparity is easier now that the two ppc backends are merged.
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Instead of getting backup auxv data from the env pointer given to main,
read it from /proc/self/auxv. We can do this at any time, so we're not
tied to any ordering wrt a call to qemu_init_auxval from main.
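A minimal, self-contained sketch of the approach (error handling and caching
trimmed; the real implementation keeps the buffer around):

    #include <fcntl.h>
    #include <unistd.h>

    /* Scan /proc/self/auxv for a given AT_* type; entries are
     * {a_type, a_val} pairs of unsigned long, terminated by AT_NULL. */
    static unsigned long auxval_from_proc(unsigned long type)
    {
        unsigned long buf[512];
        int fd = open("/proc/self/auxv", O_RDONLY);
        if (fd < 0) {
            return 0;
        }
        ssize_t n = read(fd, buf, sizeof(buf));
        close(fd);
        if (n > 0) {
            size_t count = n / sizeof(unsigned long);
            for (size_t i = 0; i + 1 < count; i += 2) {
                if (buf[i] == type) {
                    return buf[i + 1];
                }
            }
        }
        return 0;
    }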
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The size and register information are encoded into the reserve_info field
of CPU state in the store conditional translation code. Specifically, the
size is shifted left by 5 bits (see target-ppc/translate.c gen_conditional_store).
The user-mode store conditional code erroneously extracts the size by ANDing
with a 4 bit mask; this breaks if size >= 16.
Eliminate the mask to make the extraction of size mirror its encoding.
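A hedged illustration of the arithmetic, with the exact bit layout assumed
from the description above (size in the bits above the low five, register
index in the low five):

    /* encode, as in gen_conditional_store(): */
    env->reserve_info = (size << 5) | reg_index;

    /* broken decode: a 4-bit mask cannot hold size == 16 (stqcx.) */
    size = (env->reserve_info >> 5) & 0xf;   /* 16 becomes 0 */

    /* fixed decode: simply mirror the encoding */
    size = env->reserve_info >> 5;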
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
This allows running PPC64 little-endian in user mode if the target is
configured that way. In PPC64 LE user mode we set MSR.LE during initialization.
Signed-off-by: Doug Kwan <dougkwan@google.com>
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
With the next patch, these need to be correct or VM86 tasks
have the wrong CPL. The flags are basically what the Intel VMX
documentation says is mandatory for entry into a VM86 guest.
For consistency, SMM ought to have the same flags except with
CPL=0.
Tested-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Instead of manually calling cpu_x86_set_cpl() when the CPL changes,
check for CPL changes on calls to cpu_x86_load_seg_cache(R_CS). Every
location that called cpu_x86_set_cpl() also called
cpu_x86_load_seg_cache(R_CS), so cpu_x86_set_cpl() is no longer
required.
This fixes the SMM handler code as it was not setting/restoring the
CPL level manually.
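A hedged sketch of the idea for the protected-mode case only (not the actual
QEMU code; real mode and VM86 need their own handling):

    /* When CS is (re)loaded, derive CPL from the RPL bits of the new
     * selector instead of relying on a separate cpu_x86_set_cpl() call. */
    if (seg_reg == R_CS && !(env->eflags & VM_MASK)) {
        env->hflags = (env->hflags & ~HF_CPL_MASK) |
                      ((selector & 3) << HF_CPL_SHIFT);
    }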
Signed-off-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
--enable-uname-release was a rather heavyweight hammer, as it allows
providing values less than UNAME_MINIMUM_RELEASE. Also, it affects
all built linux-user targets, which in most cases is not what the
user wants.
Now that we have UNAME_MINIMUM_RELEASE for all linux-user platforms,
we can drop --enable-uname-release and the related CONFIG_UNAME_RELEASE
define.
Users can still override the variable with QEMU_UNAME=2.6.32 or the -r
command-line option. If distributors need to update the minimum version
for a specific target, it can be done by updating UNAME_MINIMUM_RELEASE.
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
For AArch32 exceptions, the only information provided about
the cause of an exception is the individual exception type (data
abort, undef, etc), which we store in cs->exception_index. For
AArch64, the CPU provides much more detail about the cause of
the exception, which can be found in the syndrome register.
Create a set of fields in CPUARMState which must be filled in
whenever an exception is raised, so that exception entry can
correctly fill in the syndrome register for the guest.
This includes the information which in AArch32 appears in
the DFAR and IFAR (fault address registers) and the DFSR
and IFSR (fault status registers) for data aborts and
prefetch aborts, since if we end up taking the MMU fault
to AArch64 rather than AArch32 this will need to end up
in different system registers.
This patch does a refactoring which moves the setting of the
AArch32 DFAR/DFSR/IFAR/IFSR from the point where the exception
is raised to the point where it is taken. (This is no change
for cores with an MMU, retains the existing clearly incorrect
behaviour for ARM946 of trashing the MP access permissions
registers which share the c5_data and c5_insn state fields,
and has no effect for v7M because we don't implement its
MPU fault status or address registers.)
As a side effect of the cleanup we fix a bug in the AArch64
linux-user mode code where we were passing a 64 bit fault
address through the 32 bit c6_data/c6_insn fields: it now
goes via the always-64-bit exception.vaddress.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Most targets were using offsetof(CPUFooState, breakpoints) to determine
how much of CPUFooState to clear on reset. Use the next field after
CPU_COMMON instead, if any, or sizeof(CPUFooState) otherwise.
Signed-off-by: Andreas Färber <afaerber@suse.de>
* remotes/riku/linux-user-for-upstream:
linux-user: set minimum kernel version to 2.6.32
linux-user: correct handling of break exception for MIPS
linux-user: translate signal number on return from sigtimedwait
linux-user: Implement sendmmsg syscall
linux-user: Fix getresuid, getresgid if !USE_UID16
linux-user: Don't use UID16 on AArch64
linux-user: AArch64: Implement SA_RESTORER for signal handlers
linux-user/signal.c: Fix AArch64 big-endian FP register restore
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch adds the Store Quadword Conditional (stqcx.) instruction,
which was introduced in Power ISA 2.07.
Signed-off-by: Tom Musta <tommusta@gmail.com>
[agraf: fix compile error when !TARGET_PPC64]
Signed-off-by: Alexander Graf <agraf@suse.de>
The exception raised by the break instruction was not being correctly
propagated as SIGTRAP. This resolves crashes in programs that use the
break instruction on MIPS.
Signed-off-by: Petar Jovanovic <petar.jovanovic@imgtec.com>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Signed-off-by: Janne Grunau <j@jannau.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This implements exclusive loads/stores for aarch64 along the lines of the
arm32 and ppc implementations. The exclusive load remembers the address
and loaded value. The exclusive store raises an exception which uses
those values to check for equality in a proper exclusive region.
This is not actually the architecture mandated semantics (for either
AArch32 or AArch64) but it is close enough for typical guest code
sequences to work correctly, and saves us from having to monitor all
guest stores. It's fairly easy to come up with test cases where we
don't behave like hardware - we don't for example model cache line
behaviour. However in the common patterns this works, and the existing
32 bit ARM exclusive access implementation has the same limitations.
AArch64 also implements new acquire/release loads/stores (which may be
either exclusive or non-exclusive). These impose extra ordering
constraints on memory operations (i.e. they act as if they have an implicit
barrier built into them). As TCG is single-threaded, all our barriers
are no-ops, so these just behave like normal loads and stores.
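A much-simplified sketch of the load/store-exclusive scheme described above
(illustrative only; the real store-exclusive path raises an exception and
performs the check inside the exclusive region):

    /* Load-exclusive: remember where and what we loaded. */
    env->exclusive_addr = addr;
    env->exclusive_val = loaded_val;

    /* Store-exclusive, reached via the exception: succeed only if memory
     * still holds the remembered value at the remembered address. */
    if (addr == env->exclusive_addr && current_val == env->exclusive_val) {
        /* perform the store; report success (0) to the guest */
    } else {
        /* leave memory unchanged; report failure (1) to the guest */
    }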
Signed-off-by: Michael Matz <matz@suse.de>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
In preparation for adding support for A64 load/store exclusive instructions,
widen the fields in the CPU state struct that deal with address and data values
for exclusives from 32 to 64 bits. Although in practice AArch64 and AArch32
exclusive accesses will be generally separate there are some odd theoretical
corner cases (e.g. you should be able to do the exclusive load in AArch32, take
an exception to AArch64 and successfully do the store exclusive there), and it's
also easier to reason about.
The changes in semantics for the variables are:
exclusive_addr -> extended to 64 bits; -1ULL for "monitor lost",
otherwise always < 2^32 for AArch32
exclusive_val -> extended to 64 bits. 64 bit exclusives in AArch32 now
use the high half of exclusive_val instead of a separate exclusive_high
exclusive_high -> is no longer used in AArch32; extended to 64 bits as
it will be needed for AArch64's pair-of-64-bit-values exclusives.
exclusive_test -> extended to 64 bits, as it is an address. Since this is
a linux-user-only field, in arm-linux-user it will always have the top
32 bits zero.
exclusive_info -> stays 32 bits, as it is neither data nor address, but
simply holds register indexes etc. AArch64 will be able to fit all its
information into 32 bits as well.
Note that the refactoring of gen_store_exclusive() coincidentally fixes
a minor bug where ldrexd would incorrectly update the first CPU register
even if the load for the second register faulted.
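As a hedged aside, the new exclusive_val packing for AArch32 64-bit
exclusives described in the list above might look roughly like this (the
names and the use of QEMU's deposit64() helper are illustrative):

    /* ldrexd: pack both 32-bit words into the single 64-bit field that
     * previously needed exclusive_val plus exclusive_high. */
    env->exclusive_val = deposit64(lo_word, 32, 32, hi_word);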
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
The common pattern for system registers in a 64-bit capable ARM
CPU is that when in AArch32 the cp15 register is a view of the
bottom 32 bits of the 64-bit AArch64 system register; writes in
AArch32 leave the top half unchanged. The most natural way to
model this is to have the state field in the CPU struct be a
64 bit value, and simply have the AArch32 TCG code operate on
a pointer to its lower half.
For aarch64-linux-user the only registers we need to share like
this are the thread-local-storage ones. Widen their fields to
64 bits and provide the 64 bit reginfo struct to make them
visible in AArch64 state. Note that minor cleanup of the AArch64
system register encoding space means we can share the TPIDR_EL1
reginfo but need split encodings for TPIDR_EL0 and TPIDRRO_EL0.
Since we're touching almost every line in QEMU that uses the
c13_tls* fields in this patch anyway, we take the opportunity
to rename them in line with the standard ARM architectural names
for these registers.
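As a hedged illustration of the write semantics described above (the actual
TCG code achieves this by operating on a pointer to the low half of the
field):

    #include <stdint.h>

    /* An AArch32 MCR write to one of these registers updates only the low
     * 32 bits; the top half, visible only from AArch64, is preserved. */
    static void aarch32_sysreg_write(uint64_t *field, uint32_t value)
    {
        *field = (*field & ~(uint64_t)UINT32_MAX) | value;
    }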
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
With this we no longer pass down envp, and thus all systems can have
the same void prototype. So also eliminate a useless thunk.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Since this is only read in cpu_copy() and linux-user has a global
cpu_model, drop the field from generic code.
Signed-off-by: Andreas Färber <afaerber@suse.de>
It is only used there, and is deemed very fragile, if not incorrect, in its
current memcpy() form. Moving it into linux-user will allow moving
parts into target_cpu.h headers and only copying what the ABI mandates.
Signed-off-by: Andreas Färber <afaerber@suse.de>
microMIPS instructions that cause breakpoint exceptions come in
16-bit and 32-bit variants. When handling exceptions caused by
such instructions, the instruction type needs to be taken into
account when extracting the break code.
The code has also been restructured for better clarity.
Signed-off-by: Kwok Cheung Yeung <kcy@codesourcery.com>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
The binfmt_misc module can calculate the credentials and security
token according to the binary instead of to the interpreter if the
'C' flag is enabled.
To be able to execute non-readable binaries, this flag implies the 'O'
flag. When the 'O' flag is enabled, binfmt_misc opens the file for
reading and passes the file descriptor to the interpreter.
References:
linux/Documentation/binfmt_misc.txt ['O' and 'C' description]
linux/fs/binfmt_misc.c linux/fs/binfmt_elf.c [ AT_EXECFD usage ]
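A small, self-contained sketch of how an interpreter can pick up the
pre-opened binary when started this way (glibc's getauxval() is used here
for brevity; QEMU's loader reads the aux vector itself):

    #include <elf.h>        /* AT_EXECFD */
    #include <sys/auxv.h>   /* getauxval() */

    /* Return the descriptor binfmt_misc opened for us via AT_EXECFD,
     * or -1 if we were not started that way. */
    static int get_execfd(void)
    {
        unsigned long fd = getauxval(AT_EXECFD);
        return fd ? (int)fd : -1;
    }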
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
The name field of MIPS_SYS isn't actually used; it's just documentation.
But adjust the umount entries to match mips/syscall_nr.h anyway.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
This patch adds support for AArch64 in all the small corners of
linux-user (primarily in image loading and startup code).
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: John Rigby <john.rigby@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1378235544-22290-22-git-send-email-peter.maydell@linaro.org
Message-id: 1368505980-17151-11-git-send-email-john.rigby@linaro.org
[PMM:
* removed some unnecessary #defines from syscall.h
* catch attempts to use a 32 bit only cpu with aarch64-linux-user
* termios stuff moved into its own patch
* we specify our minimum uname version here now
]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>