We were setting SRR0 to the instruction before the one causing the
alignment exception. A quick testcase:
        . = 0x100
        .globl _start
_start:
        /* Cause a 0x600 */
        li      3,0x1
        stwcx.  3,0,3
1:      b       1b

        . = 0x600
1:      b       1b
Built into something we can load as a BIOS image:
gcc -mbig -c test.S
ld -EB -Ttext 0x0 -o test test.o
objcopy -O binary test test.bin
Run with:
qemu-system-ppc64 -nographic -bios test.bin
Shows an incorrect SRR0 (points at the li):
SRR0 0000000000000100
With the patch we get the correct SRR0:
SRR0 0000000000000104
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Add the main working flow, system call processing, and ELF64 tilegx
binary loading, based on the Linux kernel's tilegx 64-bit implementation.
[rth: Moved all of the implementation of atomic instructions to a later patch.]
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <BLU436-SMTP938552D42808AA60634582B9660@phx.gbl>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
* Support for jemalloc
* qemu_mutex_lock_iothread "No such process" fix
* cutils: qemu_strto* wrappers
* iohandler.c simplification
* Many other fixes and misc patches.
And some MTTCG work (with Emilio's fixes squashed):
* Signal-free TCG kick
* Removing spinlock in favor of QemuMutex
* User-mode emulation multi-threading fixes/docs
# gpg: Signature made Thu 10 Sep 2015 09:03:07 BST using RSA key ID 78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
* remotes/bonzini/tags/for-upstream: (44 commits)
cutils: work around platform differences in strto{l,ul,ll,ull}
cpu-exec: fix lock hierarchy for user-mode emulation
exec: make mmap_lock/mmap_unlock globally available
tcg: comment on which functions have to be called with mmap_lock held
tcg: add memory barriers in page_find_alloc accesses
remove unused spinlock.
replace spinlock by QemuMutex.
cpus: remove tcg_halt_cond and tcg_cpu_thread globals
cpus: protect work list with work_mutex
scripts/dump-guest-memory.py: fix after RAMBlock change
configure: Add support for jemalloc
add macro file for coccinelle
configure: factor out adding disas configure
vhost-scsi: fix wrong vhost-scsi firmware path
checkpatch: remove tests that are not relevant outside the kernel
checkpatch: adapt some tests to QEMU
CODING_STYLE: update mixed declaration rules
qmp: Add example usage of strto*l() qemu wrapper
cutils: Add qemu_strtoull() wrapper
cutils: Add qemu_strtoll() wrapper
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Warnings from the Sparse static analysis tool:
linux-user/main.c:40:12: warning:
symbol 'filename' was not declared. Should it be static?
linux-user/main.c:41:12: warning:
symbol 'argv0' was not declared. Should it be static?
linux-user/main.c:42:5: warning:
symbol 'gdbstub_port' was not declared. Should it be static?
linux-user/main.c:43:11: warning:
symbol 'envlist' was not declared. Should it be static?
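The fix Sparse suggests is to give these file-scope symbols internal
linkage. A minimal sketch of what that looks like; the types here are
assumptions based on the warning locations, not copied from main.c:

    /* Illustrative only: declarations made static to silence Sparse. */
    static const char *filename;
    static const char *argv0;
    static int gdbstub_port;
    static void *envlist;   /* stand-in for the real envlist type */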
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
spinlock is only used in two cases:
* cpu-exec.c: to protect TranslationBlock
* mem_helper.c: for the lock helper in target-i386 (which seems broken).
It's a pthread_mutex_t in user-mode, so we can use QemuMutex directly,
with an #ifdef. The #ifdef will be removed once multithreaded TCG
needs the mutex as well.
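The shape of the change, sketched as a standalone fragment with
pthread_mutex_t standing in for QemuMutex; the helper names are
hypothetical, this is not the patch itself:

    #include <pthread.h>

    #ifdef CONFIG_USER_ONLY
    /* In user mode the old spinlock was already a pthread mutex, so a
     * plain mutex (QemuMutex in the real code) can be used directly. */
    static pthread_mutex_t tb_lock = PTHREAD_MUTEX_INITIALIZER;

    static void tb_lock_acquire(void) { pthread_mutex_lock(&tb_lock); }
    static void tb_lock_release(void) { pthread_mutex_unlock(&tb_lock); }
    #else
    /* System mode still runs a single TCG thread, so the lock can stay
     * a no-op until MTTCG needs it as well. */
    static void tb_lock_acquire(void) { }
    static void tb_lock_release(void) { }
    #endif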
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Message-Id: <1439220437-23957-5-git-send-email-fred.konrad@greensocs.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
[Merge Emilio G. Cota's patch to remove volatile. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
For the A64 instruction set, the semihosting call instruction
is 'HLT 0xf000'. Wire this up to call do_arm_semihosting()
if semihosting is enabled.
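For illustration, one way to recognize the semihosting call in a fetched
A64 instruction word; the constants follow my reading of the ARMv8
exception-generation encodings and the function name is made up, so
treat this as a sketch rather than the patch:

    #include <stdbool.h>
    #include <stdint.h>

    /* HLT #imm16 encodes as 0xd4400000 | (imm16 << 5); semihosting uses
     * imm16 == 0xf000, i.e. the word 0xd45e0000. */
    static bool is_a64_semihosting_insn(uint32_t insn)
    {
        return (insn & 0xffe0001f) == 0xd4400000 &&
               ((insn >> 5) & 0xffff) == 0xf000;
    }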
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Christopher Covington <christopher.covington@linaro.org>
Tested-by: Christopher Covington <cov@codeaurora.org>
Message-id: 1439483745-28752-10-git-send-email-peter.maydell@linaro.org
All TCG host architectures now support the guest base, and as there is
no real performance loss, it can always be enabled.
Guest base use can still be disabled at run time by setting the guest
base to 0.
CONFIG_USE_GUEST_BASE is defined as (USE_GUEST_BASE && USER_ONLY), so it
should simply be replaced by CONFIG_USER_ONLY, but as some other parts
are already using !CONFIG_SOFTMMU I have chosen to use !CONFIG_SOFTMMU
instead.
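For context, the guest base itself is just an additive offset applied to
every guest address in user mode, which is why forcing it to 0 makes the
feature free; a minimal sketch of the idea (the function name is mine,
not QEMU's actual helper):

    #include <stdint.h>

    static uintptr_t guest_base;   /* 0 disables the offset entirely */

    /* Guest-to-host address translation in user mode is a single
     * additive offset. */
    static void *guest_to_host(uint64_t guest_addr)
    {
        return (void *)(uintptr_t)(guest_addr + guest_base);
    }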
Reviewed-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <1440373328-9788-2-git-send-email-laurent@vivier.eu>
Signed-off-by: Richard Henderson <rth@twiddle.net>
For the MIPS N64 ABI, when QEMU reads the break/trap instruction so that
it can inspect the break/trap code, it reads 8 rather than 4 bytes,
which means it finds the code field of the instruction after the
break/trap instruction. This then causes the break/trap handling
code to fail because it does not understand the code number.
The fix forces QEMU to always read 4 bytes of instruction data rather
than deciding how much to read based on the ABI.
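A sketch of the fixed extraction, always operating on a single 4-byte
instruction word; the field position is my reading of the MIPS BREAK
encoding (code in bits [25:6]) and the function name is made up:

    #include <stdint.h>

    /* Read exactly one 32-bit instruction and pull out the break code,
     * independent of the ABI in use. */
    static uint32_t mips_break_code(uint32_t insn)
    {
        return (insn >> 6) & 0xfffff;   /* bits [25:6] hold the code */
    }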
Signed-off-by: Andrew Bennett <andrew.bennett@imgtec.com>
Reviewed-by: Leon Alrae <leon.alrae@imgtec.com>
Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
Remove unneeded usages of ENV_GET_CPU() by converting the APIs to use
CPUState pointers and retrieving the env_ptr only where it is needed.
Scripted conversion for target-* change:
for I in target-*/cpu.h; do
sed -i \
's/\(^int cpu_[^_]*_exec(\)[^ ][^ ]* \*s);$/\1CPUState *cpu);/' \
$I;
done
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
All call sites of this function navigate cpu->env_ptr only for the
function to take the env pointer back to the original CPU pointer. Change
the function to just pass in the CPU pointer instead. This removes a core
code usage of ENV_GET_CPU() (in gdbstub.c).
Cc: Riku Voipio <riku.voipio@iki.fi>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
In qemu-linux-user, calls to gethostbyname2() were hanging in
.__res_nmkquery():
(gdb) bt
0 in .__res_nmkquery () from /lib64/libresolv.so.2
1 in .__libc_res_nquery () from /lib64/libresolv.so.2
2 in .__libc_res_nsearch () from /lib64/libresolv.so.2
3 in ._nss_dns_gethostbyname3_r () from /lib64/libnss_dns.so.2
4 in ._nss_dns_gethostbyname2_r () from /lib64/libnss_dns.so.2
5 in .gethostbyname2_r () from /lib64/libc.so.6
6 in .gethostbyname2 () from /lib64/libc.so.6
.__res_nmkquery() is:
...
do { RANDOM_BITS (randombits); } while ((randombits & 0xffff) == 0);
...
<.__res_nmkquery+112>: mftbl r11
<.__res_nmkquery+116>: clrlwi r10,r11,16
<.__res_nmkquery+120>: cmpwi cr7,r10,0
<.__res_nmkquery+124>: beq cr7,<.__res_nmkquery+112>
but as mftbl (Move From Time Base Lower) is not implemented,
r11 is always 0, so we have an infinite loop.
This patch fills the Time Base register with cpu_get_real_ticks().
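The Time Base is just a free-running 64-bit counter that is read 32 bits
at a time (mftbl for the low half, mftbu for the high half). A standalone
sketch of that split, with clock_gettime() standing in for
cpu_get_real_ticks() and the function names invented for the example:

    #include <stdint.h>
    #include <time.h>

    static uint64_t real_ticks(void)           /* stand-in tick source */
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
    }

    /* mftbl: low 32 bits; mftbu: high 32 bits of the same counter. */
    static uint32_t read_tbl(void) { return (uint32_t)real_ticks(); }
    static uint32_t read_tbu(void) { return (uint32_t)(real_ticks() >> 32); }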
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Alexander Graf <agraf@suse.de>
When a thread is spawned, cpu_copy() re-initializes the breakpoint and
watchpoint lists of the current thread instead of those of the new
thread.
The effect is that breakpoints are no longer hit.
Signed-off-by: Thierry Bultel <thierry.bultel@basystemes.fr>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Remove cpu_smm_register and cpu_smm_update. Instead, each CPU
address space gets an extra region which is an alias of
/machine/smram. This extra region is enabled or disabled
as the CPU enters/exits SMM.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The function is a nop for user mode, so just remove the calls to it.
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <1426496617-10702-3-git-send-email-famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
New threads always point at the same env, which is incorrect and usually
leads to a crash.
Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
start_exclusive() and end_exclusive() need to come in pairs. The one
exception is the start_exclusive() in stop_all_tasks(), which is only
used by force_sig(), which aborts; so at present that start_exclusive()
need not be paired.
queue_signal() may call force_sig(), or it may return after killing the
pid (or queueing the signal). If queue_signal() can return, stop_all_tasks()
is not called, and the next end_exclusive() would be unbalanced.
So in arm_kernel_cmpxchg64_helper() for ARM, the end_exclusive() after
queue_signal() needs to be removed. The related commit is "97cc756
linux-user: Implement new ARM 64 bit cmpxchg kernel helper".
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
When support was added for TrustZone to ARM CPU emulation, we failed
to correctly update the support for the linux-user implementation of
the get/set_tls syscalls. This meant that accesses to the TPIDRURO
register via the syscalls were always using the non-secure copy of
the register even if native MRC/MCR accesses were using the secure
register. This inconsistency caused most binaries to segfault on startup
if the CPU type was explicitly set to one of the TZ-enabled ones like
cortex-a15. (The default "any" CPU doesn't have TZ enabled and so is
not affected.)
Use access_secure_reg() to determine whether we should be using
the secure or the nonsecure copy of TPIDRURO when emulating these
syscalls.
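The essence of the fix, picking one of two banked copies of the register
based on the security state, can be sketched standalone like this; the
array layout and function names are hypothetical, not QEMU's cp15 state:

    #include <stdbool.h>
    #include <stdint.h>

    /* One secure and one non-secure copy of TPIDRURO. */
    static uint32_t tpidruro_banked[2];  /* [0] = non-secure, [1] = secure */

    static uint32_t get_tls(bool secure_regs)
    {
        return tpidruro_banked[secure_regs ? 1 : 0];
    }

    static void set_tls(bool secure_regs, uint32_t value)
    {
        tpidruro_banked[secure_regs ? 1 : 0] = value;
    }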
Signed-off-by: Mikhail Ilyin <m.ilin@samsung.com>
Message-id: 1426505198-2411-1-git-send-email-m.ilin@samsung.com
[PMM: rewrote commit message to more clearly explain the issue
and its consequences.]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This reverts commit b8a173b25c, reversing
changes made to 5de090464f.
(I applied this pull request when I should not have done so, and
am now immediately reverting it.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This was the only caller of cpu_init() that was not checking for NULL
yet.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
In main.c, all SIG* should be TARGET_SIG*, since the relevant functions
(queue_signal() and gdb_handlesig()) expect TARGET_SIG*.
The corresponding vi command is "1,$ s/\<SIG/TARGET_SIG/g".
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
The function end_exclusive() isn't used on all targets; mark it as
such to avoid a clang warning.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
The start_exclusive() infrastructure is used on all target
architectures, even if only to do the "stop all CPUs before
dumping core" in force_sig(), so be consistent and call
cpu_exec_start/end in the main loop of every target.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
In the m68k cpu_loop() use get_user_u16 to read the immediate for
the simcall rather than lduw, to bring it into line with how other
archs do it and to remove another user of the ldl family of functions.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1421334118-3287-8-git-send-email-peter.maydell@linaro.org
The TARGET_HAS_ICE #define is intended to indicate whether a target-*
guest CPU implementation supports the breakpoint handling. However,
all our guest CPUs have that support (the only two which do not
define TARGET_HAS_ICE are unicore32 and openrisc, and in both those
cases the bp support is present and the lack of the #define is just
a bug). So remove the #define entirely: all new guest CPU support
should include breakpoint handling as part of the basic implementation.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1420484960-32365-1-git-send-email-peter.maydell@linaro.org
Replace the 20Kc original MIPS64 ISA processor used for 64-bit user
emulation with the 5KEf processor that implements the MIPS64r2 ISA,
complementing the choice of the 24Kf processor for 32-bit emulation.
Signed-off-by: Maciej W. Rozycki <macro@codesourcery.com>
Reviewed-by: Leon Alrae <leon.alrae@imgtec.com>
Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
When EL3 is running in AArch32 (or ARMv7 with Security Extensions)
FCSEIDR, CONTEXTIDR, TPIDRURW, TPIDRURO and TPIDRPRW have a secure
and a non-secure instance.
Signed-off-by: Fabian Aggeler <aggelerf@ethz.ch>
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1416242878-876-25-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
On AArch64 the si_addr field of siginfo_t is truncated to 32 bits
because the fault address passes through a uint32_t variable.
Follow Peter's suggestion and drop the uint32_t variable,
since it is only used once in the AArch64 loop.
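The failure mode is plain integer truncation; a tiny standalone
demonstration, unrelated to the signal code itself:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t fault_addr = UINT64_C(0x0000aaaadeadbeef); /* 64-bit VA */
        uint32_t truncated  = (uint32_t)fault_addr;         /* upper bits lost */

        printf("full: 0x%016" PRIx64 "  truncated: 0x%08" PRIx32 "\n",
               fault_addr, truncated);
        return 0;
    }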
Reported-by: Amanieu d'Antras <amanieu@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
This patch introduces the -seed command line option and the
QEMU_RAND_SEED environment variable for setting the random seed, which
is used for the AT_RANDOM ELF aux entry.
Signed-off-by: Magnus Reftel <reftel@spotify.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
The current implementation of watchpoints requires that they
have a power of 2 length which is not greater than TARGET_PAGE_SIZE
and that their address is a multiple of their length. Watchpoints
on ARM don't fit these restrictions, so change the implementation
so they can be relaxed.
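The old restriction, written out as a predicate, is what the change
relaxes; a sketch with an assumed page size and an invented function
name:

    #include <stdbool.h>
    #include <stdint.h>

    #define TARGET_PAGE_SIZE 4096   /* assumed value for the sketch */

    /* Pre-change constraint: power-of-two length no larger than a page,
     * with the address a multiple of the length. ARM watchpoints can
     * violate all of these, hence the generalization. */
    static bool old_watchpoint_ok(uint64_t addr, uint64_t len)
    {
        return len != 0 &&
               (len & (len - 1)) == 0 &&       /* power of two */
               len <= TARGET_PAGE_SIZE &&
               (addr & (len - 1)) == 0;        /* aligned to length */
    }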
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Currently the syscall instruction is buggy in user-mode x86_64
emulation: EIP is updated after do_syscall(), which is too late for
clone(). clone() creates a thread at env->EIP (the address of the
syscall instruction), so the child thread enters do_syscall() again,
which is not expected and is sometimes fatal.
User-mode syscall emulation does not use the MSRs, so the behaviour
should match INT 0x80. INT 0x80 updates EIP in do_interrupt(); do the
same for syscall for consistency.
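A toy demonstration of the ordering problem, with fork() standing in for
the emulated clone(): the child inherits whatever PC value exists at
creation time, so advancing it only afterwards leaves the child pointing
back at the syscall instruction (illustrative only, not QEMU code):

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned long pc = 0x1000;  /* pretend: EIP of the 2-byte syscall insn */

        pid_t pid = fork();         /* stands in for the emulated clone() */
        if (pid == 0) {
            /* The child copied pc before it was advanced, so it still
             * points at the syscall and would re-execute it. */
            printf("child  pc = 0x%lx\n", pc);
            _exit(0);
        }

        pc += 2;                    /* parent advances past the insn, but too late */
        printf("parent pc = 0x%lx\n", pc);
        waitpid(pid, NULL, 0);
        return 0;
    }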
Signed-off-by: Jincheng Miao <jmiao@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
The default, 970fx, doesn't support MSR_LE. So even though we set LE in
ppc_cpu_reset, it gets cleared again in hreg_store_msr. Error out if a
user-selected cpu model doesn't support LE.
Signed-off-by: Richard Henderson <rth@twiddle.net>
[agraf: switch to POWER7 as default for BE and LE]
Signed-off-by: Alexander Graf <agraf@suse.de>
OABI ARM used a software interrupt (0xef9f0001) for breakpoints.
Since 2005 gdb has used the break instruction (0xe7f001f0) for EABI.
Apparently Steel Bank Common Lisp still uses the swi instruction.
This is the kernel implementation:
http://lxr.free-electrons.com/source/arch/arm/kernel/traps.c#L598
Signed-off-by: Hunter Laux <hunterlaux@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
As a "utility", it only supported ppc, and in a way that other
tcg backends provided directly in tcg-target.h. Removing this
disparity is easier now that the two ppc backends are merged.
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Instead of getting backup auxv data from the env pointer given to main,
read it from /proc/self/auxv. We can do this at any time, so we're not
tied to any ordering wrt a call to qemu_init_auxval from main.
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The size and register information are encoded into the reserve_info field
of CPU state in the store conditional translation code. Specifically, the
size is shifted left by 5 bits (see target-ppc/translate.c gen_conditional_store).
The user-mode store conditional code erroneously extracts the size by ANDing
with a 4 bit mask; this breaks if size >= 16.
Eliminate the mask to make the extraction of size mirror its encoding.
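Following the description above, the encoding and the corrected decode
look roughly like this; the 5-bit low field is assumed to hold the
register number and the helper names are invented, so treat it as a
sketch rather than the translator's actual code:

    #include <stdint.h>

    /* Encode: the size lives in the upper bits, shifted left by 5. */
    static uint32_t encode_reserve_info(uint32_t size, uint32_t reg)
    {
        return (size << 5) | (reg & 0x1f);
    }

    /* Decode (fixed): no 4-bit mask on the size, so size >= 16 (e.g. a
     * 16-byte store conditional) survives the round trip. */
    static uint32_t reserve_size(uint32_t info) { return info >> 5; }
    static uint32_t reserve_reg(uint32_t info)  { return info & 0x1f; }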
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
This allows running PPC64 little-endian in user mode if the target is
configured that way. In PPC64 LE user mode we set MSR.LE during
initialization.
Signed-off-by: Doug Kwan <dougkwan@google.com>
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
With the next patch, these need to be correct or VM86 tasks
have the wrong CPL. The flags are basically what the Intel VMX
documentation says is mandatory for entry into a VM86 guest.
For consistency, SMM ought to have the same flags except with
CPL=0.
Tested-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Instead of manually calling cpu_x86_set_cpl() when the CPL changes,
check for CPL changes on calls to cpu_x86_load_seg_cache(R_CS). Every
location that called cpu_x86_set_cpl() also called
cpu_x86_load_seg_cache(R_CS), so cpu_x86_set_cpl() is no longer
required.
This fixes the SMM handler code as it was not setting/restoring the
CPL level manually.
Signed-off-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
--enable-uname-release was a rather heavyweight hammer, as it allows
providing values less than UNAME_MINIMUM_RELEASE. Also, it affects
all built linux-user targets, which in most cases is not what the user
wants.
Now that we have UNAME_MINIMUM_RELEASE for all linux-user platforms,
we can drop --enable-uname-release and the related CONFIG_UNAME_RELEASE
define.
Users can still override the variable with QEMU_UNAME=2.6.32 or -r
command line option. If distributors need to update a minimum version
for a specific target, it can be done by updating UNAME_MINIMUM_RELEASE.
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
For AArch32 exceptions, the only information provided about
the cause of an exception is the individual exception type (data
abort, undef, etc), which we store in cs->exception_index. For
AArch64, the CPU provides much more detail about the cause of
the exception, which can be found in the syndrome register.
Create a set of fields in CPUARMState which must be filled in
whenever an exception is raised, so that exception entry can
correctly fill in the syndrome register for the guest.
This includes the information which in AArch32 appears in
the DFAR and IFAR (fault address registers) and the DFSR
and IFSR (fault status registers) for data aborts and
prefetch aborts, since if we end up taking the MMU fault
to AArch64 rather than AArch32 this will need to end up
in different system registers.
This patch does a refactoring which moves the setting of the
AArch32 DFAR/DFSR/IFAR/IFSR from the point where the exception
is raised to the point where it is taken. (This is no change
for cores with an MMU, retains the existing clearly incorrect
behaviour for ARM946 of trashing the MP access permissions
registers which share the c5_data and c5_insn state fields,
and has no effect for v7M because we don't implement its
MPU fault status or address registers.)
As a side effect of the cleanup we fix a bug in the AArch64
linux-user mode code where we were passing a 64 bit fault
address through the 32 bit c6_data/c6_insn fields: it now
goes via the always-64-bit exception.vaddress.
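For orientation, a syndrome value packs an exception class, an
instruction-length bit and class-specific detail into a single word; a
sketch assuming the ESR-style layout (EC in bits [31:26], IL in bit 25,
ISS in bits [24:0]) as I understand it, not code from the patch:

    #include <stdint.h>

    static uint32_t make_syndrome(uint32_t ec, uint32_t il, uint32_t iss)
    {
        return ((ec & 0x3f) << 26) | ((il & 1) << 25) | (iss & 0x01ffffff);
    }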
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Most targets were using offsetof(CPUFooState, breakpoints) to determine
how much of CPUFooState to clear on reset. Use the next field after
CPU_COMMON instead, if any, or sizeof(CPUFooState) otherwise.
Signed-off-by: Andreas Färber <afaerber@suse.de>