Update the SPARC main loop and sigreturn code:
* on TARGET_ERESTARTSYS, wind guest PC backwards to repeat syscall insn
* set all guest CPU state within signal.c code on sigreturn
* handle TARGET_QEMU_ESIGRETURN in the main loop as the indication
that the main loop should not touch any guest CPU state
Signed-off-by: Timothy Edward Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
Message-id: 1441497448-32489-9-git-send-email-T.E.Baldwin99@members.leeds.ac.uk
[PMM: Commit message tweaks; drop TARGET_USE_ERESTARTSYS define]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Update the 32-bit and 64-bit ARM main loop and sigreturn code:
* on TARGET_ERESTARTSYS, wind guest PC backwards to repeat syscall insn
* set all guest CPU state within signal.c code on sigreturn
* handle TARGET_QEMU_ESIGRETURN in the main loop as the indication
that the main loop should not touch any guest CPU state
Signed-off-by: Timothy Edward Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
Message-id: 1441497448-32489-6-git-send-email-T.E.Baldwin99@members.leeds.ac.uk
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: tweak commit message; drop TARGET_USE_ERESTARTSYS define]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Update the x86 main loop and sigreturn code:
* on TARGET_ERESTARTSYS, wind guest PC backwards to repeat syscall insn
* set all guest CPU state within signal.c code rather than passing it
back out as the "return code" from do_sigreturn()
* handle TARGET_QEMU_ESIGRETURN in the main loop as the indication
that the main loop should not touch EAX
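Roughly, the resulting pattern in the i386 cpu_loop looks like the sketch
below (the argument list, field names and the two-byte instruction length
are illustrative assumptions, not a quote of linux-user/main.c):

    ret = do_syscall(env, env->regs[R_EAX],
                     env->regs[R_EBX], env->regs[R_ECX],
                     env->regs[R_EDX], env->regs[R_ESI],
                     env->regs[R_EDI], env->regs[R_EBP], 0, 0);
    if (ret == -TARGET_ERESTARTSYS) {
        env->eip -= 2;              /* re-execute the 2-byte syscall/int insn */
    } else if (ret != -TARGET_QEMU_ESIGRETURN) {
        env->regs[R_EAX] = ret;     /* on ESIGRETURN signal.c already set EAX */
    }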
Signed-off-by: Timothy Edward Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
Message-id: 1441497448-32489-5-git-send-email-T.E.Baldwin99@members.leeds.ac.uk
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: Commit message tweaks; drop TARGET_USE_ERESTARTSYS define]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
x86_cpudef_init() doesn't do anything anymore, so cpudef_init(),
cpudef_setup(), and x86_cpudef_init() can finally be removed.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
exec-all.h contains TCG-specific definitions. It is not needed outside
TCG-specific files such as translate.c, exec.c or *helper.c.
One generic function had snuck into include/exec/exec-all.h; move it to
include/qom/cpu.h.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This decouples logging further from config-target.h
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The new-in-ARMv8 YIELD instruction has been implemented to throw
an EXCP_YIELD back up to the QEMU main loop. In system emulation
we use this to decide to schedule a different guest CPU in SMP
configurations. In usermode emulation there is nothing to do,
so just ignore it and resume the guest.
This prevents an abort with "unhandled CPU exception 0x10004"
if the guest process uses the YIELD instruction.
Reported-by: Hunter Laux <hunterlaux@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1456833171-31900-1-git-send-email-peter.maydell@linaro.org
Move declarations out of qemu-common.h for functions declared in
utils/ files: e.g. include/qemu/path.h for utils/path.c.
Move inline functions out of qemu-common.h and into new files (e.g.
include/qemu/bcd.h)
Signed-off-by: Veronia Bahaa <veroniabahaa@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Now that CPSR.E is set correctly, prepare for when setend will be able
to change it; bswap data in and out of strex manually by comparing
SCTLR.B, CPSR.E and TARGET_WORDS_BIGENDIAN (we do not have the luxury
of using TCGMemOps).
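A sketch of the idea (the helper name matches the PC-changes note below;
the arm_sctlr_b() accessor and the strex-side check are assumptions, not a
quote of the patch):

    static inline bool arm_cpu_data_is_big_endian(CPUARMState *env)
    {
        /* BE32 (SCTLR.B) and BE8 (CPSR.E) both make data accesses big-endian */
        return arm_sctlr_b(env) || (env->uncached_cpsr & CPSR_E);
    }

    /* in the user-mode strex helper: */
    #ifdef TARGET_WORDS_BIGENDIAN
        if (!arm_cpu_data_is_big_endian(env)) {
            newval = bswap32(newval);   /* guest data is LE, build is BE */
        }
    #else
        if (arm_cpu_data_is_big_endian(env)) {
            newval = bswap32(newval);   /* guest data is BE, build is LE */
        }
    #endif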
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[ PC changes:
* Moved SCTLR/CPSR logic to arm_cpu_data_is_big_endian
]
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If doing big-endian linux-user mode, set both the CPSR.E and SCTLR.E0E
bits. This sets big-endian mode for data accesses.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
bswap_code is a CPU property of sorts ("is the iside endianness the
opposite way round to TARGET_WORDS_BIGENDIAN?") but it is not the
actual CPU state involved here which is SCTLR.B (set for BE32
binaries, clear for BE8).
Replace bswap_code with SCTLR.B, and pass that to arm_ld*_code.
The next patches will make data fetches honor both SCTLR.B and
CPSR.E appropriately.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[PC changes:
* rebased on master (Jan 2016)
* s/TARGET_USER_ONLY/CONFIG_USER_ONLY
* Use bswap_code() for disas_set_info() instead of raw sctlr_b
]
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This matches the idiom used by get_user_data_* later in the series,
and will help when bswap_code will be replaced by SCTLR.B.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When linux-user code is calling cpsr_write(), use a restrictive
mask to ensure we are limiting the set of CPSR bits we update.
In particular, don't allow the mode bits to be changed.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1455556977-3644-5-git-send-email-peter.maydell@linaro.org
Add an argument to cpsr_write() to indicate what kind of CPSR
write is being requested, since the exact behaviour should
differ for the different cases.
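One plausible shape for the new argument (the enumerator names here are
illustrative, chosen only to match the cases described above):

    typedef enum CPSRWriteType {
        CPSRWriteByInstr,          /* MSR/CPS etc: apply architectural masking */
        CPSRWriteExceptionReturn,  /* exception return: different mode-change rules */
        CPSRWriteRaw               /* internal/raw write: no side effects */
    } CPSRWriteType;

    void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
                    CPSRWriteType write_type);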
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1455556977-3644-3-git-send-email-peter.maydell@linaro.org
Set the default to the latest CPU version to have the
largest set of available features.
It is also really needed in little-endian mode because
POWER7 is not really supported in this mode and some distros
(at least debian) generate POWER8 code for their ppc64le target.
Fixes: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=813698
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Split the bits that require it to exec/log.h.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-id: 1452174932-28657-8-git-send-email-den@openvz.org
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.
This commit was created with scripts/clean-includes.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1453832250-766-10-git-send-email-peter.maydell@linaro.org
Ensure that all log writes are protected by qemu_loglevel_mask or,
in serious cases, go to both the log and stderr.
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In some cases, the same message is printed both on stderr and in the log.
Avoid duplicate output in the default case where stderr _is_ the log,
and standardize this to stderr+log where it used to use stdio+log.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
g_new(T, n) is neater than g_malloc(sizeof(T) * n). It's also safer,
for two reasons. One, it catches multiplication overflowing size_t.
Two, it returns T * rather than void *, which lets the compiler catch
more type errors.
This commit only touches allocations with size arguments of the form
sizeof(T). Same Coccinelle semantic patch as in commit b45c03f.
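For example (FooState and n are placeholders, not code from this series):

    /* before: FooState *s = g_malloc(sizeof(FooState) * n); */
    FooState *s = g_new(FooState, n);   /* overflow-checked, returns FooState * */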
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
This should help clarify the purpose of the function that returns
the host system's CPU cycle count.
Signed-off-by: Christopher Covington <cov@codeaurora.org>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
ppc portion
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Notice raise and bpt, decoding the constants embedded in the
nop addil instruction in the x0 slot.
[rth: Generalize TILEGX_EXCP_OPCODE_ILL to TILEGX_EXCP_SIGNAL.
Drop validation of signal values.]
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
Message-Id: <1443243635-4886-1-git-send-email-gang.chen.5i5j@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The system mode binaries provide a similar alias
and it makes common options like --version and --help
work as expected.
Signed-off-by: Meador Inge <meadori@codesourcery.com>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
As suggested by Laurent, use EXIT_SUCCESS and EXIT_FAILURE from
stdlib.h instead of numeric values.
Cc: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
This patch adds better support for diagnosing option
parser errors. The previous implementation just printed
the usage text and exited when a bad option or argument
was found. This made it very difficult to determine why
the usage was being displayed and it was doubly confusing
for cases like '--help' (it wasn't clear that --help was
actually an error).
Signed-off-by: Meador Inge <meadori@codesourcery.com>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
This option is already available on the system mode
binaries. It would be better if long options were
supported (i.e. --help), but this is okay for now.
Signed-off-by: Meador Inge <meadori@codesourcery.com>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
We are setting SRR0 to the instruction before the one causing the
unaligned exception. A quick testcase:
. = 0x100
        .globl _start
_start:
        /* Cause a 0x600 */
        li      3,0x1
        stwcx.  3,0,3
1:      b       1b

. = 0x600
1:      b       1b
Built into something we can load as a BIOS image:
gcc -mbig -c test.S
ld -EB -Ttext 0x0 -o test test.o
objcopy -O binary test test.bin
Run with:
qemu-system-ppc64 -nographic -bios test.bin
Shows an incorrect SRR0 (points at the li):
SRR0 0000000000000100
With the patch we get the correct SRR0:
SRR0 0000000000000104
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Add the main working flow, system call processing, and ELF64 tilegx
binary loading, based on the Linux kernel's 64-bit tilegx
implementation.
[rth: Moved all of the implementation of atomic instructions to a later patch.]
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <BLU436-SMTP938552D42808AA60634582B9660@phx.gbl>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
* Support for jemalloc
* qemu_mutex_lock_iothread "No such process" fix
* cutils: qemu_strto* wrappers
* iohandler.c simplification
* Many other fixes and misc patches.
And some MTTCG work (with Emilio's fixes squashed):
* Signal-free TCG kick
* Removing spinlock in favor of QemuMutex
* User-mode emulation multi-threading fixes/docs
# gpg: Signature made Thu 10 Sep 2015 09:03:07 BST using RSA key ID 78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
* remotes/bonzini/tags/for-upstream: (44 commits)
cutils: work around platform differences in strto{l,ul,ll,ull}
cpu-exec: fix lock hierarchy for user-mode emulation
exec: make mmap_lock/mmap_unlock globally available
tcg: comment on which functions have to be called with mmap_lock held
tcg: add memory barriers in page_find_alloc accesses
remove unused spinlock.
replace spinlock by QemuMutex.
cpus: remove tcg_halt_cond and tcg_cpu_thread globals
cpus: protect work list with work_mutex
scripts/dump-guest-memory.py: fix after RAMBlock change
configure: Add support for jemalloc
add macro file for coccinelle
configure: factor out adding disas configure
vhost-scsi: fix wrong vhost-scsi firmware path
checkpatch: remove tests that are not relevant outside the kernel
checkpatch: adapt some tests to QEMU
CODING_STYLE: update mixed declaration rules
qmp: Add example usage of strto*l() qemu wrapper
cutils: Add qemu_strtoull() wrapper
cutils: Add qemu_strtoll() wrapper
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Warnings from the Sparse static analysis tool:
linux-user/main.c:40:12: warning:
symbol 'filename' was not declared. Should it be static?
linux-user/main.c:41:12: warning:
symbol 'argv0' was not declared. Should it be static?
linux-user/main.c:42:5: warning:
symbol 'gdbstub_port' was not declared. Should it be static?
linux-user/main.c:43:11: warning:
symbol 'envlist' was not declared. Should it be static?
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
spinlock is only used in two cases:
* cpu-exec.c: to protect TranslationBlock
* mem_helper.c: for lock helper in target-i386 (which seems broken).
It's a pthread_mutex_t in user-mode, so we can use QemuMutex directly,
with an #ifdef. The #ifdef will be removed once multithreaded TCG
needs the mutex as well.
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Message-Id: <1439220437-23957-5-git-send-email-fred.konrad@greensocs.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
[Merge Emilio G. Cota's patch to remove volatile. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
For the A64 instruction set, the semihosting call instruction
is 'HLT 0xf000'. Wire this up to call do_arm_semihosting()
if semihosting is enabled.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Christopher Covington <christopher.covington@linaro.org>
Tested-by: Christopher Covington <cov@codeaurora.org>
Message-id: 1439483745-28752-10-git-send-email-peter.maydell@linaro.org
All TCG host architectures now support the guest base, and as there is
no real performance loss, it can always be enabled. Guest base use can
still effectively be disabled at run time by setting the guest base to 0.
CONFIG_USE_GUEST_BASE is defined as (USE_GUEST_BASE && USER_ONLY); it
should be replaced by CONFIG_USER_ONLY in the parts that are not already
user-only, but as some other parts are using !CONFIG_SOFTMMU I have chosen
to use !CONFIG_SOFTMMU instead.
Reviewed-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <1440373328-9788-2-git-send-email-laurent@vivier.eu>
Signed-off-by: Richard Henderson <rth@twiddle.net>
For the MIPS N64 ABI, when QEMU reads the break/trap instruction in
order to inspect the break/trap code, it reads 8 bytes rather than 4,
which means it picks up the code field from the instruction after the
break/trap instruction. This then causes the break/trap handling
code to fail because it does not understand the code number.
The fix forces QEMU to always read 4 bytes of instruction data rather
than deciding how much to read based on the ABI.
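A sketch of the fix (variable names are illustrative): fetch the
instruction with a fixed 32-bit accessor instead of an abi_ulong-sized
one, so N64 does not swallow the following instruction:

    uint32_t trap_instr;

    /* always a 4-byte read, regardless of the target ABI */
    if (get_user_u32(trap_instr, env->active_tc.PC) != 0) {
        /* handle -TARGET_EFAULT */
    }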
Signed-off-by: Andrew Bennett <andrew.bennett@imgtec.com>
Reviewed-by: Leon Alrae <leon.alrae@imgtec.com>
Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
Remove un-needed usages of ENV_GET_CPU() by converting the APIs to use
CPUState pointers and retrieving the env_ptr as minimally needed.
Scripted conversion for target-* change:
for I in target-*/cpu.h; do
sed -i \
's/\(^int cpu_[^_]*_exec(\)[^ ][^ ]* \*s);$/\1CPUState *cpu);/' \
$I;
done
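For a hypothetical target "foo", the scripted change amounts to:

    /* before: int cpu_foo_exec(CPUFooState *s); */
    int cpu_foo_exec(CPUState *cpu);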
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
All callsites to this function navigate the cpu->env_ptr only for the
function to take the env ptr back to the original cpu ptr. Change the
function to just pass in the CPU pointer instead. Removes a core code
usage of ENV_GET_CPU() (in gdbstub.c).
Cc: Riku Voipio <riku.voipio@iki.fi>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
In qemu-linux-user, when calling gethostbyname2(),
it was hanging in .__res_nmkquery.
(gdb) bt
0 in .__res_nmkquery () from /lib64/libresolv.so.2
1 in .__libc_res_nquery () from /lib64/libresolv.so.2
2 in .__libc_res_nsearch () from /lib64/libresolv.so.2
3 in ._nss_dns_gethostbyname3_r () from /lib64/libnss_dns.so.2
4 in ._nss_dns_gethostbyname2_r () from /lib64/libnss_dns.so.2
5 in .gethostbyname2_r () from /lib64/libc.so.6
6 in .gethostbyname2 () from /lib64/libc.so.6
.__res_nmkquery() is:
...
do { RANDOM_BITS (randombits); } while ((randombits & 0xffff) == 0);
...
<.__res_nmkquery+112>: mftbl r11
<.__res_nmkquery+116>: clrlwi r10,r11,16
<.__res_nmkquery+120>: cmpwi cr7,r10,0
<.__res_nmkquery+124>: beq cr7,<.__res_nmkquery+112>
but as mftbl (Move From Time Base Lower) is not implemented,
r11 is always 0, so we have an infinite loop.
This patch fills the Time Base register with cpu_get_real_ticks().
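A rough sketch of the idea for linux-user (the exact hook names are
assumptions): back the guest Time Base with the host tick counter so
mftb/mftbu return a nonzero, monotonically increasing value.

    uint64_t cpu_ppc_load_tbl(CPUPPCState *env)
    {
        return cpu_get_real_ticks();
    }

    uint32_t cpu_ppc_load_tbu(CPUPPCState *env)
    {
        return cpu_get_real_ticks() >> 32;
    }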
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Alexander Graf <agraf@suse.de>
When a thread is spawned, cpu_copy() re-initializes the breakpoint and
watchpoint lists of the current thread instead of those of the new
thread.
The effect is that breakpoints are no longer hit.
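Conceptually the fix is just about which CPUState the list
initialisation applies to (a sketch, not the actual cpu_copy() hunk):

    /* initialise the lists of the newly created CPU; the parent's
     * lists are left untouched, so its breakpoints keep firing */
    QTAILQ_INIT(&new_cpu->breakpoints);
    QTAILQ_INIT(&new_cpu->watchpoints);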
Signed-off-by: Thierry Bultel <thierry.bultel@basystemes.fr>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Remove cpu_smm_register and cpu_smm_update. Instead, each CPU
address space gets an extra region which is an alias of
/machine/smram. This extra region is enabled or disabled
as the CPU enters/exits SMM.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The function is a nop for user mode, so just remove the calls.
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <1426496617-10702-3-git-send-email-famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
New threads always point at the same env which is incorrect and usually
leads to a crash.
Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
start_exclusive() and end_exclusive() need to come in pairs, except for
the start_exclusive() in stop_all_tasks(), which is only used by
force_sig(); force_sig() aborts, so that start_exclusive() does not need
to be paired at present.
queue_signal() may call force_sig(), or it may return after killing the
pid (or queuing the signal). If it returns, stop_all_tasks() is not
called, so a subsequent end_exclusive() would be unbalanced.
Therefore remove the end_exclusive() call after queue_signal() in
arm_kernel_cmpxchg64_helper() for ARM. The related commit is "97cc756
linux-user: Implement new ARM 64 bit cmpxchg kernel helper".
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
When support was added for TrustZone to ARM CPU emulation, we failed
to correctly update the support for the linux-user implementation of
the get/set_tls syscalls. This meant that accesses to the TPIDRURO
register via the syscalls were always using the non-secure copy of
the register even if native MRC/MCR accesses were using the secure
register. This inconsistency caused most binaries to segfault on startup
if the CPU type was explicitly set to one of the TZ-enabled ones like
cortex-a15. (The default "any" CPU doesn't have TZ enabled and so is
not affected.)
Use access_secure_reg() to determine whether we should be using
the secure or the nonsecure copy of TPIDRURO when emulating these
syscalls.
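A sketch of the resulting selection (the cp15 field names are
assumptions; access_secure_reg() is the check named above):

    static uint32_t get_guest_tls(CPUARMState *env)
    {
        /* pick the banked copy that MRC/MCR would see in the current world */
        if (access_secure_reg(env)) {
            return env->cp15.tpidruro_s;
        } else {
            return env->cp15.tpidruro_ns;
        }
    }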
Signed-off-by: Mikhail Ilyin <m.ilin@samsung.com>
Message-id: 1426505198-2411-1-git-send-email-m.ilin@samsung.com
[PMM: rewrote commit message to more clearly explain the issue
and its consequences.]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This reverts commit b8a173b25c, reversing
changes made to 5de090464f.
(I applied this pull request when I should not have done so, and
am now immediately reverting it.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This was the only caller of cpu_init() that was not checking for NULL
yet.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
In main.c, all SIG* should be TARGET_SIG*, since the relevant functions
(queue_signal() and gdb_handlesig()) expect TARGET_SIG*.
The corresponding vi command is "1,$ s/\<SIG/TARGET_SIG/g".
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
The function end_exclusive() isn't used on all targets; mark it as
such to avoid a clang warning.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
The start_exclusive() infrastructure is used on all target
architectures, even if only to do the "stop all CPUs before
dumping core" in force_sig(), so be consistent and call
cpu_exec_start/end in the main loop of every target.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
In the m68k cpu_loop(), use get_user_u16 to read the immediate for
the simcall rather than lduw, to bring it into line with how other
archs do it and to remove another user of the ldl family of functions.
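Sketchwise (the immediate's offset is an assumption, not a quote of the
patch):

    uint16_t nr;

    /* read the 16-bit simcall number that follows the trap insn */
    get_user_u16(nr, env->pc + 2);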
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1421334118-3287-8-git-send-email-peter.maydell@linaro.org
The TARGET_HAS_ICE #define is intended to indicate whether a target-*
guest CPU implementation supports the breakpoint handling. However,
all our guest CPUs have that support (the only two which do not
define TARGET_HAS_ICE are unicore32 and openrisc, and in both those
cases the bp support is present and the lack of the #define is just
a bug). So remove the #define entirely: all new guest CPU support
should include breakpoint handling as part of the basic implementation.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1420484960-32365-1-git-send-email-peter.maydell@linaro.org
Replace the 20Kc original MIPS64 ISA processor used for 64-bit user
emulation with the 5KEf processor that implements the MIPS64r2 ISA,
complementing the choice of the 24Kf processor for 32-bit emulation.
Signed-off-by: Maciej W. Rozycki <macro@codesourcery.com>
Reviewed-by: Leon Alrae <leon.alrae@imgtec.com>
Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
When EL3 is running in AArch32 (or ARMv7 with Security Extensions)
FCSEIDR, CONTEXTIDR, TPIDRURW, TPIDRURO and TPIDRPRW have a secure
and a non-secure instance.
Signed-off-by: Fabian Aggeler <aggelerf@ethz.ch>
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1416242878-876-25-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
On AArch64 the si_addr field of siginfo_t is truncated to 32 bits
because the fault address passes through a uint32_t variable.
Follow Peter's suggestion and drop the uint32_t variable,
since it's only used once in the AArch64 loop.
Reported-by: Amanieu d'Antras <amanieu@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
This patch introduces the -seed command line option and the
QEMU_RAND_SEED environment variable for setting the random seed, which
is used for the AT_RANDOM ELF aux entry.
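For example, either of the following (illustrative invocations) seeds the
16 bytes that AT_RANDOM points at:
qemu-arm -seed 12345 ./guest-binary
QEMU_RAND_SEED=12345 qemu-arm ./guest-binary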
Signed-off-by: Magnus Reftel <reftel@spotify.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
The current implementation of watchpoints requires that they
have a power of 2 length which is not greater than TARGET_PAGE_SIZE
and that their address is a multiple of their length. Watchpoints
on ARM don't fit these restrictions, so change the implementation
so they can be relaxed.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
The syscall instruction is currently buggy in x86_64 user mode: EIP is
only updated after do_syscall(), which is too late for clone(). clone()
creates the child thread at env->EIP (still the address of the syscall
insn), so the child re-enters do_syscall(), which is not expected and
can be fatal.
User-mode emulation of the syscall insn does not use the MSRs, so its
behaviour should match INT 0x80. INT 0x80 updates EIP in
do_interrupt(); do the same for syscall for consistency.
Signed-off-by: Jincheng Miao <jmiao@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
The default, 970fx, doesn't support MSR_LE. So even though we set LE in
ppc_cpu_reset, it gets cleared again in hreg_store_msr. Error out if a
user-selected cpu model doesn't support LE.
Signed-off-by: Richard Henderson <rth@twiddle.net>
[agraf: switch to POWER7 as default for BE and LE]
Signed-off-by: Alexander Graf <agraf@suse.de>
OABI arm used a software interrupt (0xef9f0001) for breakpoints.
Since 2005 gdb has used the break instruction (0xe7f001f0) for EABI.
Apparently Steel Bank Common Lisp still uses the swi instruction.
This is the kernel implementation:
http://lxr.free-electrons.com/source/arch/arm/kernel/traps.c#L598
Signed-off-by: Hunter Laux <hunterlaux@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
As a "utility", it only supported ppc, and in a way that other
tcg backends provided directly in tcg-target.h. Removing this
disparity is easier now that the two ppc backends are merged.
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Instead of getting backup auxv data from the env pointer given to main,
read it from /proc/self/auxv. We can do this at any time, so we're not
tied to any ordering wrt a call to qemu_init_auxval from main.
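The core of the idea is just a plain read of the file (a minimal,
self-contained sketch using standard POSIX calls, not the actual qemu
helper):

    #include <fcntl.h>
    #include <unistd.h>

    static ssize_t read_self_auxv(void *buf, size_t len)
    {
        int fd = open("/proc/self/auxv", O_RDONLY);
        ssize_t n = -1;

        if (fd >= 0) {
            n = read(fd, buf, len);
            close(fd);
        }
        return n;
    }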
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The size and register information are encoded into the reserve_info field
of CPU state in the store conditional translation code. Specifically, the
size is shifted left by 5 bits (see target-ppc/translate.c gen_conditional_store).
The user-mode store conditional code erroneously extracts the size by ANDing
with a 4 bit mask; this breaks if size >= 16.
Eliminate the mask to make the extraction of size mirror its encoding.
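With the layout described above (register index assumed in the low five
bits), the fixed extraction is simply:

    int reg  = env->reserve_info & 0x1f;   /* low 5 bits: target register */
    int size = env->reserve_info >> 5;     /* no 4-bit mask, so size >= 16 works */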
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
This allows running PPC64 little-endian in user mode if the target is
configured that way. In PPC64 LE user mode we set MSR.LE during initialization.
Signed-off-by: Doug Kwan <dougkwan@google.com>
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
With the next patch, these need to be correct or VM86 tasks
have the wrong CPL. The flags are basically what the Intel VMX
documentation say is mandatory for entry into a VM86 guest.
For consistency, SMM ought to have the same flags except with
CPL=0.
Tested-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Instead of manually calling cpu_x86_set_cpl() when the CPL changes,
check for CPL changes on calls to cpu_x86_load_seg_cache(R_CS). Every
location that called cpu_x86_set_cpl() also called
cpu_x86_load_seg_cache(R_CS), so cpu_x86_set_cpl() is no longer
required.
This fixes the SMM handler code as it was not setting/restoring the
CPL level manually.
Signed-off-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
--enable-uname-release was a rather heavyweight hammer, as it allows
providing values less than UNAME_MINIMUM_RELEASE. Also, it affects
all built linux-user targets, which in most cases is not what the user
wants.
Now that we have UNAME_MINIMUM_RELEASE for all linux-user platforms,
we can drop --enable-uname-release and the related CONFIG_UNAME_RELEASE
define.
Users can still override the variable with QEMU_UNAME=2.6.32 or -r
command line option. If distributors need to update a minimum version
for a specific target, it can be done by updating UNAME_MINIMUM_RELEASE.
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
For AArch32 exceptions, the only information provided about
the cause of an exception is the individual exception type (data
abort, undef, etc), which we store in cs->exception_index. For
AArch64, the CPU provides much more detail about the cause of
the exception, which can be found in the syndrome register.
Create a set of fields in CPUARMState which must be filled in
whenever an exception is raised, so that exception entry can
correctly fill in the syndrome register for the guest.
This includes the information which in AArch32 appears in
the DFAR and IFAR (fault address registers) and the DFSR
and IFSR (fault status registers) for data aborts and
prefetch aborts, since if we end up taking the MMU fault
to AArch64 rather than AArch32 this will need to end up
in different system registers.
This patch does a refactoring which moves the setting of the
AArch32 DFAR/DFSR/IFAR/IFSR from the point where the exception
is raised to the point where it is taken. (This is no change
for cores with an MMU, retains the existing clearly incorrect
behaviour for ARM946 of trashing the MP access permissions
registers which share the c5_data and c5_insn state fields,
and has no effect for v7M because we don't implement its
MPU fault status or address registers.)
As a side effect of the cleanup we fix a bug in the AArch64
linux-user mode code where we were passing a 64 bit fault
address through the 32 bit c6_data/c6_insn fields: it now
goes via the always-64-bit exception.vaddress.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Most targets were using offsetof(CPUFooState, breakpoints) to determine
how much of CPUFooState to clear on reset. Use the next field after
CPU_COMMON instead, if any, or sizeof(CPUFooState) otherwise.
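Illustratively, for a hypothetical CPUFooState whose first field after
CPU_COMMON is "features":

    /* zero everything up to the first field after CPU_COMMON;
     * fields from "features" onwards survive reset */
    memset(env, 0, offsetof(CPUFooState, features));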
Signed-off-by: Andreas Färber <afaerber@suse.de>
* remotes/riku/linux-user-for-upstream:
linux-user: set minimum kernel version to 2.6.32
linux-user: correct handling of break exception for MIPS
linux-user: translate signal number on return from sigtimedwait
linux-user: Implement sendmmsg syscall
linux-user: Fix getresuid, getresgid if !USE_UID16
linux-user: Don't use UID16 on AArch64
linux-user: AArch64: Implement SA_RESTORER for signal handlers
linux-user/signal.c: Fix AArch64 big-endian FP register restore
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch adds the Store Quadword Conditional (stqcx.) instruction,
which was introduced in Power ISA 2.07.
Signed-off-by: Tom Musta <tommusta@gmail.com>
[agraf: fix compile error when !TARGET_PPC64]
Signed-off-by: Alexander Graf <agraf@suse.de>
The exception raised by the break instruction was not correctly
propagated as SIGTRAP. This resolves crashes in programs that use the
break instruction on MIPS.
Signed-off-by: Petar Jovanovic <petar.jovanovic@imgtec.com>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Signed-off-by: Janne Grunau <j@jannau.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This implements exclusive loads/stores for AArch64 along the lines of the
arm32 and ppc implementations. The exclusive load remembers the address
and loaded value. The exclusive store throws an exception which uses
those values to check for equality in a proper exclusive region.
This is not actually the architecture mandated semantics (for either
AArch32 or AArch64) but it is close enough for typical guest code
sequences to work correctly, and saves us from having to monitor all
guest stores. It's fairly easy to come up with test cases where we
don't behave like hardware - we don't for example model cache line
behaviour. However in the common patterns this works, and the existing
32 bit ARM exclusive access implementation has the same limitations.
AArch64 also implements new acquire/release loads/stores (which may be
either exclusive or non-exclusive). These impose extra ordering
constraints on memory operations (i.e. they act as if they have an implicit
barrier built into them). As TCG is single-threaded all our barriers
are no-ops, so these just behave like normal loads and stores.
Signed-off-by: Michael Matz <matz@suse.de>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
In preparation for adding support for A64 load/store exclusive instructions,
widen the fields in the CPU state struct that deal with address and data values
for exclusives from 32 to 64 bits. Although in practice AArch64 and AArch32
exclusive accesses will be generally separate there are some odd theoretical
corner cases (eg you should be able to do the exclusive load in AArch32, take
an exception to AArch64 and successfully do the store exclusive there), and it's
also easier to reason about.
The changes in semantics for the variables are:
exclusive_addr -> extended to 64 bits; -1ULL for "monitor lost",
otherwise always < 2^32 for AArch32
exclusive_val -> extended to 64 bits. 64 bit exclusives in AArch32 now
use the high half of exclusive_val instead of a separate exclusive_high
exclusive_high -> is no longer used in AArch32; extended to 64 bits as
it will be needed for AArch64's pair-of-64-bit-values exclusives.
exclusive_test -> extended to 64 bits, as it is an address. Since this is
a linux-user-only field, in arm-linux-user it will always have the top
32 bits zero.
exclusive_info -> stays 32 bits, as it is neither data nor address, but
simply holds register indexes etc. AArch64 will be able to fit all its
information into 32 bits as well.
Note that the refactoring of gen_store_exclusive() coincidentally fixes
a minor bug where ldrexd would incorrectly update the first CPU register
even if the load for the second register faulted.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
The common pattern for system registers in a 64-bit capable ARM
CPU is that when in AArch32 the cp15 register is a view of the
bottom 32 bits of the 64-bit AArch64 system register; writes in
AArch32 leave the top half unchanged. The most natural way to
model this is to have the state field in the CPU struct be a
64 bit value, and simply have the AArch32 TCG code operate on
a pointer to its lower half.
For aarch64-linux-user the only registers we need to share like
this are the thread-local-storage ones. Widen their fields to
64 bits and provide the 64 bit reginfo struct to make them
visible in AArch64 state. Note that minor cleanup of the AArch64
system register encoding space means we can share the TPIDR_EL1
reginfo but need split encodings for TPIDR_EL0 and TPIDRRO_EL0.
Since we're touching almost every line in QEMU that uses the
c13_tls* fields in this patch anyway, we take the opportunity
to rename them in line with the standard ARM architectural names
for these registers.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
With this we no longer pass down envp, and thus all systems can have
the same void prototype. So also eliminate a useless thunk.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Since this is only read in cpu_copy() and linux-user has a global
cpu_model, drop the field from generic code.
Signed-off-by: Andreas Färber <afaerber@suse.de>
It is only used there and is deemed very fragile, if not incorrect, in its
current memcpy() form. Moving it into linux-user will allow moving
parts into target_cpu.h headers and copying only what the ABI mandates.
Signed-off-by: Andreas Färber <afaerber@suse.de>
microMIPS instructions that cause breakpoint exceptions come in
16-bit and 32-bit variants. When handling exceptions caused by
such instructions, the instruction type needs to be taken into
account when extracting the break code.
The code has also been restructured for better clarity.
Signed-off-by: Kwok Cheung Yeung <kcy@codesourcery.com>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
The binfmt_misc module can calculate the credentials and security
token according to the binary instead of the interpreter if the
'C' flag is enabled.
To be able to execute non-readable binaries, this flag implies the 'O'
flag. When the 'O' flag is enabled, binfmt_misc opens the file for
reading and passes the file descriptor to the interpreter.
References:
linux/Documentation/binfmt_misc.txt ['O' and 'C' description]
linux/fs/binfmt_misc.c linux/fs/binfmt_elf.c [ AT_EXECFD usage ]
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
The name field of MIPS_SYS isn't actually used; it's just documentation.
But adjust the umount entries to match mips/syscall_nr.h anyway.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
This patch adds support for AArch64 in all the small corners of
linux-user (primarily in image loading and startup code).
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: John Rigby <john.rigby@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1378235544-22290-22-git-send-email-peter.maydell@linaro.org
Message-id: 1368505980-17151-11-git-send-email-john.rigby@linaro.org
[PMM:
* removed some unnecessary #defines from syscall.h
* catch attempts to use a 32 bit only cpu with aarch64-linux-user
* termios stuff moved into its own patch
* we specify our minimum uname version here now
]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For newer target architectures, glibc can be picky about the kernel
version: for example, it will not run on an aarch64 system unless
the kernel reports itself as at least 3.8.0. Accommodate this by
enhancing the existing support for faking the kernel version so
that each target can optionally specify a minimum version: if
the user doesn't force a specific fake version then we will override
with the minimum required version only if the real host kernel
version is insufficient.
Use this facility to let aarch64 report a minimum of 3.8.0.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1378235544-22290-21-git-send-email-peter.maydell@linaro.org
Add the main linux-user cpu loop for AArch64. Since AArch64
has a different system call interface, doesn't need to worry
about FPA emulation and may in the future keep the prefetch/data
abort information in different system registers, it's simplest
just to use a completely separate loop from the 32 bit ARM
target, rather than peppering it with ifdefs.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1378235544-22290-14-git-send-email-peter.maydell@linaro.org