The instruction traps if the CPU is not in
Supervisor state, but the helper is empty because
there is no easy way to reset all the peripherals
without resetting the CPU itself.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180104012913.30763-12-laurent@vivier.eu>
Add the cache line invalidate and cache line push instructions
as no-ops, since we don't emulate a cache.
These instructions are 68040 only.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180104012913.30763-11-laurent@vivier.eu>
move16 moves the source line to the destination line. Lines are aligned
to 16-byte boundaries and are 16 bytes long.
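As an illustration only (not the actual helper, which goes through the
guest MMU), the data movement amounts to copying one aligned 16-byte
line; the names below are hypothetical:

    /* Hypothetical sketch: both addresses are masked down to a 16-byte
     * boundary, then the whole line is copied.  Needs <stdint.h> and
     * <string.h>. */
    static void move16_line(uint8_t *dst, const uint8_t *src)
    {
        dst = (uint8_t *)((uintptr_t)dst & ~(uintptr_t)15);
        src = (const uint8_t *)((uintptr_t)src & ~(uintptr_t)15);
        memcpy(dst, src, 16);
    }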
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180104012913.30763-9-laurent@vivier.eu>
chk and chk2 compare a value to boundaries, and
trigger a CHK exception if the value is out of bounds.
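A minimal sketch of the bounds test (hypothetical helper names; the real
helpers live in the m68k op helpers): chk tests against 0 and an upper
bound, chk2 against an explicit lower and upper bound:

    /* Hypothetical sketch of the tests that raise the CHK exception. */
    static bool chk_out_of_bounds(int32_t val, int32_t upper)
    {
        return val < 0 || val > upper;
    }

    static bool chk2_out_of_bounds(int32_t val, int32_t lower, int32_t upper)
    {
        return val < lower || val > upper;
    }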
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180104012913.30763-8-laurent@vivier.eu>
Display interrupt/exception information in the
QEMU log (-d int).
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180104012913.30763-6-laurent@vivier.eu>
As gen_helper_get_ccr() is able to compute the CCR from cc_op and
the flags, we don't need to flush the flags before calling it.
flush_flags() and get_ccr() use COMPUTE_CCR() to compute the
flags: get_ccr() computes the CCR value,
whereas flush_flags() updates the live cc_op and flags.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180104012913.30763-3-laurent@vivier.eu>
Also remove update_cc_op() from gen_exception(), because
gen_jmp_im() already calls it.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180104012913.30763-2-laurent@vivier.eu>
With no fixed array allocation, we can't overflow a buffer.
This will be important as optimizations related to host vectors
may expand the number of ops used.
Use QTAILQ to link the ops together.
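A simplified sketch of the resulting layout (type and field names here
are illustrative, not the real ones in tcg/tcg.h):

    /* Each op carries a QTAILQ_ENTRY instead of living in a fixed array. */
    struct SketchOp {
        int opc;
        QTAILQ_ENTRY(SketchOp) link;
    };
    static QTAILQ_HEAD(, SketchOp) sketch_ops =
        QTAILQ_HEAD_INITIALIZER(sketch_ops);

    /* Appending an op can no longer overflow a fixed buffer:   */
    /*     QTAILQ_INSERT_TAIL(&sketch_ops, op, link);           */
    /* and emission walks the list in order:                    */
    /*     QTAILQ_FOREACH(op, &sketch_ops, link) { ... }        */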
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
These are now trivial sets and tests against NULL. Unwrap.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We should not exit unless moxie_cpu_handle_mmu_fault has failed.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
cpu_restore_state officially supports being passed an address it can't
resolve the state for. As a result the checks in the helpers are
superfluous and can be removed. This makes the code consistent with
other users of cpu_restore_state.
Of course this does nothing to address what to do if cpu_restore_state
can't resolve the state, but so far it seems this is handled elsewhere.
The change was made with the included Coccinelle script.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
[rth: Fixed up comment indentation. Added second hunk to script to
combine cpu_restore_state and cpu_loop_exit.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Some VM_PANIC calls are actually caused by internal errors rather
than unsupported situations. Convert them to plain aborts.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch injects a GP fault when the guest vmexits by executing a
vmcall instruction.
Signed-off-by: Sergio Andres Gomez Del Real <Sergio.G.DelReal@gmail.com>
Message-Id: <20170913090522.4022-15-Sergio.G.DelReal@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch refactors the event-injection code for hvf by using the
appropriate fields already provided by CPUX86State. At vmexit, it fills
these fields so that hvf_inject_interrupts can just retrieve them without
calling into hvf.
Signed-off-by: Sergio Andres Gomez Del Real <Sergio.G.DelReal@gmail.com>
Message-Id: <20170913090522.4022-14-Sergio.G.DelReal@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch implements tracking of dirty VGA pages, using hvf's
interface to protect guest memory. It uses the MemoryListener callback
mechanism through .log_start/.log_stop/.log_sync.
Signed-off-by: Sergio Andres Gomez Del Real <Sergio.G.DelReal@gmail.com>
Message-Id: <20170913090522.4022-13-Sergio.G.DelReal@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch generalizes some code in cpu.c for hypervisor-based
accelerators, calling the new hvf_get_supported_cpuid where
KVM used kvm_get_supported_cpuid.
Signed-off-by: Sergio Andres Gomez Del Real <Sergio.G.DelReal@gmail.com>
Message-Id: <20170913090522.4022-12-Sergio.G.DelReal@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch implements hvf_get_supported_cpuid, which returns the set of
features supported by both the host processor and the hypervisor.
Signed-off-by: Sergio Andres Gomez Del Real <Sergio.G.DelReal@gmail.com>
Message-Id: <20170913090522.4022-11-Sergio.G.DelReal@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch makes use of the helper functions for handling xsave in
xsave_helper.c, which are shared with kvm.
Signed-off-by: Sergio Andres Gomez Del Real <Sergio.G.DelReal@gmail.com>
Message-Id: <20170913090522.4022-10-Sergio.G.DelReal@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch replaces the license header for those files that were either
GPL v2-or-v3 or GPL v2-only; the replacement license is GPL v2-or-later.
The code for task switching/handling, which is derived from KVM and
hence is GPL v2-only, is isolated in the new files (with this license)
x86_task.c/.h, and the corresponding compilation rule is added to
target/i386/hvf-utils/Makefile.objs.
Signed-off-by: Sergio Andres Gomez Del Real <Sergio.G.DelReal@gmail.com>
Message-Id: <20170913090522.4022-4-Sergio.G.DelReal@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This file begins tracking the files that will be the code base for HVF
support in QEMU. This code base is part of Google's QEMU version of
their Android emulator, and can be found at
https://android.googlesource.com/platform/external/qemu/+/emu-master-dev
This code is based on Veertu Inc's vdhh (Veertu Desktop Hosted
Hypervisor), found at https://github.com/veertuinc/vdhh. Everything is
appropriately licensed under GPL v2-or-later, except for the code inside
x86_task.c and x86_task.h, which, deriving from KVM (the Linux kernel),
is licensed GPL v2-only.
This code base already implements a great deal of functionality,
although Google's version dropped Veertu's support for the APIC
page and the Hyper-V-related bits. According to the Android Emulator Release
Notes, Revision 26.1.3 (August 2017), "Hypervisor.framework is now
enabled by default on macOS for 32-bit x86 images to improve performance
and macOS compatibility", although we had better use it with caution since,
as the same revision warns, "If you experience issues with it specifically,
please file a bug report...". The code hasn't seen much change in the
last 5 months, so I think we can develop it further, with
occasional visits to Google's repository to see if there has been any
update.
On top of Google's code, the following changes were made:
- add code to the configure script to support the --enable-hvf argument.
If the OS is Darwin, it checks for the presence of HVF on the system. The
patch also adds HVF-related strings to the file qemu-options.hx.
QEMU will only support the modern syntax style '-M accel=hvf' to enable
hvf; the legacy '-enable-hvf' will not be supported.
- fix styling issues
- add glue code to cpus.c
- move the HVFX86EmulatorState field to CPUX86State, changing the
emulation functions to take a parameter of type 'CPUX86State *'
instead of 'CPUState *' so we don't have to fetch the 'env'.
Signed-off-by: Sergio Andres Gomez Del Real <Sergio.G.DelReal@gmail.com>
Message-Id: <20170913090522.4022-2-Sergio.G.DelReal@gmail.com>
Message-Id: <20170913090522.4022-3-Sergio.G.DelReal@gmail.com>
Message-Id: <20170913090522.4022-5-Sergio.G.DelReal@gmail.com>
Message-Id: <20170913090522.4022-6-Sergio.G.DelReal@gmail.com>
Message-Id: <20170905035457.3753-7-Sergio.G.DelReal@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The first call of set_cc_op() in a new translation sequence
is done with old_op set to CC_OP_DYNAMIC (-1).
This results in an out-of-bounds access to the cc_op_live[] array.
Fix that by adding an entry in cc_op_live[] for CC_OP_DYNAMIC.
Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20171221160558.14151-1-laurent@vivier.eu>
It was introduced by e6e5906b6e ("ColdFire target."),
but the content is never used.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <huth@tuxfamily.org>
Message-Id: <20171220130815.20708-1-laurent@vivier.eu>
Normally we create an address space for that CPU and pass that address
space into the function. Let's just do it inside the function to unify
address space creation. It'll simplify my next patch, which renames those
address spaces.
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20171123092333.16085-3-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In commit e3af7c788b we
replaced direct calls to cpu_ld*_code() with calls
to the x86_ld*_code() wrappers, which incorporate an
advance of s->pc. Unfortunately we didn't notice that
in one place the old code was deliberately not incrementing
s->pc:
@@ -4501,7 +4528,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
static const int pp_prefix[4] = {
0, PREFIX_DATA, PREFIX_REPZ, PREFIX_REPNZ
};
- int vex3, vex2 = cpu_ldub_code(env, s->pc);
+ int vex3, vex2 = x86_ldub_code(env, s);
if (!CODE64(s) && (vex2 & 0xc0) != 0xc0) {
/* 4.1.4.6: In 32-bit mode, bits [7:6] must be 11b,
This meant we were mishandling this set of instructions.
Remove the manual advance of s->pc for the "is VEX" case
(which is now done by x86_ldub_code()) and instead rewind
PC in the case where we decide that this isn't really VEX.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-stable@nongnu.org
Reported-by: Alexandro Sanchez Bach <alexandro@phi.nz>
Message-Id: <1513163959-17545-1-git-send-email-peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
These gcc warnings are fixed:
target/i386/translate.c:4461:12: warning:
variable 'prefixes' might be clobbered by 'longjmp' or 'vfork' [-Wclobbered]
target/i386/translate.c:4466:9: warning:
variable 'rex_w' might be clobbered by 'longjmp' or 'vfork' [-Wclobbered]
target/i386/translate.c:4466:16: warning:
variable 'rex_r' might be clobbered by 'longjmp' or 'vfork' [-Wclobbered]
Tested with x86_64-w64-mingw32-gcc from Debian stretch.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Message-Id: <20171113064845.29142-1-sw@weilnetz.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The value of HV_X64_MSR_SVERSION is initialized once at vcpu init, and
is reset to zero on vcpu reset, which is wrong.
It is supposed to be a constant, so drop the field from X86CPU, set the
msr with the constant value, and don't bother getting it.
Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
Message-Id: <20171122181418.14180-4-rkagan@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Initially the SINTx MSRs should be in the "masked" state. To ensure that
happens on *every* reset, move the setting of their values to
kvm_arch_vcpu_reset.
Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
Message-Id: <20171122181418.14180-3-rkagan@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Hyper-V has a notion of partition-wide MSRs. Those MSRs are read and
written as usual on each VCPU; however, the hypervisor maintains a single
global value for all VCPUs. Thus writing such an MSR from any single
VCPU affects the global value that is read by all other VCPUs.
This leads to an issue during VCPU hotplug: the zero-initialized values
of those MSRs get synced into KVM and override the global values that have
already been set by the guest.
This change makes the partition-wide MSRs only be synchronized on the
first vcpu.
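A minimal sketch of the resulting guard in the MSR put path (the helper
name is hypothetical; only the "first vcpu" idea is taken from this
patch):

    /* Hypothetical sketch: hotplugged VCPUs skip the partition-wide MSRs,
     * so their zero-initialized copies never overwrite guest-set values. */
    if (cpu == first_cpu) {
        put_partition_wide_hyperv_msrs(cpu);    /* hypothetical helper */
    }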
Signed-off-by: Evgeny Yakovlev <eyakovlev@virtuozzo.com>
Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
Message-Id: <20171122181418.14180-2-rkagan@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The Intel IceLake CPU adds new CPU features: AVX512_VBMI2/GFNI/
VAES/VPCLMULQDQ/AVX512_VNNI/AVX512_BITALG. Those new CPU features
need to be exposed to the guest VM.
The bit definition:
CPUID.(EAX=7,ECX=0):ECX[bit 06] AVX512_VBMI2
CPUID.(EAX=7,ECX=0):ECX[bit 08] GFNI
CPUID.(EAX=7,ECX=0):ECX[bit 09] VAES
CPUID.(EAX=7,ECX=0):ECX[bit 10] VPCLMULQDQ
CPUID.(EAX=7,ECX=0):ECX[bit 11] AVX512_VNNI
CPUID.(EAX=7,ECX=0):ECX[bit 12] AVX512_BITALG
The release document is available at the link below:
https://software.intel.com/sites/default/files/managed/c5/15/\
architecture-instruction-set-extensions-programming-reference.pdf
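For reference, a small host-side probe for these bits, independent of
the QEMU code (a sketch using GCC's <cpuid.h>):

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* CPUID.(EAX=7,ECX=0):ECX carries the new feature bits. */
        __cpuid_count(7, 0, eax, ebx, ecx, edx);
        printf("AVX512_VBMI2:  %u\n", (ecx >> 6) & 1);
        printf("GFNI:          %u\n", (ecx >> 8) & 1);
        printf("VAES:          %u\n", (ecx >> 9) & 1);
        printf("VPCLMULQDQ:    %u\n", (ecx >> 10) & 1);
        printf("AVX512_VNNI:   %u\n", (ecx >> 11) & 1);
        printf("AVX512_BITALG: %u\n", (ecx >> 12) & 1);
        return 0;
    }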
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <1511335676-20797-1-git-send-email-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
FPU2000 implements basic single-precision floating point operations and
can be replaced with a different implementation, like DFPU or HiFi. Move
FPU2000 opcode translators into separate functions and list them in a
separate array.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Move implementations of core opcodes into separate translation
functions. Introduce data structures for mapping opcode name to
translator function. Make an array of core opcode/translator structures.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
The canonical way of dealing with Xtensa instruction decoding and
encoding is through libisa. Libisa is a configuration-independent
library with a stable interface plus a generated configuration-specific
xtensa-modules.c file with implementations of the decoding and encoding
functions. Libisa is MIT-licensed, and the originally distributed
xtensa-modules.c files are also MIT-licensed and are available as
part of the xtensa configuration overlay.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Currently the 'entry' opcode helper accepts the frame size divided by 8, as
it is encoded in the opcode. Make it more natural and accept the actual
frame size instead.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
If we've already raised an exception (and set NORETURN),
do not emit unreachable code to raise a debug exception.
Note that gen_goto_tb takes single-stepping into account.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170907185057.23421-4-richard.henderson@linaro.org>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
As for other targets, cmpxchg isn't quite right for ll/sc,
suffering from an ABA race, but is sufficient to implement
portable atomic operations.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170907185057.23421-2-richard.henderson@linaro.org>
[aurel32: fix whitespace]
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
This fixes bug #1735384, seen while running java under qemu-sh4. When debug
was enabled, it showed a problem with TCG temps. Once fixed, I was able
to run "java -version" normally.
Cc: qemu-stable@nongnu.org
Reported-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20171206093050.25308-1-alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
applied using ./scripts/clean-includes
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
This file is included in "target/i386/hax-i386.h":
#ifdef CONFIG_WIN32
#include "target/i386/hax-windows.h"
#endif
which guarantees that sysemu/os-win32.h has previously been included (CONFIG_WIN32).
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
applied using ./scripts/clean-includes
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
applied using ./scripts/clean-includes
not needed since 7ebaf79556
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
exec: housekeeping (funny since 02d0e09503)
applied using ./scripts/clean-includes
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Thanks to Laszlo Ersek for spotting the double semicolon in target/i386/kvm.c.
I have trivially grepped the tree for ';;' in C files.
Suggested-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Ladi Prosek <lprosek@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Merge remote-tracking branch 'remotes/cohuck/tags/s390x-20171215-v2' into staging
s390x changes for 2.12:
- Lots of tcg improvements: ccw hotplug is now working and we can run
a Linux kernel built for z12 under tcg
- zPCI improvements to get virtio-pci working
- get rid of the cssid restrictions for virtual and non-virtual channel
devices
- we now support 8TB+ systems
- 2.12 compat machine
- fixes and cleanups
# gpg: Signature made Fri 15 Dec 2017 10:57:01 GMT
# gpg: using RSA key 0xDECF6B93C6F02FAF
# gpg: Good signature from "Cornelia Huck <conny@cornelia-huck.de>"
# gpg: aka "Cornelia Huck <huckc@linux.vnet.ibm.com>"
# gpg: aka "Cornelia Huck <cornelia.huck@de.ibm.com>"
# gpg: aka "Cornelia Huck <cohuck@kernel.org>"
# gpg: aka "Cornelia Huck <cohuck@redhat.com>"
# Primary key fingerprint: C3D0 D66D C362 4FF6 A8C0 18CE DECF 6B93 C6F0 2FAF
* remotes/cohuck/tags/s390x-20171215-v2: (46 commits)
s390-ccw-virtio: allow for systems larger that 7.999TB
s390x: change the QEMU cpu model to a stripped down z12
s390x/tcg: we already implement the Set-Program-Parameter facility
s390x/tcg: implement extract-CPU-time facility
s390x/tcg: Implement SIGNAL ADAPTER instruction
s390x/tcg: Implement STORE CHANNEL PATH STATUS
s390x/tcg: wire up SET CHANNEL MONITOR
s390x/tcg: wire up SET ADDRESS LIMIT
s390x/tcg: implement Interlocked-Access Facility 2
s390x/tcg: ASI/ASGI/ALSI/ALSGI are atomic with Interlocked-acccess facility 1
s390x/tcg: wire up STORE CHANNEL REPORT WORD
s390x/tcg: indicate value of TODPR in STCKE
s390x/tcg: implement SET CLOCK PROGRAMMABLE FIELD
s390x/tcg: fix and cleanup mcck injection
s390x/kvm: factor out build_channel_report_mcic() into cpu.h
s390x/css: attach css bridge
s390x: deprecate s390-squash-mcss machine prop
s390x/css: unrestrict cssids
s390x/pci: search for subregion inside the BARs
s390x/pci: move the memory region write from pcistg
...
# Conflicts:
# include/hw/compat.h
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
and use them in a couple of obvious places. Other macros will be used
in the model of the XIVE interrupt controller.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
When a CPU is stopped with the 'stop-self' RTAS call, its state
'halted' is switched to 1 and, in this case, the MSR is not taken into
account anymore in the cpu_has_work() routine. Only the pending
hardware interrupts are checked with their LPCR:PECE* enablement bit.
If the DECR timer fires after 'stop-self' is called and before the CPU
'stop' state is reached, the nearly-dead CPU will have some work to do
and the guest will crash. This case happens very frequently with the
not yet upstream P9 XIVE exploitation mode. In XICS mode, the DECR is
occasionally fired but after 'stop' state, so no work is to be done
and the guest survives.
I suspect there is a race between the QEMU mainloop triggering the
timers and the TCG CPU thread but I could not quite identify the root
cause. To be safe, let's disable in the LPCR all the exceptions which
can cause an exit while the CPU is in power-saving mode and reenable
them when the CPU is started.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
and use the value to define precisely the default value of the LPCR in
the helper routine cpu_ppc_set_papr()
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Daniel Henrique Barboza <danielhb@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We are good enough to boot upstream Linux kernels / Fedora 26/27. That
should be sufficient for now.
As the QEMU CPU model is migration safe, let's add compatibility code.
Generate the feature list to reduce the chance of messing things up in the
future.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171208165529.14124-1-david@redhat.com>
[CH: squashed 's390x/cpumodel: make qemu cpu model play with "none" machine'
(20171213132407.5227-1-david@redhat.com) and 's390x/tcg: don't include z13
features in the qemu model' (20171213171512.17601-1-david@redhat.com) into
patch]
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The Set-Program-Parameter facility (also known as Load-Program-Parameter
facility) provides the LPP instruction used to load the program
parameter. We already implement that instruction in TCG, so add it to our
list.
Note: Not documented in the PoP but in "The Load-Program-Parameter and
CPU-Measurement Facilities" (SA23-2260-05) document.
While at it, make the whole list ordered (according to cpu_features_def.h).
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171208160207.26494-14-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
It only provides the EXTRACT CPU TIME instruction. We can reuse the stpt
helper, which calculates the CPU timer value.
The instruction is not privileged, but we don't have a CPU timer
value in the linux-user case, so we simply reuse cpu_get_host_ticks() to
produce some descending value.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171208160207.26494-13-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
KVM suppresses SIGA, setting cc=3. Let's do the same for TCG, so we're at
least equal.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171208160207.26494-12-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Just like KVM does, we should suppress this instruction:
When this instruction is not provided, it is
checked for privileged operation exception and the
instruction is suppressed by the machine
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171208160207.26494-11-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Let's just wire it up like KVM.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171208160207.26494-10-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Let's handle it just like KVM:
Depending on the model, this instruction may not be
provided. When this instruction is not provided, it is
checked for operand exception and privileged-operation
exception, and then is suppressed.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171208160207.26494-9-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
With this facility, OI/OIY, NI/NIY and XI/XIY are atomic. All operate on
one byte (MO_UB). Emulate old behavior.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171208160207.26494-8-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The semantics of ASI/ASGI/ALSI/ALSGI changed. Let's implement them just
like LOAD AND ADD, so they are atomic. Emulate old behavior.
This fixes random crashes when booting a Linux kernel compiled for
z196+ with SMP + MTTCG.
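Roughly speaking (an illustrative sketch, not the actual translate.c
hunk), the add to memory becomes a single atomic read-modify-write
TCG op; 'old', 'addr', 'inc' and 'mem_idx' are assumed to be set up by
the translator:

    /* Sketch: fetch-and-add on the 64-bit operand, aligned, in one step. */
    tcg_gen_atomic_fetch_add_i64(old, addr, inc, mem_idx, MO_TEQ | MO_ALIGN);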
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171208160207.26494-7-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We were not yet using the value of the TOD Programmable Register.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171208160207.26494-5-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Needed for machine check handling inside Linux (when restoring registers).
Except for SIGP and machine checks, we don't make use of the register
yet. Sufficient for now.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171208160207.26494-4-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The architecture mode indication wasn't stored. The split of certain
64-bit fields was unnecessary. Also, the complete clock comparator, not
just bits 0-55 (starting at byte 1), was stored.
We now generate a proper MCIC via the same helper we use for KVM.
There is more to clean up, but we will change the other parts later on
either way.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171208160207.26494-3-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We'll need it later on in two places. Refactor it to just indicate the
validity bits. While at it, introduce a define for the used CR14 bit (we'll
also need it later on).
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171208160207.26494-2-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Only one user left, get rid of it so we don't get any new users.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171130162744.25442-17-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
All users are gone; we can finally drop it and make sure that all new
program interrupt injections take care of the retaddr, as they have to
use s390_program_interrupt() now.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171130162744.25442-16-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
STSI needs some more love, but let's do one step at a time.
We can now drop potential_page_fault().
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171130162744.25442-15-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We can now drop updating the cc.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171130162744.25442-13-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Now we can drop the two save statements in the translate function.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171130162744.25442-12-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Now we can drop potential_page_fault(). While at it, move the
unlock further up; it looks cleaner.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171130162744.25442-11-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
As we handle the retaddr in all cases properly now, we can drop it.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171130162744.25442-10-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
s390_cpu_virt_mem_rw() must always return, so callers can react to
an exception (e.g. see ioinst_handle_stcrw()).
Therefore, using program_interrupt() is wrong. Fix that up.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171130162744.25442-9-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
s390_cpu_virt_mem_rw() must always return, so callers can react to
an exception (e.g. see ioinst_handle_stcrw()).
However, for TCG we always have to exit the cpu loop (and restore the
cpu state before that) if we injected a program interrupt. So let's
introduce and use s390_cpu_virt_mem_handle_exc() in code that is not
purely KVM.
Directly pass the retaddr we already have available in these functions.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171130162744.25442-8-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Needed to later drop potential_page_fault() from the diag TCG translate
function.
Convert program_interrupt() to s390_program_interrupt() directly, making
use of the passed address.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171130162744.25442-7-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Once we wire up TCG, we will need the retaddr to correctly inject
program interrupts. As we want to get rid of the function
program_interrupt(), convert PCI code too.
For KVM, we can simply use RA_IGNORED.
Convert program_interrupt() to s390_program_interrupt() directly, making
use of the passed address.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171130162744.25442-6-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
TCG needs the retaddr when injecting an interrupt. Let's just pass it
along and use RA_IGNORED for KVM. The value will be completely ignored for
KVM.
Convert program_interrupt() to s390_program_interrupt() directly, making
use of the passed address.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171130162744.25442-5-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
It is broken and not even wired up. We'll add a new handler soon, but
that will live somewhere else.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171130162744.25442-4-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
This allows us to easily convert more callers of program_interrupt() and to
easily introduce new exceptions without forgetting about the cpu state
reset.
Use s390_program_interrupt() in places where we already had the same
pattern. We will later get rid of program_interrupt().
RA != 0 checks are already done behind the scenes.
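For orientation, a sketch of the call shape (argument names here are
descriptive, not necessarily the exact prototype):

    /* code: program interruption code; ilen: instruction length;
     * ra: return address used to restore the CPU state for TCG
     * (RA_IGNORED for KVM callers). */
    s390_program_interrupt(env, code, ilen, ra);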
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171130162744.25442-2-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
It is not used anywhere.
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
valgrind pointed out that we call KVM_S390_GET_IRQ_STATE with an
undefined value for flags. Kernels prior to 4.15 did not use that
field, and later kernels ignore it for compatibility reasons, but we
had better play it safe.
The same is true for SET_IRQ_STATE. We should make sure not to use the
flags field there, either.
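A minimal sketch of the fix (struct and ioctl names are from the KVM
UAPI; the QEMU-side buffer variables are illustrative):

    struct kvm_s390_irq_state irq_state = {};   /* zero-init: .flags == 0 */

    irq_state.buf = (uint64_t)(uintptr_t)irq_buf;   /* illustrative buffer */
    irq_state.len = irq_buf_len;
    int r = kvm_vcpu_ioctl(CPU(cpu), KVM_S390_GET_IRQ_STATE, &irq_state);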
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <20171122142627.73170-2-borntraeger@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Now that do_ats_write() is entirely in control of whether to
generate a 32-bit PAR or a 64-bit PAR, we can make it use the
correct (complicated) condition for doing so.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1512503192-2239-13-git-send-email-peter.maydell@linaro.org
[PMM: Rebased Edgar's patch on top of get_phys_addr() refactoring;
use arm_s1_regime_using_lpae_format() rather than
regime_using_lpae_format() because the latter will assert
if passed ARMMMUIdx_S12NSE0 or ARMMMUIdx_S12NSE1;
updated commit message appropriately]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
All of the callers of get_phys_addr() and arm_tlb_fill() now ignore
the FSR values they return, so we can just remove the argument
entirely.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: 1512503192-2239-12-git-send-email-peter.maydell@linaro.org
In do_ats_write(), rather than using the FSR value from get_phys_addr(),
construct the PAR values using the information in the ARMMMUFaultInfo
struct. This allows us to create a PAR of the correct format regardless
of what the translation table format is.
For the moment we leave the condition for "when should this be a
64 bit PAR" as it was previously; this will need to be fixed to
properly support AArch32 Hyp mode.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: 1512503192-2239-11-git-send-email-peter.maydell@linaro.org
Now that ARMMMUFaultInfo is guaranteed to have enough information
to construct a fault status code, we can pass it in to the
deliver_fault() function and let it generate the correct type
of FSR for the destination, rather than relying on the value
provided by get_phys_addr().
I don't think there are any cases the old code was getting
wrong, but this is more obviously correct.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: 1512503192-2239-10-git-send-email-peter.maydell@linaro.org
Make get_phys_addr_pmsav8() return a fault type in the ARMMMUFaultInfo
structure, which we convert to the FSC at the callsite.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: 1512503192-2239-9-git-send-email-peter.maydell@linaro.org
Make get_phys_addr_pmsav7() return a fault type in the ARMMMUFaultInfo
structure, which we convert to the FSC at the callsite.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: 1512503192-2239-8-git-send-email-peter.maydell@linaro.org
Make get_phys_addr_pmsav5() return a fault type in the ARMMMUFaultInfo
structure, which we convert to the FSC at the callsite.
Note that PMSAv5 does not define any guest-visible fault status
register, so the different "fsr" values we were previously
returning are entirely arbitrary. So we can just switch to using
the most appropriate fi->type values without worrying that we
need to special-case the FaultInfo->FSC conversion for PMSAv5.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: 1512503192-2239-7-git-send-email-peter.maydell@linaro.org
Make get_phys_addr_lpae() return a fault type in the ARMMMUFaultInfo
structure, which we convert to the FSC at the callsite.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: 1512503192-2239-6-git-send-email-peter.maydell@linaro.org
Make get_phys_addr_v6() return a fault type in the ARMMMUFaultInfo
structure, which we convert to the FSC at the callsite.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: 1512503192-2239-5-git-send-email-peter.maydell@linaro.org
Make get_phys_addr_v5() return a fault type in the ARMMMUFaultInfo
structure, which we convert to the FSC at the callsite.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: 1512503192-2239-4-git-send-email-peter.maydell@linaro.org
All the callers of arm_ldq_ptw() and arm_ldl_ptw() ignore the value
that those functions store in the fsr argument on failure: if they
return failure to their callers they will always overwrite the fsr
value with something else.
Remove the argument from these functions and S1_ptw_translate().
This will simplify removing fsr from the calling functions.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: 1512503192-2239-3-git-send-email-peter.maydell@linaro.org
Currently get_phys_addr() and its various subfunctions return
a hard-coded fault status register value for translation
failures. This is awkward because FSR values these days may
be either long-descriptor format or short-descriptor format.
Worse, the right FSR type to use doesn't depend only on the
translation table being walked -- some cases, like fault
info reported to AArch32 EL2 for some kinds of ATS operation,
must be in long-descriptor format even if the translation
table being walked was short format. We can't get those cases
right with our current approach.
Provide fields in the ARMMMUFaultInfo struct which allow
get_phys_addr() to provide sufficient information for a caller to
construct an FSR value themselves, and utility functions which do
this for both long and short format FSR values, as a first step in
switching get_phys_addr() and its children to only returning the
failure cause in the ARMMMUFaultInfo struct.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: 1512503192-2239-2-git-send-email-peter.maydell@linaro.org
Implement the TT instruction, which queries the security
state and access permissions of a memory location.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1512153879-5291-8-git-send-email-peter.maydell@linaro.org
For the TT instruction we're going to need to do an MPU lookup that
also tells us which MPU region the access hit. This requires us
to do the MPU lookup without first doing the SAU security access
check, so pull the MPU lookup parts of get_phys_addr_pmsav8()
out into their own function.
The TT instruction also needs to know the MPU region number which
the lookup hit, so provide this information to the caller of the
MPU lookup code, even though get_phys_addr_pmsav8() doesn't
need to know it.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1512153879-5291-7-git-send-email-peter.maydell@linaro.org
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
The TT instruction is going to need to look up the MMU index
for a specified security and privilege state. Refactor the
existing arm_v7m_mmu_idx_for_secstate() into a version that
lets you specify the privilege state and one that uses the
current state of the CPU.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1512153879-5291-6-git-send-email-peter.maydell@linaro.org
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
For M profile, we currently have an mmu index MNegPri for
"requested execution priority negative". This fails to
distinguish "requested execution priority negative, privileged"
from "requested execution priority negative, usermode", but
the two can return different results for MPU lookups. Fix this
by splitting MNegPri into MNegPriPriv and MNegPriUser, and
similarly for the Secure equivalent MSNegPri.
This takes us from 6 M profile MMU modes to 8, which means
we need to bump NB_MMU_MODES; this is OK since the point
where we are forced to reduce TLB sizes is 9 MMU modes.
(It would in theory be possible to stick with 6 MMU indexes:
{mpu-disabled,user,privileged} x {secure,nonsecure} since
in the MPU-disabled case the result of an MPU lookup is
always the same for both user and privileged code. However
we would then need to rework the TB flags handling to put
user/priv into the TB flags separately from the mmuidx.
Adding an extra couple of mmu indexes is simpler.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1512153879-5291-5-git-send-email-peter.maydell@linaro.org
When we added the ARMMMUIdx_MSUser MMU index we forgot to
add it to the case statement in regime_is_user(), so we
weren't treating it as unprivileged when doing MPU lookups.
Correct the omission.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1512153879-5291-4-git-send-email-peter.maydell@linaro.org
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
In ARMv7M the CPU ignores explicit writes to CONTROL.SPSEL
in Handler mode. In v8M the behaviour is slightly different:
writes to the bit are permitted but will have no effect.
We've already done the hard work to handle the value in
CONTROL.SPSEL being out of sync with what stack pointer is
actually in use, so all we need to do to fix this last loose
end is to update the condition we use to guard whether we
call write_v7m_control_spsel() on the register write.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1512153879-5291-3-git-send-email-peter.maydell@linaro.org
For v8M it is possible for the CONTROL.SPSEL bit value and the
current stack to be out of sync. This means we need to update
the checks used in reads and writes of the PSP and MSP special
registers to use v7m_using_psp() rather than directly checking
the SPSEL bit in the control register.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1512153879-5291-2-git-send-email-peter.maydell@linaro.org
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
The refactoring of commit 296e5a0a6c has a nasty bug:
it accidentally dropped the generation of code to raise
the UNDEF exception when disas_thumb2_insn() returns nonzero.
This means that 32-bit Thumb2 instruction patterns that
ought to UNDEF just act like nops instead. This is likely
to break any number of things, including the kernel's "disable
the FPU and use the UNDEF exception to identify when to turn
it back on again" trick.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1513006964-3371-1-git-send-email-peter.maydell@linaro.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Occasionally in Linux guests on x86_64 we're seeing logs like:
ppc_set_irq: 0x55b4e0d562f0 n_IRQ 8 level 1 => pending 00000100req 00000004
when they should read:
ppc_set_irq: 0x55b4e0d562f0 n_IRQ 8 level 1 => pending 00000100req 00000002
The "00000004" is CPU_INTERRUPT_EXITTB yet the code calls
cpu_interrupt(cs, CPU_INTERRUPT_HARD) ("00000002") in this function
just before the log message. Something is causing the HARD bit setting
to get lost.
The knock-on effect of losing that bit is that decrementer timer interrupts
don't get delivered, which causes the guest to sit idle in its idle handler
and 'hang'.
The issue occurs due to races from code which sets CPU_INTERRUPT_EXITTB.
Rather than poking directly into cs->interrupt_request, that code needs to:
a) hold BQL
b) use the cpu_interrupt() helper
This patch fixes the call sites to do this, fixing the hang. The calls
are made from a variety of contexts, so a helper function is added to handle
the necessary locking. This can likely be improved and optimised in the future,
but it ensures the code is correct and doesn't lock up as it stands today.
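A sketch of the shape of such a helper (the name is hypothetical; the
real one lives in the ppc target code):

    /* Hypothetical sketch: take the BQL if we don't already hold it, then
     * go through cpu_interrupt() instead of poking cs->interrupt_request. */
    static void ppc_set_exittb_interrupt(CPUState *cs)
    {
        bool locked = qemu_mutex_iothread_locked();

        if (!locked) {
            qemu_mutex_lock_iothread();
        }
        cpu_interrupt(cs, CPU_INTERRUPT_EXITTB);
        if (!locked) {
            qemu_mutex_unlock_iothread();
        }
    }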
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The msr invalidation code (commits 993eb and 2360b) inverts all
bits except MSR_TGPR and MSR_HVB. On non-PowerPC-601 processors
this leads to an incorrect change of excp_prefix in the hreg_store_msr()
function. The problem is that the new msr value gets multiplied by msr_mask
while the inverted msr does not; thus the values of the MSR_EP bit in the new
msr value and in the inverted msr are distinct, so that excp_prefix changes
but should not.
Signed-off-by: Kurban Mallachiev <mallachiev@ispras.ru>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
cpu->compat_pvr is used to store the current compat mode of the cpu.
On the receiving side during incoming migration we check compatibility
with the compat mode by calling ppc_set_compat(). However we fail to set
the compat mode with the hypervisor since the "new" compat mode doesn't
differ from the current (due to a "cpu->compat_pvr != compat_pvr" check).
This means that kvm runs the vcpus without a compat mode, which is the
incorrect behaviour. The implication is that a compatibility mode
will never be in effect after migration.
To fix this so that the compat mode is correctly set with the
hypervisor, store the desired compat mode and reset cpu->compat_pvr to
zero before calling ppc_set_compat().
Fixes: 5dfaa532 ("ppc: fix ppc_set_compat() with KVM PR")
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Migration of a system under stress (for example, with
"stress-ng --numa 2") triggers on the destination
some kernel watchdog messages like:
NMI watchdog: BUG: soft lockup - CPU#0 stuck for 3489660870s!
NMI watchdog: BUG: soft lockup - CPU#1 stuck for 3489660884s!
This problem appears with the changes introduced by commit
42043e4 ("spapr: clock should count only if vm is running");
I think this commit only triggers the problem.
The kernel computes the soft lockup duration using the
Virtual Timebase register (VTB), not using the Timebase
Register (TBR, the one 42043e4 stops).
It appears VTB is not migrated, so this patch adds it in
the list of the SPRs to migrate, and fixes the problem.
For the migration, I've tested a migration from qemu-2.8.0 and
pseries-2.8.0 to a patched master (qemu-2.11.0-rc1). The received
VTB is 0 (as it is not initialized by qemu-2.8.0), but the value
seems to be ignored by KVM and a non-zero VTB is used by the kernel.
I have no explanation for that, but as the original problem appears
only with SMP system under stress I suspect some problems in KVM
(I think because VTB is shared by all threads of a core).
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
In do_ats_write(), rather than using extended_addresses_enabled() to
decide whether the value we get back from get_phys_addr() is a 64-bit
format PAR or a 32-bit one, use arm_s1_regime_using_lpae_format().
This is not really the correct answer, because the PAR format
depends on the AT instruction being used, not just on the
translation regime. However getting this correct requires a
significant refactoring, so that get_phys_addr() returns raw
information about the fault which the caller can then assemble
into a suitable FSR/PAR/syndrome for its purposes, rather than
get_phys_addr() returning a pre-formatted FSR.
However this change at least improves the situation by making
the PAR work correctly for address translation operations done
at AArch64 EL2 on the EL2 translation regime. In particular,
this is necessary for Xen to be able to run in our emulation,
so this seems like a safer interim fix given that we are in freeze.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: 1509719814-6191-1-git-send-email-peter.maydell@linaro.org
The CPU ID registers ID_AA64PFR0_EL1, ID_PFR1_EL1 and ID_PFR1
have a field for reporting presence of GICv3 system registers.
We need to report this field correctly in order for Xen to
work as a guest inside QEMU emulation. We mustn't incorrectly
claim the sysregs exist when they don't, though, or Linux will
crash.
Unfortunately the way we've designed the GICv3 emulation in QEMU
puts the system registers as part of the GICv3 device, which
may be created after the CPU proper has been realized. This
means that we don't know at the point when we define the ID
registers what the correct value is. Handle this by switching
them to calling a function at runtime to read the value, where
we can fill in the GIC field appropriately.
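A simplified sketch of the mechanism (function name and field details
are illustrative; the real definitions are in target/arm/helper.c):

    /* Instead of a constant .resetvalue, the ID register gets a .readfn,
     * so the GIC sysreg field can be filled in once the GICv3 exists. */
    static uint64_t id_pfr_gic_read(CPUARMState *env, const ARMCPRegInfo *ri)
    {
        ARMCPU *cpu = arm_env_get_cpu(env);
        uint64_t value = cpu->id_aa64pfr0;      /* base value from the CPU */

        if (env->gicv3state) {
            value |= 1ull << 24;                /* GIC field: sysregs present */
        }
        return value;
    }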
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: 1510066898-3725-1-git-send-email-peter.maydell@linaro.org
Currently, multi-threaded TCG with > 1 VCPU gets stuck during IPL, when
the bios tries to switch to the loaded kernel via DIAG 308.
As run_on_cpu() is used, we run into a deadlock after handling the reset.
We need the iolock (just like KVM).
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171116170526.12643-4-david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Looks like the last fix + cleanup introduced another bug (for now, Linux
guests don't seem to care): we store the CRs into the ARs.
Fixes: 947a38bd6f ("s390x/kvm: fix and cleanup storing CPU status")
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171116170526.12643-2-david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Use of GETPC must be restricted to those functions that are
directly called from TCG generated code.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Fixes: 2399d4e7ce
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We use raw memory primitives along the !parallel_cpus paths in order to
simplify the endianness handling. Because of that, we did not benefit
from the generic changes to cpu_ldst_user_only_template.h.
The simplest fix is to manipulate helper_retaddr here.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Fixes the following warning when compiling with gcc 5.4.0 with -O1
optimizations and --enable-debug:
target/arm/translate-a64.c: In function ‘aarch64_tr_translate_insn’:
target/arm/translate-a64.c:2361:8: error: ‘post_index’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
if (!post_index) {
^
target/arm/translate-a64.c:2307:10: note: ‘post_index’ was declared here
bool post_index;
^
target/arm/translate-a64.c:2386:8: error: ‘writeback’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
if (writeback) {
^
target/arm/translate-a64.c:2308:10: note: ‘writeback’ was declared here
bool writeback;
^
Note that idx comes from selecting 2 bits, and therefore its value
can be at most 3.
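One way to make that explicit to the compiler (a sketch with placeholder
case bodies; the actual patch may differ) is an exhaustive switch with an
unreachable default:

    /* Sketch: the case bodies are placeholders; the point is the
     * unreachable default for a value formed from only two bits. */
    switch (idx) {
    case 0:
    case 1:
    case 2:
    case 3:
        /* set post_index and writeback for each variant */
        break;
    default:
        g_assert_not_reached();
    }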
Signed-off-by: Emilio G. Cota <cota@braap.org>
Acked-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1510087611-1851-1-git-send-email-cota@braap.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We added the entry to insn-data.def, but failed to update op_risbg
to match. No need to special-case the imask inversion, since that
is already ~0 for RISBG (and now RISBGN).
Fixes: 375ee58bed
Fixes: https://bugs.launchpad.net/qemu/+bug/1701798 (s390x part)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20171107145546.767-1-richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Tested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
This feature is present for some targets in the bfd disassembler(s).
Implement it generically for all capstone users.
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
While trying to make KVM PR usable again, commit 5dfaa532ae introduced a
regression: the current compat_pvr value is passed to KVM instead of the
new one. This means that we always pass 0 instead of the max-cpu-compat
PVR during the initial machine reset. And at CAS time, we either pass
the PVR from the command line or even don't call kvmppc_set_compat() at
all, ie, the PCR will not be set as expected.
For example if we start a big endian fedora26 guest in power7 compat
mode on a POWER8 host, we get this in the guest:
$ cat /proc/cpuinfo
processor : 0
cpu : POWER7 (architected), altivec supported
clock : 4024.000000MHz
revision : 2.0 (pvr 004d 0200)
timebase : 512000000
platform : pSeries
model : IBM pSeries (emulated by qemu)
machine : CHRP IBM pSeries (emulated by qemu)
MMU : Hash
but the guest can still execute POWER8 instructions, and the following
program succeeds:
int main()
{
    asm("vncipher 0,0,0"); // ISA 2.07 instruction
}
Let's pass the new compat_pvr to kvmppc_set_compat(); with that, the
program fails with SIGILL as expected.
Reported-by: Nageswara R Sastry <rnsastry@linux.vnet.ibm.com>
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
In the user-mode-only version of sparc_cpu_handle_mmu_fault(),
we must save the fault address for a data fault into the CPU
state's mmu registers, because the code in linux-user/main.c
expects to find it there in order to populate the si_addr
field of the guest siginfo.
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
For AArch32 LDREXD and STREXD, architecturally the 32-bit word at the
lowest address is always Rt and the one at addr+4 is Rt2, even if the
CPU is big-endian. Our implementation does these with a single
64-bit store, so if we're big-endian then we need to put the two
32-bit halves together in the opposite order to little-endian,
so that they end up in the right places. We were trying to do
this with the gen_aa32_frob64() function, but that is not correct
for the usermode emulator, because there is a distinction
between "load a 64 bit value" (which does a BE 64-bit access
and doesn't need swapping) and "load two 32 bit values as one
64 bit access" (where we still need to do the swapping, like
system mode BE32).
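A minimal standalone sketch of the ordering issue described above (this is
illustrative only, not the translate.c code):
#include <stdbool.h>
#include <stdint.h>
/* Build the 64-bit value for a single store of Rt/Rt2, such that the
 * 32-bit word at the lower address is always Rt. */
static uint64_t pack_strexd(uint32_t rt, uint32_t rt2, bool big_endian)
{
    /* On a big-endian 64-bit store the high half lands at the lower
     * address, so the halves are swapped relative to little-endian. */
    return big_endian ? ((uint64_t)rt << 32) | rt2
                      : ((uint64_t)rt2 << 32) | rt;
}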
Fixes: https://bugs.launchpad.net/qemu/+bug/1725267
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1509622400-13351-1-git-send-email-peter.maydell@linaro.org
On a successful address translation instruction, PAR is supposed to
contain cacheability and shareability attributes determined by the
translation. We previously returned 0 for these bits (in line with the
general strategy of ignoring caches and memory attributes), but some
guest OSes may depend on them.
This patch collects the attribute bits in the page-table walk, and
updates PAR with the correct attributes for all LPAE translations.
Short descriptor formats still return 0 for these bits, as in the
prior implementation.
Signed-off-by: Andrew Baumann <Andrew.Baumann@microsoft.com>
Message-id: 20171031223830.4608-1-Andrew.Baumann@microsoft.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
WFI/E are often, but not always, 4 bytes long. When they are, we need to
set the IL bit (ARM_EL_IL, i.e. 1 << ARM_EL_IL_SHIFT) in the syndrome
register.
Pass the instruction length to HELPER(wfi), use it to decrement pc
appropriately and to pass an is_16bit flag to syn_wfx, which sets the
IL bit if needed.
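A hedged sketch of the syndrome construction (constants copied from the
ARM syndrome encoding as in target/arm/internals.h; the real syn_wfx()
takes further arguments):
#include <stdbool.h>
#include <stdint.h>
#define ARM_EL_EC_SHIFT 26
#define ARM_EL_IL       (1u << 25)   /* 1 << ARM_EL_IL_SHIFT */
#define EC_WFX_TRAP     0x01
static uint32_t syn_wfx_sketch(bool is_16bit)
{
    uint32_t syn = EC_WFX_TRAP << ARM_EL_EC_SHIFT;
    if (!is_16bit) {
        syn |= ARM_EL_IL;            /* 32-bit instruction: IL = 1 */
    }
    return syn;
}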
Set dc->insn in both arm_tr_translate_insn and thumb_tr_translate_insn.
Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: alpine.DEB.2.10.1710241055160.574@sstabellini-ThinkPad-X260
[PMM: move setting of dc->insn for Thumb so it is correct for 32 bit insns]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/borntraeger/tags/s390x-20171030' into staging
s390x: fixups for 2.11
- missing \r in the BIOS console output
- CPU type name is now "s390x-cpu"
- fixup for the host-model on z14 and older machine versions
# gpg: Signature made Mon 30 Oct 2017 08:34:15 GMT
# gpg: using RSA key 0x117BBC80B5A61C7C
# gpg: Good signature from "Christian Borntraeger (IBM) <borntraeger@de.ibm.com>"
# Primary key fingerprint: F922 9381 A334 08F9 DBAB FBCA 117B BC80 B5A6 1C7C
* remotes/borntraeger/tags/s390x-20171030:
s390-*.img: update s390 bios with latest fixes
s390-ccw: print carriage return with new lines
s390x/kvm: use cpu model for gscb on compat machines
target/s390x: change CPU type name to "s390x-cpu"
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Starting a guest with
<os>
<type arch='s390x' machine='s390-ccw-virtio-2.9'>hvm</type>
</os>
<cpu mode='host-model'/>
on an IBM z14 results in
"qemu-system-s390x: Some features requested in the CPU model are not
available in the configuration: gs"
This is because guarded storage is fenced for compat machines that did
not have guarded storage support. While this prevents future migration
abort (by not starting the guest at all), not being able to start a
"host-model" guest is very much unexpected. As it turns out, even if we
would modify libvirt to not expand the cpu model to contain "gs" for
compat machines, it cannot guarantee that a migration will succeed. For
example if the kernel changes its features (or the user has nested=1 on
one host but not on the other) the migration will fail nevertheless. So
instead of fencing "gs" for machines <= 2.9 lets allow it for all
machine types that support the CPU model. This will make "host-model"
runnable all the time, while relying on the CPU model to reject invalid
migration attempts. We also need to change the migration for guarded
storage.
Additional discussions about host-model are still pending but are out
of scope of this patch.
Suggested-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Acked-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Until now, e.g. host-s390-cpu wasn't exposed to the user. cpu-add, -cpu
and the CPU model qmp interfaces didn't care about the actual type,
as that information was hidden.
This changed with CPU hotplug via device_add. Now the type is visible to
the user. Before we get that supported in a stable version, this is our
last chance to change it.
So change it from "s390-cpu" to "s390x-cpu", to match the architecture
name. Example names are then e.g. z14-s390x-cpu or qemu-s390x-cpu.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171020115803.14093-1-david@redhat.com>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
introduce SPARC_CPU_TYPE_NAME macro and use it to
construct cpu type names.
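For reference, the macro pattern introduced across this series is roughly
the following (a sketch; the exact suffix string is per target):
#define SPARC_CPU_TYPE_SUFFIX "-" TYPE_SPARC_CPU
#define SPARC_CPU_TYPE_NAME(model) model SPARC_CPU_TYPE_SUFFIX
/* e.g. with TYPE_SPARC_CPU being "sparc-cpu",
 * SPARC_CPU_TYPE_NAME("LEON3") yields "LEON3-sparc-cpu". */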
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1507211474-188400-32-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
introduce TRICORE_CPU_TYPE_NAME macro and use it to construct
cpu type names. While at it move cpu type_infos into one
array and register it directly with type_init_from_array()
instead of custom tricore_cpu_register_types()/cpu_register()
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1507211474-188400-30-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
use new UNICORE32_CPU_TYPE_NAME to compose CPU type
name and get rid of intermediate
UniCore32CPUInfo/uc32_cpu_register_types()
which is replaced by static TypeInfo array and
type_init_from_array()
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1507211474-188400-28-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
use new XTENSA_CPU_TYPE_NAME to compose the CPU type name
to bring xtensa in line with all other targets that
will use a similar macro.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1507211474-188400-25-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
the field contains the upper-cased cpu model name and is used
for printing the supported cpu model names for '-cpu help'.
Considering that cpu model lookup in superh_cpu_class_by_name()
is case-insensitive, we can drop the upper-casing when
printing the supported cpu list and use the cpu type directly
to do the same, by cutting SUPERH_CPU_TYPE_SUFFIX out of the
typename.
It allows us to remove SuperHCPUClass::name, which practically
duplicates the names defined by the TYPE_SH*_CPU definitions, and
to simplify sh*_class_init()/SuperHCPUClass a bit.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1507211474-188400-24-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
currently for sh4 the cpu_model argument of the '-cpu' option
can be either a 'cpu model' name or a cpu typename.
However, '-cpu' typically takes a 'cpu model' name, and the
cpu type for the sh4 target isn't advertised publicly
('-cpu help' prints only 'cpu model' names), so we
shouldn't care about this use case (it's more of a bug).
1. Drop '-cpu cpu_typename' to align with the rest of
targets.
2. Compose the searched-for typename from the cpu model and use
it with object_class_by_name() directly, instead of the
over-complicated
object_class_get_list() +
g_slist_find_custom() + superh_cpu_name_compare() chain.
With #1 dropped, #2 can be used for both lookups, which
simplifies superh_cpu_class_by_name() quite a bit.
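A minimal sketch of the simplified lookup, assuming the
SUPERH_CPU_TYPE_NAME macro from this series (the case-insensitive
handling mentioned above is omitted):
static ObjectClass *superh_cpu_class_by_name(const char *cpu_model)
{
    char *typename = g_strdup_printf(SUPERH_CPU_TYPE_NAME("%s"), cpu_model);
    ObjectClass *oc = object_class_by_name(typename);

    g_free(typename);
    return oc;
}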
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Acked-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1507211474-188400-23-git-send-email-imammedo@redhat.com>
[ehabkost: Include fixup sent by Igor]
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
introduce SUPERH_CPU_TYPE_NAME macro and use it to construct
cpu type names. While at it move cpu type_infos into one
array and register it directly with type_init_from_array()
instead of custom superh_cpu_register_types()
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1507211474-188400-22-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
use new OPENRISC_CPU_TYPE_NAME to compose CPU type name and get
rid of intermediate OpenRISCCPUInfo/openrisc_cpu_register_types()
which is replaced by static TypeInfo array.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1507211474-188400-18-git-send-email-imammedo@redhat.com>
Acked-by: Stafford Horne <shorne@gmail.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
introduce MOXIE_CPU_TYPE_NAME macro and consistently use it
to construct cpu type names. While at it replace dynamic
cpu type name composition with static data.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1507211474-188400-16-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
It 'works' with the default CPU only because of a bug in
moxie_cpu_class_by_name(), where it treats cpu_model
as a type name, and the default cpu_model also happens to be
a type name. But specifying a cpu explicitly on the CLI,
e.g. '-cpu MoxieLite', makes QEMU fail since
moxie_cpu_class_by_name() doesn't translate cpu_model
to a cpu type and fails to find the corresponding object class.
Fix moxie_cpu_class_by_name() to do a proper
cpu_model -> cpu type
translation and fix the default cpu_model to be a cpu model
instead of a typename.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1507211474-188400-15-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
use new M68K_CPU_TYPE_NAME to compose CPU type names
and get rid of intermediate M68kCPUInfo/register_cpu_type()
which is replaced by static TypeInfo array.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Acked-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <1507211474-188400-12-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
introduce LM32_CPU_TYPE_NAME macro and consistently use it
to construct cpu type names. While at it replace dynamic
cpu type name composition with static data.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Acked-by: Michael Walle <michael@walle.cc>
Message-Id: <1507211474-188400-9-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
replace ambiguous TYPE macro with a new CRIS_CPU_TYPE_NAME
and use it consistently in the code.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1507211474-188400-7-git-send-email-imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Introduce an ALPHA_CPU_TYPE_NAME macro to replace the rather non-unique
TYPE macro that alpha uses. With the new macro it will follow
the same naming convention as other targets.
While at it, put the scattered TypeInfo into one array, which places the
type descriptions in one place and reduces code a bit.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Acked-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1507211474-188400-5-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Now that every target is using the disas_set_info hook,
the flags argument is unused. Remove it.
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The Capstone disassembler has its own big-endian fixup.
Doing this twice does not work, of course. Move our current
fixup from target/arm/cpu.c to disas/arm.c.
This makes read_memory_inner_func unused, so it can be removed.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This is identical for each target. So, move the initialization to
common code. Move the variable itself out of tcg_ctx and name it
cpu_env to minimize changes within targets.
This also means we can remove tcg_global_reg_new_{ptr,i32,i64},
since there are no longer global-register temps created by targets.
Reviewed-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Groundwork for supporting multiple TCG contexts.
The core of this patch is this change to tcg/tcg.h:
> -extern TCGContext tcg_ctx;
> +extern TCGContext tcg_init_ctx;
> +extern TCGContext *tcg_ctx;
Note that for now we set *tcg_ctx to whatever TCGContext is passed
to tcg_context_init -- in this case &tcg_init_ctx.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Thereby decoupling the resulting translated code from the current state
of the system.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Thereby decoupling the resulting translated code from the current state
of the system.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Thereby decoupling the resulting translated code from the current state
of the system.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Thereby decoupling the resulting translated code from the current state
of the system.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Thereby decoupling the resulting translated code from the current state
of the system.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Thereby decoupling the resulting translated code from the current state
of the system.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Thereby decoupling the resulting translated code from the current state
of the system.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Convert all existing readers of tb->cflags to tb_cflags, so that we
use atomic_read and therefore avoid undefined behaviour in C11.
Note that the remaining setters/getters of the field are protected
by tb_lock, and therefore do not need conversion.
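The accessor itself is essentially a one-line wrapper (a sketch matching
the description above):
/* Read tb->cflags without racing against the tb_lock-protected writers. */
static inline uint32_t tb_cflags(const TranslationBlock *tb)
{
    return atomic_read(&tb->cflags);
}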
Luckily all readers access the field via 'tb->cflags' (so no foo.cflags,
bar->cflags in the code base), which makes the conversion easily
scriptable:
FILES=$(git grep 'tb->cflags' target include/exec/gen-icount.h \
accel/tcg/translator.c | cut -f1 -d':' | sort | uniq)
perl -pi -e 's/([^.>])tb->cflags/$1tb_cflags(tb)/g' $FILES
perl -pi -e 's/([a-z->.]*)(->|\.)tb->cflags/tb_cflags($1$2tb)/g' $FILES
Then manually fixed the few errors that checkpatch reported.
Compile-tested for all targets.
Suggested-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Move target cpu tcg initialization to common code,
called from cpu_exec_realizefn.
Acked-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
When we used structures for TCGv_*, we needed a macro in order to
perform a comparison. Now that we use pointers, this is just clutter.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The GET and MAKE functions weren't really specific enough.
We now have a full complement of functions that convert exactly
between temporaries, arguments, tcgv pointers, and indices.
The target/sparc change is also a bug fix, which would have affected
a host that defines TCG_TARGET_HAS_extr[lh]_i64_i32, i.e. MIPS64.
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
In order to support multicore systems we move some of the previously
static state variables into the state of each core.
On the other hand, in order to allow timers to be synced between each
core, the ttcr (tick timer count register) is moved out of the core.
This is not as per the real hardware spec, which has a separate timer
counter per core, but it seems the simplest way to keep each clock in sync.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
Previously coreid and numcores were hard coded as 0 and 1 respectively
as OpenRISC QEMU did not have multicore support.
Multicore support is now being added so these registers need to have
configured values.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
This is a neat way to implement low address protection, whereby
only the first 512 bytes of the first two pages (each 4096 bytes) of
every address space are protected.
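In other words (a standalone sketch of just the address check, not the
actual mmu code):
#include <stdbool.h>
#include <stdint.h>
/* Low-address protection covers bytes 0-511 of the first two 4K pages,
 * i.e. addresses 0x0000-0x01ff and 0x1000-0x11ff. */
static bool lowprot_applies(uint64_t addr)
{
    return addr < 512 || (addr >= 4096 && addr < 4096 + 512);
}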
Store a tec of 0 for the access exception, this is what is defined by
Enhanced Suppression on Protection in case of a low address protection
(Bit 61 set to 0, rest undefined).
We have to make sure to pass the access address, not the masked page
address, into mmu_translate*().
Drop the check from testblock, so we can properly test this via
kvm-unit-tests.
This will check every access going through one of the MMUs.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171016202358.3633-3-david@redhat.com>
[CH: restored error message for access register mode]
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Simplify the error handling of the MSCH. Let the code detecting the
condition tell (in a less ambiguous way) how it's to be handled. No
changes in behavior.
Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Message-Id: <20171017140453.51099-8-pasic@linux.vnet.ibm.com>
[CH: fix return code for fctl != 0]
Reviewed-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Simplify the error handling of the HSCH. Let the code detecting the
condition tell (in a less ambiguous way) how it's to be handled. No
changes in behavior.
Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Message-Id: <20171017140453.51099-7-pasic@linux.vnet.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Simplify the error handling of the CSCH. Let the code detecting the
condition tell (in a less ambiguous way) how it's to be handled. No
changes in behavior.
Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Message-Id: <20171017140453.51099-6-pasic@linux.vnet.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Simplify the error handling of the XSCH. Let the code detecting the
condition tell (in a less ambiguous way) how it's to be handled. No
changes in behavior.
Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Message-Id: <20171017140453.51099-5-pasic@linux.vnet.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Simplify the error handling of the SSCH and RSCH handler avoiding
arbitrary and cryptic error codes being used to tell how the instruction
is supposed to end. Let the code detecting the condition tell how it's
to be handled in a less ambiguous way. It's best to handle SSCH and RSCH
in one go as the emulation of the two shares a lot of code.
For passthrough this change isn't pure refactoring, but changes the way
kernel reported EFAULT is handled. After clarifying the kernel interface
we decided that EFAULT shall be mapped to unit exception. Same goes for
unexpected error codes and absence of required ORB flags.
Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Message-Id: <20171017140453.51099-4-pasic@linux.vnet.ibm.com>
Tested-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com>
[CH: cosmetic changes]
Reviewed-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
s390-virtio-ccw.c is the sole user of s390x_new_cpu(),
so move this helper there.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1508253203-119237-1-git-send-email-imammedo@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
object_new() returns the cpu with refcnt == 1, and after realize
refcnt == 2*. s390x_new_cpu(), as the owner of the first reference,
should have released it on exit in both cases (on error and on
success) to avoid leaking it. Do so for both cases.
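The shape of the fix, as a hedged sketch (error handling trimmed, QOM
property API as of this QEMU version):
static S390CPU *s390x_new_cpu_sketch(const char *typename, Error **errp)
{
    S390CPU *cpu = S390_CPU(object_new(typename));
    Error *err = NULL;

    object_property_set_bool(OBJECT(cpu), true, "realized", &err);
    /* Drop the reference object_new() gave us, on success and on error:
     * once realized the device is referenced by its parent, and on error
     * this releases the half-created object. */
    object_unref(OBJECT(cpu));
    if (err) {
        error_propagate(errp, err);
        cpu = NULL;
    }
    return cpu;
}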
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1508247680-98800-2-git-send-email-imammedo@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
When we try to start a CPU with a WAIT PSW, we have to take care that
TCG will actually try to continue executing instructions.
We must therefore really only unhalt the CPU if we don't have a WAIT
PSW. Also document the special order for restart interrupts, which
load a new PSW and change the state to operating.
To keep KVM working, simply don't have a look at the WAIT bit when
loading the PSW. Otherwise the behavior of a restart interrupt when
a CPU stopped would be changed.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-31-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Refactor it to use s390_get_feat_block(). Directly write into the mapped
lowcore with stfl and make sure it is really only compiled if needed.
While at it, add an alignment check for STFLE and avoid
potential_page_fault() by properly restoring the CPU state.
Due to s390_get_feat_block(), we will now also indicate the
"Configuration-z-architectural-mode", which is with new SIGP code the
right thing to do.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-30-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Nothing is hindering us anymore from unlocking the restart code (used for
NMI).
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-29-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
As we properly implement it, allow it to be enabled.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-28-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
This effectively enables experimental SMP support. Floating interrupts are
still a mess, so allow it but print a big warning. There also seems
to be a problem with CPU hotplug (after the main loop started).
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-27-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
[CH: changed insn-data.def as pointed out by Richard]
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Thanks to Aurelien Jarno for doing this in his prototype.
We can flush the whole TLB as this should happen really rarely.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-26-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Implement them like KVM implements/handles them. Both can only be
triggered via SIGP instructions. RESET has (almost) the lowest priority if
the CPU is running, and the highest if the CPU is STOPPED. This is handled
in SIGP code already. On delivery, we only have to care about the
"CPU running" scenario.
STOP is defined to be delivered after all other interrupts have been
delivered. Therefore it has the actual lowest priority.
As both can wake up a CPU if sleeping, indicate them correctly to
external code (e.g. cpu_has_work()).
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-25-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Mostly analogous to the kernel/KVM version (so I assume the checks are
correct :) ). As a preparation for TCG.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-24-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
As preparation for TCG.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-23-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
As preparation for TCG.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-22-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Add it as preparation for TCG. Sensing could later be done completely
lockless.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-21-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Preparation for TCG; for KVM this is completely handled in the
kernel.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-20-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
For KVM, the KVM module decides when a STOP can be performed (when the
STOP interrupt can be processed). Factor it out so we can use it
later for TCG.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-19-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We want to use the same code base for TCG, so let's cleanly factor it
out.
The sigp mutex is currently not really needed, as everything is
protected by the iothread mutex. But this could change later, so leave
it in place and initialize it properly from common code.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-17-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Preparation for moving it out of kvm.c.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-16-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
This is called from SIGP code that is to be factored out, so let's
move it. Add a FIXME for TCG code in the future.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-15-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Factor it out into s390_store_status(), to be used also by TCG later on.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-14-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Preparation for factoring it out into !kvm code.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-13-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
No need to pass kvm_run. Pass parameters alphabetically ordered.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-12-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
KVM handles the wait PSW itself and triggers a WAIT ICPT in case it
really wants to sleep (disabled wait).
This will later allow us to change the order of loading a restart
interrupt and setting a CPU to OPERATING on SIGP RESTART without
changing KVM behavior.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-11-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
If we encounter a WAIT PSW, we have to halt immediately. Using
cpu_loop_exit() at this point feels wrong. Simply leaving
cs->exception_index set doesn't result in an immediate stop.
This is also necessary to properly handle SIGP STOP interrupts later.
The CPU_INTERRUPT_HALT will be processed immediately and will properly
set the CPU to halted (also resetting cs->exception_index to EXCP_HLT).
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-10-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
This will now also detect crashes under TCG. We can directly use
cpu->env.psw.addr instead of kvm_run, as we do a cpu_synchronize_state().
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-9-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Going to OPERATING here looks wrong. A CPU should never even be
!OPERATING at this point. Unhalting will already be done in
cpu_handle_halt() if there is work, so we can drop this statement
completely.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-8-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Interrupts can't wake such CPUs up. SIGP from other CPUs has to be used
to toggle the state.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-7-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We can now let go of INTERRUPT_EXT. When cr0 changes, we have to
revalidate if we now have a pending external interrupt, just like
when the PSW (or SYSTEM MASK only) changes.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-6-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Currently, enabling/disabling of interrupts is not really supported.
Let's improve interrupt handling code by explicitly checking for
deliverable interrupts only. This is the first step. Checking for
external interrupt subclasses will be done next.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-5-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Preparation for new TCG SIGP code. Especially also prepare for
indicating that another external call is already pending.
Take care of interrupt priority.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-4-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
There are still some leftovers from old virtio interrupts in there.
Most importantly, we don't have to queue service interrupts anymore.
Just like KVM, we can simply multiplex the SCLP service interrupts and
avoid the queue.
Also, now only valid parameters/cpu_addr will be stored on service
interrupts.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-3-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
External interrupts are currently all handled like floating external
interrupts, they are queued. Let's prepare for a split of floating
and local interrupts by turning INTERRUPT_EXT into a mask.
While we can have various floating external interrupts of one kind, there
is usually only one (or a fixed number) of the local external interrupts.
So turn INTERRUPT_EXT into a mask and properly indicate the kind of
external interrupt. Floating interrupts will have to be moved out of
the one CPU instance later once we have SMP support.
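A hedged sketch of what turning INTERRUPT_EXT into a mask means (bit
names illustrative):
/* One bit per kind of external interrupt, replacing the single
 * INTERRUPT_EXT flag. */
#define INTERRUPT_EXT_SERVICE          (1 << 0)   /* floating */
#define INTERRUPT_EXT_CPU_TIMER        (1 << 1)   /* local */
#define INTERRUPT_EXT_CLOCK_COMPARATOR (1 << 2)   /* local */
#define INTERRUPT_EXTERNAL_CALL        (1 << 3)   /* local, per CPU */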
The only floating external interrupts used right now are SERVICE
interrupts, so let's use that name. Following patches will clean up
SERVICE interrupt injection.
This gets rid of the ugly special handling for cpu timer and clock
comparator interrupts. And we really only store the parameters as
defined by the PoP.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-2-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
use generic cpu_model parsing introduced by
(6063d4c0f vl.c: convert cpu_model to cpu type and set of global properties before machine_init())
It allows us to:
* replace sPAPRMachineClass::tcg_default_cpu with
MachineClass::default_cpu_type
* drop cpu_parse_cpu_model() from hw/ppc/spapr.c and reuse the
one in vl.c
* simplify spapr_get_cpu_core_type() by removing the recursion
that is no longer needed, since alias lookup happens earlier in
vl.c and spapr_get_cpu_core_type() works only with the cpu type
resulting from that.
* spapr no longer needs to parse or depend on the being-phased-out
MachineState::cpu_model; all the parsing is done by generic
code and a target-specific callback.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
[dwg: Correct minor compile error]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
next commit will drop the ppc_cpu_lookup_alias() declaration from the
header and make it static, which will break its last user,
ppc_cpu_class_by_name(), since ppc_cpu_class_by_name() is defined before
ppc_cpu_lookup_alias(). To avoid this, move ppc_cpu_lookup_alias() right
before ppc_cpu_class_by_name().
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
consolidate 'host' core type registration by moving it from
KVM specific code into spapr_cpu_core.c, similar to how it's
done in the x86 target.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
replace sPAPRCPUCoreClass::cpu_class with the cpu type name,
since it was only needed to get that at the points where it was
accessed.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
there is a dedicated callback, CPUClass::parse_features,
whose purpose is to convert -cpu features into a set of
global properties AND deal with compat/legacy features
that couldn't be directly translated into CPU properties.
Create a ppc variant of it (ppc_cpu_parse_featurestr) and
move the 'compat=val' handling from spapr_cpu_core.c into it.
That removes a dependency of board/core code on cpu_model
parsing and lets us reuse the common -cpu parsing
introduced by 6063d4c0.
Set the "max-cpu-compat" property only if it exists; in practice
this should limit the 'compat' hack to the spapr machine and avoid
including machine/spapr headers in target/ppc/cpu.c.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
For POWER ISA v3.0, the XER bit CA32 needs to be set by the shift
right algebraic instructions whenever the CA bit is to be set. This
change affects the following instructions:
* Shift Right Algebraic Word (sraw[.])
* Shift Right Algebraic Word Immediate (srawi[.])
* Shift Right Algebraic Doubleword (srad[.])
* Shift Right Algebraic Doubleword Immediate (sradi[.])
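A hedged sketch of the common pattern in these helpers, assuming a
cpu_ca32 TCG global and an is_isa300() check as used by the surrounding
XER handling ('carry' stands for the already-computed CA value):
tcg_gen_mov_tl(cpu_ca, carry);
if (is_isa300(ctx)) {
    /* ISA v3.0: CA32 mirrors CA for the sraw/srawi/srad/sradi family. */
    tcg_gen_mov_tl(cpu_ca32, carry);
}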
Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
At the moment the only POWER9 model which is listed in qemu is v1.0 (aka
"DD1"). This is a very early (read, buggy) version which will never be
released to the public - it was included in qemu only for the convenience
of those doing bringup on the early silicon. For bonus points, we actually
had its PVR incorrect in the table (0x004e0000 instead of 0x004e0100). We
also never actually implemented the differences in behaviour (read, bugs)
that marked DD1 in qemu.
Now that we know the PVR for the substantially better v2.0 (DD2) chip,
include it and make it the default POWER9 in qemu. For the time being we
leave the DD1 definition in place for the poor souls (read, me) who still
need to work with DD1 hardware.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We don't have any 460 or 460F CPUs in QEMU, so the init functions
are just dead code. Let's simply remove them (translate_init.c
is already big enough without them).
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Besides being more correct, arbitrarily long instructions allow the
generation of a translation block that spans three pages. This
confuses the generator and even allows ring 3 code to poison the
translation block cache and inject code into other processes that are
in guest ring 3.
This is an improved (and more invasive) fix for commit 30663fd ("tcg/i386:
Check the size of instruction being translated", 2017-03-24). In addition
to being more precise (and generating the right exception, which is #GP
rather than #UD), it distinguishes better between page faults and too long
instructions, as shown by this test case:
#include <sys/mman.h>
#include <string.h>
#include <stdio.h>
int main()
{
    char *x = mmap(NULL, 8192, PROT_READ|PROT_WRITE|PROT_EXEC,
                   MAP_PRIVATE|MAP_ANON, -1, 0);
    memset(x, 0x66, 4096);
    x[4096] = 0x90;
    x[4097] = 0xc3;
    char *i = x + 4096 - 15;
    mprotect(x + 4096, 4096, PROT_READ|PROT_WRITE);
    ((void(*)(void)) i) ();
}
... which produces a #GP without the mprotect, and a #PF with it.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
These take care of advancing s->pc, and will provide a unified point
at which to check for the 15-byte instruction length limit.
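A hedged sketch of such a fetch helper (names hypothetical; the real
helpers wrap cpu_ldub_code and friends):
/* Fetch one code byte, advance the decoder, and provide the single place
 * to enforce the architectural 15-byte instruction length limit. */
static uint8_t x86_ldub_code_sketch(CPUX86State *env, DisasContext *s)
{
    if (s->pc - s->pc_start >= 15) {
        gen_too_long_insn(s);   /* hypothetical #GP path */
    }
    return cpu_ldub_code(env, s->pc++);
}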
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This should be done by all targets and, since commit 53f6672bcf
("gen-icount: use tcg_ctx.tcg_env instead of cpu_env", 2017-06-30),
is causing the NIOS2 target to hang.
This is because the test for "should I exit to the main loop"
was being done with the correct offset to the icount decrementer,
but using TCG temporary 0 (the frame pointer) rather than the
env pointer.
Cc: qemu-stable@nongnu.org
Cc: Marek Vasut <marex@denx.de>
Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
It is used by all PPC targets; we can give the directory its own
Makefile.objs file, and include it directly from target/ppc.
target/s390 can do the same when it starts using it.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The real kernel has TASK_SIZE as 0x7c000000, due to quirks with
a couple of SH parts. But nominally user-space is limited to 2GB.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20170708025030.15845-4-rth@twiddle.net>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
We had a check using TARGET_VIRT_ADDR_SPACE_BITS to make sure
that the allocation coming in from the command-line option was
not too large, but that didn't include target-specific knowledge
about other restrictions on user-space.
Remove several target-specific hacks in linux-user/main.c.
For MIPS and Nios, we can replace them with proper adjustments
to the respective target's TARGET_VIRT_ADDR_SPACE_BITS definition.
For ARM, we had no existing ifdef but I suspect that the current
default value of 0xf7000000 was chosen with this in mind. Define
a workable value in linux-user/arm/, and also document why the
special case is required.
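A minimal sketch of the kind of cap this enables (variable names and the
message are illustrative, not the exact linux-user/main.c code):
/* Reject a reserved_va that cannot fit in the target's virtual address
 * space as now described by TARGET_VIRT_ADDR_SPACE_BITS. */
unsigned long max_reserved_va = (1ul << TARGET_VIRT_ADDR_SPACE_BITS) - 1;
if (reserved_va > max_reserved_va) {
    fprintf(stderr, "Reserved virtual address too big\n");
    exit(EXIT_FAILURE);
}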
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20170708025030.15845-3-rth@twiddle.net>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
The common situation of the SG instruction is that it is
executed from S&NSC memory by a CPU in NS state. That case
is handled by v7m_handle_execute_nsc(). However the instruction
also has defined behaviour in a couple of other cases:
* SG instruction in NS memory (behaves as a NOP)
* SG in S memory but CPU already secure (clears IT bits and
does nothing else)
* SG instruction in v8M without Security Extension (NOP)
These can be implemented in translate.c.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-10-git-send-email-peter.maydell@linaro.org
A few Thumb instructions are always unconditional even inside an
IT block (as opposed to being UNPREDICTABLE if used inside an
IT block): BKPT, the v8M SG instruction, and the A profile
HLT (debug halt) instruction.
This means we need to suppress the jump-over-instruction-on-condfail
code generation (though the IT state still advances as usual and
subsequent insns in the IT block may be conditional).
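A hedged sketch of the suppression (both helper names here are
hypothetical):
/* BKPT, HLT and SG must execute even when the IT condition fails, so
 * only emit the skip-on-condfail branch for other insns. */
if (dc->condexec_mask && !thumb_insn_is_unconditional(dc, insn)) {
    gen_skip_unless_condition(dc);
}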
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-9-git-send-email-peter.maydell@linaro.org
Recent changes have left insn_crosses_page() more complicated
than it needed to be:
* it's only called from thumb_tr_translate_insn() so we know
for certain that we're looking at a Thumb insn
* the caller's check for dc->pc >= dc->next_page_start - 3
means that dc->pc can't possibly be 4 aligned, so there's
no need to check that (the check was partly there to ensure
that we didn't treat an ARM insn as Thumb, I think)
* we now have thumb_insn_is_16bit() which lets us do a precise
check of the length of the next insn, rather than opencoding
an inaccurate check
Simplify it down to just loading the first half of the insn
and calling thumb_insn_is_16bit() on it.
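The simplified function is then roughly (a sketch relying on the
arm_lduw_code() and thumb_insn_is_16bit() helpers mentioned above):
static bool insn_crosses_page(CPUARMState *env, DisasContext *s)
{
    /* Only called within 3 bytes of the end of the page, so a 16-bit
     * insn cannot cross it; a 32-bit insn necessarily does. */
    uint16_t insn = arm_lduw_code(env, s->pc, s->sctlr_b);

    return !thumb_insn_is_16bit(s, insn);
}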
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-8-git-send-email-peter.maydell@linaro.org
Refactor the Thumb decode to do the loads of the instruction words at
the top level rather than only loading the second half of a 32-bit
Thumb insn in the middle of the decode.
This is simple apart from the awkward case of Thumb1, where the
BL/BLX prefix and suffix instructions live in what in Thumb2 is the
32-bit insn space. To handle these we decode enough to identify
whether we're looking at a prefix/suffix that we handle as a 16 bit
insn, or a prefix that we're going to merge with the following suffix
to consider as a 32 bit insn. The translation of the 16 bit cases
then moves from disas_thumb2_insn() to disas_thumb_insn().
The refactoring has the benefit that we don't need to pass the
CPUARMState* down into the decoder code any more, but the major
reason for doing this is that some Thumb instructions must be always
unconditional regardless of the IT state bits, so we need to know the
whole insn before we emit the "skip this insn if the IT bits and cond
state tell us to" code. (The always unconditional insns are BKPT,
HLT and SG; the last of these is 32 bits.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-7-git-send-email-peter.maydell@linaro.org
The code which implements the Thumb1 split BL/BLX instructions
is guarded by a check on "not M or THUMB2". All we really need
to check here is "not THUMB2" (and we assume that elsewhere too,
eg in the ARCH(6T2) test that UNDEFs the Thumb2 insns).
This doesn't change behaviour because all M profile cores
have Thumb2 and so ARM_FEATURE_M implies ARM_FEATURE_THUMB2.
(v6M implements a very restricted subset of Thumb2, but we
can cross that bridge when we get to it with appropriate
feature bits.)
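So the guard reduces to a single feature check (sketch):
/* Thumb1 split BL/BLX handling: only reachable without Thumb2. */
if (!arm_dc_feature(s, ARM_FEATURE_THUMB2)) {
    /* ... decode the 16-bit BL/BLX prefix and suffix here ... */
}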
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-6-git-send-email-peter.maydell@linaro.org
Secure function return happens when a non-secure function has been
called using BLXNS and so has a particular magic LR value (either
0xfefffffe or 0xfeffffff). The function return via BX behaves
specially when the new PC value is this magic value, in the same
way that exception returns are handled.
Adjust our BX excret guards so that they recognize the function
return magic number as well, and perform the function-return
unstacking in do_v7m_exception_exit().
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-5-git-send-email-peter.maydell@linaro.org
Implement the BLXNS instruction, which allows secure code to
call non-secure code.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-4-git-send-email-peter.maydell@linaro.org
Implement the SG instruction, which we emulate 'by hand' in the
exception handling code path.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-3-git-send-email-peter.maydell@linaro.org
Add the M profile secure MMU index values to the switch in
get_a32_user_mem_index() so that LDRT/STRT work correctly
rather than asserting at translate time.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-2-git-send-email-peter.maydell@linaro.org