Replace the uint64 softfloat-specific typedef with uint64_t.
This change was made with
find include fpu target-* -name '*.[ch]' | xargs sed -i -e 's/\buint64\b/uint64_t/g'
together with manual removal of the typedef definition, and
manual undoing of some mis-hits where macro arguments were
being used for token pasting rather than as a type.
Note that the target-mips/kvm.c and target-s390x/kvm.c changes are fixing
code that should not have been using the uint64 type in the first place.
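A hypothetical illustration (not taken from the patch) of the kind of mis-hit
described above: when "uint64" appears as a macro argument used only for token
pasting, it names an identifier suffix rather than a type, so the blanket
rename would change the generated symbol and had to be reverted by hand.

    /* illustrative macros only -- the argument must stay "uint64" here */
    #define xglue(x, y) x ## y
    #define glue(x, y)  xglue(x, y)
    #define HELPER_FN(op, type) glue(glue(op, _), type)
    /* HELPER_FN(cvt, uint64) names the function cvt_uint64; renaming the
     * argument would wrongly turn it into cvt_uint64_t */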
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Acked-by: Leon Alrae <leon.alrae@imgtec.com>
Acked-by: James Hogan <james.hogan@imgtec.com>
Message-id: 1452603315-27030-3-git-send-email-peter.maydell@linaro.org
Only one of the three architectures implementing qmp-dump-guest-memory writes
QEMU notes, and another architecture (arm/aarch64) is coming which won't use
them either. Make the common implementation truly common.
(No functional change.)
Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1452542185-10914-3-git-send-email-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The s390-virtio machine has been removed; remove the associated devices
as well.
hw/s390x/s390-virtio-bus.c and hw/s390x/s390-virtio-bus.h
have been deleted and removed from hw/s390x/Makefile.objs
virtio-size no longer has any meaning for the modern machine
and has been removed from helper.c and cpu.h.
virtio-serial-s390, which belonged to the old machine, is
removed from vl.c.
Signed-off-by: Pierre Morel <pmorel@linux.vnet.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
In some cases, the same message is printed both on stderr and in the log.
Avoid duplicate output in the default case where stderr _is_ the log,
and standardize this to stderr+log where it used to use stdio+log.
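A minimal sketch of the intended pattern, assuming the qemu_log_separate()
helper: the message always goes to stderr, and is written to the log only when
the log is not already going to stderr.

    /* sketch only -- message text illustrative */
    fprintf(stderr, "Unsupported exception: %#x\n", excp);
    if (qemu_log_separate()) {
        qemu_log("Unsupported exception: %#x\n", excp);
    }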
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
On hugetlbfs CMMA will not be useful as every ESSA instruction will trap.
So don't offer CMMA to guests with a hugepages backing.
Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Some targets already had this within their logic, but make sure
it's present for all targets.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Move the target_disas() s390 specifics to the CPUClass::disas_set_info()
hook and delete the #ifdef specific code in disas.c.
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Acked-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>
The cmma reset is per VM, so we don't need a cpu object. We can
directly make use of kvm_state, as it is already available when
the reset is called. By moving the cmma reset into our machine reset
function, we can avoid a manual reset handler.
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Initializing VM crypto in initial cpu reset has multiple problems
1. We call the exact same function #VCPU times, although one time is enough
2. On SIGP initial cpu reset, we exchange the wrapping key while
other VCPUs are running. Bad!
3. It is simply wrong. According to the PoP, a reset happens only during a
clear reset.
So, we have to reset the keys
- on modified clear reset
- on load clear (QEMU reset - via machine reset)
- on qemu start (via machine reset)
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Both s390 machines unconditionally create an ipl device, so no need to
handle the missing case.
Now we can also change s390_ipl_update_diag308() to return void.
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Instead of using magic values when building the machine check
interruption code, add some defines as per chapters 11-14 in the PoP.
This should make it easier to catch problems like the missing vector
register validity bit ("s390x/kvm: Fix vector validity bit in device
machine checks"), and less hassle should we want to generate machine
checks beyond the channel reports we currently support.
Acked-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Device hotplugs trigger a crw machine check. All machine checks
have validity bits for certain register types. With vector support
we also have to claim that vector registers are valid.
This is a band-aid suitable for stable. Long term we should
create the full mcic value dynamically depending on the active
features in the kernel interrupt handler.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
In-kernel ITS emulation on ARM64 will require supplying requester IDs.
These IDs can now be retrieved from the device pointer using the new
pci_requester_id() function.
This patch adds a pci_dev pointer to the KVM GSI routing functions and makes
the callers pass it.
The x86 architecture does not use requester IDs, but hw/i386/kvm/pci-assign.c
is also made to pass the PCI device pointer instead of NULL, for consistency
with the rest of the code.
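A minimal sketch of a call site after this change (signature as described
above; the pci_dev variable is illustrative): the PCI device, rather than
NULL, is handed to the routing code so it can derive the requester ID via
pci_requester_id().

    virq = kvm_irqchip_add_msi_route(kvm_state, msg, pci_dev);
    if (virq < 0) {
        /* fall back to non-routed injection */
    }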
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
Message-Id: <ce081423ba2394a4efc30f30708fca07656bc500.1444916432.git.p.fedin@samsung.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Several devices don't survive object_unref(object_new(T)): they crash
or hang during cleanup, or they leave dangling pointers behind.
This breaks at least device-list-properties, because
qmp_device_list_properties() needs to create a device to find its
properties. Broken in commit f4eb32b "qmp: show QOM properties in
device-list-properties", v2.1. Example reproducer:
$ qemu-system-aarch64 -nodefaults -display none -machine none -S -qmp stdio
{"QMP": {"version": {"qemu": {"micro": 50, "minor": 4, "major": 2}, "package": ""}, "capabilities": []}}
{ "execute": "qmp_capabilities" }
{"return": {}}
{ "execute": "device-list-properties", "arguments": { "typename": "pxa2xx-pcmcia" } }
qemu-system-aarch64: /home/armbru/work/qemu/memory.c:1307: memory_region_finalize: Assertion `((&mr->subregions)->tqh_first == ((void *)0))' failed.
Aborted (core dumped)
[Exit 134 (SIGABRT)]
Unfortunately, I can't fix the problems in these devices right now.
Instead, add DeviceClass member cannot_destroy_with_object_finalize_yet
to mark them:
* Hang during cleanup (didn't debug, so I can't say why):
"realview_pci", "versatile_pci".
* Dangling pointer in cpus: most CPUs, plus "allwinner-a10", "digic",
"fsl,imx25", "fsl,imx31", "xlnx,zynqmp", because they create such
CPUs
* Assert kvm_enabled(): "host-x86_64-cpu", host-i386-cpu",
"host-powerpc64-cpu", "host-embedded-powerpc-cpu",
"host-powerpc-cpu" (the powerpc ones can't currently reach the
assertion, because the CPUs are only registered when KVM is enabled,
but the assertion is arguably in the wrong place all the same)
Make qmp_device_list_properties() fail cleanly when the device is so
marked. This improves device-list-properties from "crashes, hangs or
leaves dangling pointers behind" to "fails". Not a complete fix, just
a better-than-nothing work-around. In the above reproducer,
device-list-properties now fails with "Can't list properties of device
'pxa2xx-pcmcia'".
This also protects -device FOO,help, which uses the same machinery
since commit ef52358 "qdev-monitor: include QOM properties in -device
FOO, help output", v2.2. Example reproducer:
$ qemu-system-aarch64 -machine none -device pxa2xx-pcmcia,help
Before:
qemu-system-aarch64: .../memory.c:1307: memory_region_finalize: Assertion `((&mr->subregions)->tqh_first == ((void *)0))' failed.
After:
Can't list properties of device 'pxa2xx-pcmcia'
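A hedged sketch of the two sides of the work-around (the flag name is taken
from this commit message, the surrounding code is assumed): the device class
is marked, and qmp_device_list_properties() refuses to instantiate it.

    static void broken_device_class_init(ObjectClass *oc, void *data)
    {
        DeviceClass *dc = DEVICE_CLASS(oc);

        /* cleanup after object_new() crashes or leaves dangling pointers */
        dc->cannot_destroy_with_object_finalize_yet = true;
    }

    /* ...and in qmp_device_list_properties(): */
    if (DEVICE_CLASS(klass)->cannot_destroy_with_object_finalize_yet) {
        error_setg(errp, "Can't list properties of device '%s'", typename);
        return NULL;
    }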
Cc: "Andreas Färber" <afaerber@suse.de>
Cc: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Anthony Green <green@moxielogic.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Blue Swirl <blauwirbel@gmail.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Jia Liu <proljc@gmail.com>
Cc: Leon Alrae <leon.alrae@imgtec.com>
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Walle <michael@walle.cc>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: qemu-ppc@nongnu.org
Cc: qemu-stable@nongnu.org
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <1443689999-12182-10-git-send-email-armbru@redhat.com>
Merge remote-tracking branch 'remotes/rth/tags/pull-tcg-20151007' into staging
Do away with TB retranslation
# gpg: Signature made Wed 07 Oct 2015 10:42:08 BST using RSA key ID 4DD0279B
# gpg: Good signature from "Richard Henderson <rth7680@gmail.com>"
# gpg: aka "Richard Henderson <rth@redhat.com>"
# gpg: aka "Richard Henderson <rth@twiddle.net>"
* remotes/rth/tags/pull-tcg-20151007: (26 commits)
tcg: Adjust CODE_GEN_AVG_BLOCK_SIZE
tcg: Check for overflow via highwater mark
tcg: Allocate a guard page after code_gen_buffer
tcg: Emit prologue to the beginning of code_gen_buffer
tcg: Remove tcg_gen_code_search_pc
tcg: Remove gen_intermediate_code_pc
tcg: Save insn data and use it in cpu_restore_state_from_tb
tcg: Pass data argument to restore_state_to_opc
tcg: Add TCG_MAX_INSNS
target-*: Drop cpu_gen_code define
tcg: Merge cpu_gen_code into tb_gen_code
target-sparc: Add npc state to insn_start
target-sparc: Remove gen_opc_jump_pc
target-sparc: Split out gen_branch_n
target-sparc: Tidy gen_branch_a interface
target-cris: Mirror gen_opc_pc into insn_start
target-sh4: Add flags state to insn_start
target-s390x: Add cc_op state to insn_start
target-mips: Add delayed branch state to insn_start
target-i386: Add cc_op state to insn_start
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
It is no longer used, so tidy up everything reached by it.
This includes the gen_opc_* arrays, the search_pc parameter
and the inline gen_intermediate_code_internal functions.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The gen_opc_* arrays are already redundant with the data stored in
the insn_start arguments. Transition restore_state_to_opc to use
data from the latter.
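A minimal sketch of what a target hook looks like after this change
(s390x-flavoured and simplified; the exact fields are assumed): the data[]
slots mirror what the target recorded with tcg_gen_insn_start().

    void restore_state_to_opc(CPUS390XState *env, TranslationBlock *tb,
                              target_ulong *data)
    {
        env->psw.addr = data[0];
        env->cc_op = data[1];
    }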
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Adjust all translators to respect it.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This symbol no longer exists.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reduce the boilerplate required for each target. At the same time,
move the test for breakpoint after calling tcg_gen_insn_start.
Note that arm and aarch64 do not use cpu_breakpoint_test, but still
move the inline test down after tcg_gen_insn_start.
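A rough sketch of the per-instruction loop shape after this change (names
generic; gen_exception_debug is a placeholder): the insn_start marker is
emitted first, then the breakpoint test runs.

    tcg_gen_insn_start(dc->pc);
    if (unlikely(cpu_breakpoint_test(cs, dc->pc, BP_ANY))) {
        gen_exception_debug(dc);   /* hypothetical per-target helper */
        break;
    }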
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This does tidy the icount test common to all targets.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
While we're at it, emit the opcode adjacent to where we currently
record data for search_pc. This puts gen_io_start et al on the
"correct" side of the marker.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
With an eye toward making it mandatory.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
According to the PoP:
"Subsystem reset operates only on those elements in the configuration
which are not CPUs".
As this is what we actually do, let's simply rename the function.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Message-Id: <1443689387-34473-6-git-send-email-jfrei@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Let's expose some virtual/fake registers as virtualization specific
registers.
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Message-Id: <1443689387-34473-3-git-send-email-jfrei@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
The bootloader can just pass EM_S390 directly, as that
is architecture specific code.
This removes another architecture specific definition from the global
namespace.
Cc: Richard Henderson <rth@twiddle.net>
Cc: Alexander Graf <agraf@suse.de>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Acked-By: Riku Voipio <riku.voipio@linaro.org>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
* Support for jemalloc
* qemu_mutex_lock_iothread "No such process" fix
* cutils: qemu_strto* wrappers
* iohandler.c simplification
* Many other fixes and misc patches.
And some MTTCG work (with Emilio's fixes squashed):
* Signal-free TCG kick
* Removing spinlock in favor of QemuMutex
* User-mode emulation multi-threading fixes/docs
# gpg: Signature made Thu 10 Sep 2015 09:03:07 BST using RSA key ID 78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
* remotes/bonzini/tags/for-upstream: (44 commits)
cutils: work around platform differences in strto{l,ul,ll,ull}
cpu-exec: fix lock hierarchy for user-mode emulation
exec: make mmap_lock/mmap_unlock globally available
tcg: comment on which functions have to be called with mmap_lock held
tcg: add memory barriers in page_find_alloc accesses
remove unused spinlock.
replace spinlock by QemuMutex.
cpus: remove tcg_halt_cond and tcg_cpu_thread globals
cpus: protect work list with work_mutex
scripts/dump-guest-memory.py: fix after RAMBlock change
configure: Add support for jemalloc
add macro file for coccinelle
configure: factor out adding disas configure
vhost-scsi: fix wrong vhost-scsi firmware path
checkpatch: remove tests that are not relevant outside the kernel
checkpatch: adapt some tests to QEMU
CODING_STYLE: update mixed declaration rules
qmp: Add example usage of strto*l() qemu wrapper
cutils: Add qemu_strtoull() wrapper
cutils: Add qemu_strtoll() wrapper
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is set to true when the index is for an instruction fetch
translation.
The core get_page_addr_code() sets it, as do the SOFTMMU_CODE_ACCESS
accessors.
All targets ignore it for now, and all other callers pass "false".
This will allow targets who wish to split the mmu index between
instruction and data accesses to do so. A subsequent patch will
do just that for PowerPC.
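A minimal sketch of how a target can use the new argument, along the lines of
the PowerPC follow-up mentioned above (field names assumed):

    static inline int cpu_mmu_index(CPUPPCState *env, bool ifetch)
    {
        return ifetch ? env->immu_idx : env->dmmu_idx;
    }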
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Message-Id: <1439796853-4410-2-git-send-email-benh@kernel.crashing.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
There are pieces of guest panic handling code
that can be shared in one generic function.
This code is replaced by a call to qemu_system_guest_panicked().
Signed-off-by: Andrey Smetanin <asmetanin@virtuozzo.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Andreas Färber <afaerber@suse.de>
Message-Id: <1435924905-8926-10-git-send-email-den@openvz.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Let's support reading and writing of control registers for kvm and tcg.
We have to take care of flushing the tlb (tcg) and pushing the changed
registers into kvm.
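A minimal sketch of the TCG write path described above (variable names and the
API of the time assumed): control registers affect address translation, so the
TLB is flushed after a change.

    env->cregs[regno] = val;    /* regno/val illustrative */
    tlb_flush(CPU(s390_env_get_cpu(env)), 1);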
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
s390 guest initialization is modified to make use of new s390-storage-keys
device. Old code that globally allocated storage key array is removed.
The new device enables storage key access for kvm guests.
Cache storage key QOM objects in frequently used helper functions to avoid a
performance hit every time we use one of these functions.
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Remove unneeded usages of ENV_GET_CPU() by converting the APIs to use
CPUState pointers and retrieving the env_ptr as minimally needed.
Scripted conversion for target-* change:
for I in target-*/cpu.h; do
sed -i \
's/\(^int cpu_[^_]*_exec(\)[^ ][^ ]* \*s);$/\1CPUState *cpu);/' \
$I;
done
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
The callers of this function (most of them in target-foo/cpu.c) all
have the cpu pointer handy. Just pass it to avoid an ENV_GET_CPU() from
core code (in exec.c).
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Michael Walle <michael@walle.cc>
Cc: Leon Alrae <leon.alrae@imgtec.com>
Cc: Anthony Green <green@moxielogic.com>
Cc: Jia Liu <proljc@gmail.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Blue Swirl <blauwirbel@gmail.com>
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Add an Error argument to cpu_exec_init() to let users collect the
error. This is in preparation to change the CPU enumeration logic
in cpu_exec_init(). With the new enumeration logic, cpu_exec_init()
can fail if cpu_index values corresponding to max_cpus have already
been handed out.
Since all current callers of cpu_exec_init() are from instance_init,
use error_abort Error argument to abort in case of an error.
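A minimal sketch of a call site after this change (surrounding code assumed;
the first argument follows the neighbouring cpu_exec_init() signature changes):
instance_init cannot report failure, so &error_abort is passed.

    static void foo_cpu_initfn(Object *obj)
    {
        CPUState *cs = CPU(obj);

        cpu_exec_init(cs, &error_abort);
    }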
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
current_number is shifted left by more than 32 bits, so we can't use a
simple int. Similarly, use an int64_t type for the input binary value,
so as not to get the -2^31 case wrong. Finally, don't initialize shift to 4;
it's already done in the for loop.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
A break is missing in the EXECUTE instruction, when executing the
TRANSLATE AND TEST instruction.
Reported-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-By: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
The MOVE LONG instruction should pad the destination operand with the
byte from bit positions 32-39 of the source length (r2 + 1), not with
the same byte in the source address.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
* more of Peter Crosthwaite's multiarch preparation patches
* unlocked MMIO support in KVM
* support for compilation with ICC
# gpg: Signature made Mon Jul 6 13:59:20 2015 BST using RSA key ID 78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* remotes/bonzini/tags/for-upstream:
exec: skip MMIO regions correctly in cpu_physical_memory_write_rom_internal
Stop including qemu-common.h in memory.h
kvm: Switch to unlocked MMIO
acpi: mark PMTIMER as unlocked
kvm: Switch to unlocked PIO
kvm: First step to push iothread lock out of inner run loop
memory: let address_space_rw/ld*/st* run outside the BQL
exec: pull qemu_flush_coalesced_mmio_buffer() into address_space_rw/ld*/st*
memory: Add global-locking property to memory regions
main-loop: introduce qemu_mutex_iothread_locked
main-loop: use qemu_mutex_lock_iothread consistently
Fix irq route entries exceeding KVM_MAX_IRQ_ROUTES
cpu-defs: Move out TB_JMP defines
include/exec: Move tb hash functions out
include/exec: Move standard exceptions to cpu-all.h
cpu-defs: Move CPU_TEMP_BUF_NLONGS to tcg
memory_mapping: Rework cpu related includes
cutils: allow compilation with icc
qemu-common: add VEC_OR macro
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Including qemu-common.h from other header files is generally a bad
idea, because it means it's very easy to end up with a circular
dependency. For instance, if we wanted to include memory.h from
qom/cpu.h we'd end up with this loop:
memory.h -> qemu-common.h -> cpu.h -> cpu-qom.h -> qom/cpu.h -> memory.h
Remove the include from memory.h. This requires us to fix up a few
other files which were inadvertently getting declarations indirectly
through memory.h.
The biggest change is splitting the fprintf_function typedef out
into its own header so other headers can get at it without having
to include qemu-common.h.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <1435933104-15216-1-git-send-email-peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Whenever we touch the access control registers, we have to make sure that
the values will make it into kvm. Otherwise the change will simply be lost.
When synchronizing qemu and kvm, a normal KVM_PUT_RUNTIME_STATE does not take
care of these registers. Let's simply trigger a KVM_PUT_FULL_STATE sync,
so the values will directly be written to kvm. The performance overhead can
be ignored and this is much cleaner than manually writing these registers to kvm
via our two supported ways.
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
This opens the path to getting rid of the iothread lock on vmexits in KVM
mode. On x86, the in-kernel irqchip has to be used because we otherwise
need to synchronize APIC and other per-cpu state accesses that could be
changed concurrently.
Regarding pre/post-run callbacks, s390x and ARM should be fine without
specific locking as the callbacks are empty. MIPS and POWER require
locking for the pre-run callback.
For the handle_exit callback, it is non-empty in x86, POWER and s390.
Some POWER cases could do without the locking, but it is left in
place for now.
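A rough sketch of the structure this enables in the KVM run loop (simplified,
not the literal patch): the iothread lock is dropped across KVM_RUN and
re-taken only when an exit handler needs it.

    qemu_mutex_unlock_iothread();
    run_ret = kvm_vcpu_ioctl(cpu, KVM_RUN, 0);
    qemu_mutex_lock_iothread();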
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1434646046-27150-7-git-send-email-pbonzini@redhat.com>
In particular, don't include it into headers.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Luiz Capitulino <lcapitulino@redhat.com>
disas does not need to access the CPU env for any reason. Change the
APIs to accept CPU pointers instead. A small change pattern needs to be
applied to all target translate.c files. This brings us closer to making
disas.o a common-obj and less architecture specific in general.
Cc: Richard Henderson <rth@twiddle.net>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Michael Walle <michael@walle.cc>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Leon Alrae <leon.alrae@imgtec.com>
Cc: Jia Liu <proljc@gmail.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Acked-by: Luiz Capitulino <lcapitulino@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
This patch adds support for the PER Breaking-Event-Address register. Like
real hardware, it saves the current PSW address when the PSW address is
changed by an instruction. We have to take care of the optimizations QEMU
does: a branch to the next instruction is still a branch.
This register is copied to low core memory when a program exception
happens.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
For the instruction-fetch nullification event, we just reuse the
existing instruction-fetch code and trigger the exception immediately
in that case.
There is no need to save the CPU state in the TCG code as it has been
saved by the previous instruction before calling the per_check_exception
helper.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
This PER event happens each time the STURA or STURG instructions are
used. As they use helpers, we can just save the event in the PER code
there, if enabled.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
For the PER storage-alteration event we can use the QEMU watchpoint
infrastructure. When PER is enabled or a PER control register is changed, we
enable the corresponding watchpoints. When a watchpoint fires, we can
save the event. Unfortunately the current code does not provide the
address space used to trigger the watchpoint. For now we assume it comes
from the default ASC.
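A hedged sketch of arming such a watchpoint (flag choice assumed; wrap-around
of the PER range is not handled here), assuming CR10 and CR11 hold the PER
starting and ending addresses:

    cpu_watchpoint_insert(cs, env->cregs[10],
                          env->cregs[11] - env->cregs[10] + 1,
                          BP_CPU | BP_MEM_WRITE, NULL);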
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
For the PER instruction-fetch event, we can't use the QEMU breakpoint
infrastructure, as it triggers for a single address and not a full
address range, and as it actually stops before the instruction and
not after.
We therefore call a helper with the just-fetched instruction address,
which checks whether the address is within the PER address range. If that
is the case, an event is recorded and will be signaled through an
exception.
Note that we implement the PER-3 behaviour here, that is, an invalid
opcode is not considered an instruction fetch. Without PER-3 this
behaviour is undefined.
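A hedged sketch of such a helper (helper and field names assumed, building on
the get_per_in_range()/get_per_atmid() helpers described further below):

    void HELPER(per_ifetch)(CPUS390XState *env, uint64_t addr)
    {
        if (get_per_in_range(env, addr)) {
            env->per_address = addr;
            env->per_perc_atmid = PER_CODE_EVENT_IFETCH | get_per_atmid(env);
        }
    }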
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
For the PER successful-branching event support, we can't rely on any
QEMU infrastructure. We therefore call a helper in all places where
a branch can be taken. We have to pay attention to the branch-to-next
case, as it's still a taken branch.
We don't need to care about the cases using goto_tb, as we have disabled
them in the previous patch.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch adds basic support for generating PER exceptions. It adds two
fields to the cpu structure to record the PER address and the PER
code & ATMID values. When an exception is triggered and a PER event is
pending, the two PER values are copied to the lowcore area.
At the end of an instruction, a helper checks for a possible
pending PER event and triggers an exception in that case. For that to
work with branches, we need to disable TB chaining when PER is
activated. Fortunately it's already in the TB flags.
Finally in case of a SERVICE CALL exception, we need to trigger the PER
exception immediately after.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
This function checks if an address is in between the PER starting
address and the PER ending address, taking care of a possible
wrap-around of the address range.
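A minimal sketch of such a check, assuming CR10 and CR11 hold the PER starting
and ending addresses and that the range may wrap around the address space:

    static inline bool get_per_in_range(CPUS390XState *env, uint64_t addr)
    {
        uint64_t start = env->cregs[10];
        uint64_t end = env->cregs[11];

        if (start <= end) {
            return start <= addr && addr <= end;
        }
        return addr <= end || addr >= start;   /* wrapped range */
    }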
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
This function returns the ATMID field that is stored in the
per_perc_atmid lowcore entry.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
mvc_fast_memmove is bypassing the softmmu functions, getting the
physical source and destination addresses using the mmu_translate
function and accessing the corresponding physical memory. This
prevents watchpoints from working correctly.
Instead use the tlb_vaddr_to_host function to get the host addresses
corresponding to the guest source and destination addresses through the
softmmu code, and fall back to the byte-level code in case the
corresponding addresses are not in the QEMU TLB or are being examined through
a watchpoint. As a bonus it works even for areas crossing pages, by
splitting the area into chunks contained in a single page, bringing some
performance improvements. We can therefore remove the 8-byte
loads/stores method, as it is now quite unlikely to be used.
At the same time change the name of the function to fast_memmove as it's
not specific to mvc and use the same argument order as the C memmove
function.
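A simplified sketch of the approach (per-page chunking and the watchpoint
fallback elided; access-type constants assumed): when both guest pages are
already in the TLB, the copy can be done directly on the host addresses.

    uint8_t *dest_p = tlb_vaddr_to_host(env, dest, MMU_DATA_STORE, mmu_idx);
    uint8_t *src_p = tlb_vaddr_to_host(env, src, MMU_DATA_LOAD, mmu_idx);

    if (dest_p && src_p) {
        memmove(dest_p, src_p, len);
    } else {
        /* fall back to byte-wise cpu_ldub/cpu_stb accesses */
    }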
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
mvc_fast_memset is bypassing the softmmu functions, getting the
physical address using the mmu_translate function and accessing the
corresponding physical memory. This prevents watchpoints from working
correctly.
Instead use the tlb_vaddr_to_host function to get the host address
corresponding to the guest address through the softmmu code, and fall back
to the byte-level code in case the corresponding address is not in the
QEMU TLB or is being examined through a watchpoint. As a bonus it works
even for areas crossing pages, by splitting the area into chunks contained
in a single page, bringing some performance improvements.
At the same time change the name of the function to fast_memset as it's
not specific to mvc and use the same argument order as the C memset
function.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch adds a function to adjust the length of a transfer so that
it doesn't cross a page boundary in softmmu mode. It does nothing in
user mode.
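A sketch of such a helper, assuming it looks along these lines: in softmmu
mode the length is clamped to the end of the current guest page, in user mode
it is returned unchanged.

    static inline uint32_t adj_len_to_page(uint32_t len, uint64_t addr)
    {
    #ifndef CONFIG_USER_ONLY
        if ((addr & ~TARGET_PAGE_MASK) + len > TARGET_PAGE_SIZE) {
            return -(addr | TARGET_PAGE_MASK);   /* bytes left in this page */
        }
    #endif
        return len;
    }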
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
The code handling the I/O instructions for KVM decodes the instruction
itself. In TCG mode also pass the full instruction word to the helpers.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
DIAG IPL is already implemented for KVM, but not wired from TCG. For
that change the format of the instruction so that we can get R1 and R3
numbers in addition to the function code.
The diag function can change plenty of things, including CC, so we
should enter with a static CC. Also it doesn't set the value of general
register 2 to 0 as in the current code. We also need to exit the CPU
loop after a reset, which means a new PSW.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
The s390_cpu_initial_reset function zeroes a big part of the CPU state
structure, including CPU_COMMON, and thus the QEMU TLB structure. As
they should not be initialized with zeroes only, we need to call
tlb_flush to initialize them correctly.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
env->io_index[] should be set to -1 during CPU reset to mark the
I/O interrupt queue as empty.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
env->ext_index should be initialized to -1 to mark the external
interrupt queue as empty. This should not be done in s390_cpu_initfn
as all the interrupt fields are later reset to 0 by the memset in
s390_cpu_initial_reset or s390_cpu_full_reset. Move the initialization
there.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
In TCG mode we should store the CC value in env->cc_op. However, do it
unconditionally because:
- the tcg_enabled function is not inlined
- it's probably faster to always store the value, especially given it
  is likely in the same cache line as env->psw.mask.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
This removes the corresponding error messages in TCG mode, and allows us to
simplify the s390_assign_subch_ioeventfd() function.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
The ioinst_schib_valid function gets a SCHIB in guest endianness, so we
should byteswap the fields we access.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
The I/O-Interruption Subclass field corresponds to bits 2 to 5 (BE
notation) of the Interruption-Identification Word. The value should
be shifted by 27 instead of 24.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
We create optional sections with this patch. But we already have
optional subsections. Instead of having two mechanisms that do the same
thing, we can just generalize it; a sketch of the resulting shape follows
the list below.
For subsections we just change:
- Add a needed function to VMStateDescription
- Remove VMStateSubsection (after removal of the needed function
it is just a VMStateDescription)
- Adjust the whole tree, moving the needed function to the corresponding
VMStateDescription
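A minimal sketch of the resulting shape (device and field names illustrative):
a subsection is now just a VMStateDescription with a .needed callback.

    static bool timer_needed(void *opaque)
    {
        MyDeviceState *s = opaque;          /* hypothetical device state */
        return s->timer_enabled;
    }

    static const VMStateDescription vmstate_timer = {
        .name = "mydevice/timer",
        .version_id = 1,
        .minimum_version_id = 1,
        .needed = timer_needed,
        .fields = (VMStateField[]) {
            VMSTATE_UINT32(timer_count, MyDeviceState),
            VMSTATE_END_OF_LIST()
        }
    };

It is then listed directly in the parent's NULL-terminated .subsections array
of VMStateDescription pointers.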
Signed-off-by: Juan Quintela <quintela@redhat.com>
Intercept the diag288 requests from kvm guests, and hand the
requested command to the diag288 watchdog device for further
handling.
Signed-off-by: Xu Wang <gesaint@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
commit 46c804def4 ("s390x: move fpu regs into a subsection
of the vmstate") moved the fprs into a subsection and bumped
the version number. This will allow us to not transfer fprs in the
future if necessary. Add a comment to mark the "return true"
as intentional.
CC: Juan Quintela <quintela@redhat.com>
CC: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <1433758884-2997-1-git-send-email-borntraeger@de.ibm.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
We allocate ram_size / PAGE_SIZE storage keys, so we need to make sure that
we only access that many. Unfortunately the code can overrun this array by
one, potentially overwriting unrelated memory.
Fix it by limiting storage keys to their scope.
Signed-off-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
The MVC instruction and the memmove C function do not have the same
semantic when memory areas overlap:
MVC: When the operands overlap, the result is obtained as if the
operands were processed one byte at a time and each result byte were
stored immediately after fetching the necessary operand byte.
memmove: Copying takes place as though the bytes in src are first copied
into a temporary array that does not overlap src or dest, and the bytes
are then copied from the temporary array to dest.
The behaviour is therefore the same when the destination is at a lower
address than the source, but not in the other case. This is actually a
trick for propagating a value to an area. While the current code detects
that and calls memset in that case, it only does so for 1-byte values. This
trick can be and is used for propagating two or more bytes to an area.
In the softmmu case, the call to mvc_fast_memmove is correct as the
above tests verify that source and destination are each within a page,
and both in a different page. The part doing the move 8 bytes by 8 bytes
is wrong and we need to check that if the source and destination
overlap, they do with a distance of minimum 8 bytes before copying 8
bytes at a time.
In the user code, we should check that the destination is at a
lower address than the source, or that the end of the source is at a lower
address than the destination, before calling memmove. In the opposite
case we fall back to the same code as the softmmu one. Note that l
represents (length - 1).
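A minimal sketch of the user-mode check described above (l is length - 1;
g2h() maps a guest address to host memory in user mode; exact code assumed):

    if (dest < src || src + l < dest) {
        memmove(g2h(dest), g2h(src), l + 1);
    } else {
        /* forward overlap: propagate byte by byte, as MVC requires */
        for (i = 0; i <= l; i++) {
            cpu_stb_data(env, dest + i, cpu_ldub_data(env, src + i));
        }
    }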
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
The mvcp and mvcs helpers get access to the physical memory by a call to
mmu_translate for the virtual to real conversion and then using ldb_phys
and stb_phys to physically access the data. In practice this is quite
slow because it bypasses the QEMU softmmu TLB and because stb_phys calls
try to invalidate the corresponding memory for each access.
Instead use cpu_ldb_{primary,secondary} for the loads and
cpu_stb_{primary,secondary} for the stores. Ideally this should be
further optimized by a call to memcpy, but that already improves the
boot time of a guest by a factor of 1.8.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
s390_cpu_handle_mmu_fault currently looks at the current ASC mode
defined in the PSW mask instead of the MMU index. This prevents easily
emulating instructions that use a specific ASC mode. Fix that by using the
MMU index converted back to ASC using the just-added cpu_mmu_idx_to_asc
function.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
Use constants to define the MMU indexes, and add a function to do
the reverse conversion of cpu_mmu_index.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
Besides RISBHG and RISBLG, none of the high-word instructions are
implemented. Fix that.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
At the same time move the trap code from op_ct into gen_trap and use it
for all new functions. The value needs to be stored back to the register
before the exception, but also before the brcond (as we don't use
temp locals). That's why we can't use the wout helper.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
RISBGN is the same as RISBG, but without setting the condition code.
CLT and CLGT are the same as CLRT and CLGRT, but using memory for the
second operand.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
This completes the floating point support sign handling facility.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
It is part of the basic zArchitecture instructions.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
It is part of the basic zArchitecture instructions. Allow it to be called
from EXECUTE.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
This is needed to pass the gcc.c-torture/execute/ieee/20010114-2.c test
in the gcc testsuite.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
It belongs to the DFP rounding facility.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
STORE CLOCK FAST should be in the SCF facility.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
Change to match the PoP. In practice both formats RIL-a and RIL-b have
the same fields. They differ in the way we decode the fields, and it's
done correctly in QEMU.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
The COMPARE LOGICAL IMMEDIATE AND TRAP instruction should compare the
numbers as unsigned, as its name implies.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
When an operation code is not recognized (i.e. an invalid instruction), an
operation exception should be generated instead of a specification
exception. The latter is for valid opcodes with invalid operands or
modifiers.
This gives very basic GDB support in the guest, as it uses the invalid
opcode 0x0001 to generate a trap.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
This completes the general-instructions-extension facility, so enable it.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
[agraf: remove facility bit]
Signed-off-by: Alexander Graf <agraf@suse.de>
LY is part of the long-displacement facility.
RISBHG and RISBLG are part of the high-word facility.
STCMH is part of the z/Architecture.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
The s390x floating point unit detects tininess before rounding, so set
the softfloat fp_status up appropriately.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
LOAD LENGTHENED and LOAD ROUNDED are considered as FP operations and
thus need to convert input sNaN into corresponding qNaN.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
The cpu_mmu_index function wrongly looks at the PSW P bit to determine the
MMU index, while this bit actually only controls the use of privileged
instructions. The addressing mode is detected by looking at the PSW ASC
bits instead.
This used to work more or less correctly up to kernel 3.6 as the kernel
was running in primary space and userland in secondary space. Since
kernel 3.7 the default is to run the kernel in home space and userland
in primary space. While the current QEMU code seems to work, it opens some
security issues, like accessing the lowcore memory in R/W mode from a
userspace process once it has been accessed by the kernel (it is then
cached by the QEMU TLB).
At the same time change the MMU_USER_IDX value so that it matches the
value used in recent kernels.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
runtime_exception computes the psw.addr value using the actual exception
address and the instruction length computed by calling the get_ilen
function. However, as explained in the comment above the get_ilen code, it
returns the actual instruction length and not the ILC. Therefore there is no
need to multiply the value by 2.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>