Commit Graph

4542 Commits

Author SHA1 Message Date
Vitaly Kuznetsov
30d6ff662d i386/kvm: add NoNonArchitecturalCoreSharing Hyper-V enlightenment
Hyper-V TLFS specifies this enlightenment as:
"NoNonArchitecturalCoreSharing - Indicates that a virtual processor will never
share a physical core with another virtual processor, except for virtual
processors that are reported as sibling SMT threads. This can be used as an
optimization to avoid the performance overhead of STIBP".

However, STIBP is not the only implication. It was found that Hyper-V on
KVM doesn't pass MD_CLEAR bit to its guests if it doesn't see
NoNonArchitecturalCoreSharing bit.

KVM reports NoNonArchitecturalCoreSharing in KVM_GET_SUPPORTED_HV_CPUID to
indicate that SMT on the host is impossible (not supported or forcefully
disabled).

Implement NoNonArchitecturalCoreSharing support in QEMU as a tristate
(example invocation below):
'off' - the feature is disabled (default).
'on' - the feature is enabled. This is only safe if vCPUs are properly
 pinned and the correct topology is exposed. As CPU pinning is done outside
 of QEMU, the enablement decision will be made at a higher level.
'auto' - copy the KVM setting. As SMT settings on the source and destination
 hosts may differ during live migration, this requires us to add a migration
 blocker.
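
For illustration, assuming the property ends up being exposed on the CPU as
'hv-no-nonarch-coresharing' (the name used by this series; treat it as an
assumption), a guest with properly pinned vCPUs could be started as:

    qemu-system-x86_64 -machine accel=kvm \
      -cpu host,hv-no-nonarch-coresharing=auto \
      -smp 4,sockets=1,cores=2,threads=2

With 'auto', QEMU copies the KVM setting and the migration blocker described
above is installed.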

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20191018163908.10246-1-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-10-22 09:38:42 +02:00
Mario Smarduch
73284563dc target/i386: log MCE guest and host addresses
This patch logs MCE AO and AR messages that are injected into the guest or
taken by QEMU itself. We print the QEMU address for guest MCEs; this helps
on hypervisors that have another source of MCE logging (such as mcelog) when
entries go missing there.

For example we found these QEMU logs:

September 26th 2019, 17:36:02.309        Droplet-153258224: Guest MCE Memory Error at qemu addr 0x7f8ce14f5000 and guest 3d6f5000 addr of type BUS_MCEERR_AR injected   qemu-system-x86_64      amsN    ams3nodeNNNN

September 27th 2019, 06:25:03.234        Droplet-153258224: Guest MCE Memory Error at qemu addr 0x7f8ce14f5000 and guest 3d6f5000 addr of type BUS_MCEERR_AR injected   qemu-system-x86_64      amsN    ams3nodeNNNN

The first log had a corresponding mcelog entry, the second didn't (due to
logging thresholds), but from the second entry we can infer the same PA and
MCE type.

Signed-off-by: Mario Smarduch <msmarduch@digitalocean.com>
Message-Id: <20191009164459.8209-3-msmarduch@digitalocean.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-10-22 09:38:38 +02:00
David Hildenbrand
de60a92ea7 s390x/kvm: Set default cpu model for all machine classes
We have to set the default model of all machine classes, not just for
the active one. Otherwise, "query-machines" will indicate the wrong
CPU model ("qemu-s390x-cpu" instead of "host-s390x-cpu") as
"default-cpu-type".

Doing a
    {"execute":"query-machines"}
under KVM now results in
    {"return": [
        {
            "hotpluggable-cpus": true,
            "name": "s390-ccw-virtio-4.0",
            "numa-mem-supported": false,
            "default-cpu-type": "host-s390x-cpu",
            "cpu-max": 248,
            "deprecated": false},
        {
            "hotpluggable-cpus": true,
            "name": "s390-ccw-virtio-2.7",
            "numa-mem-supported": false,
            "default-cpu-type": "host-s390x-cpu",
            "cpu-max": 248,
            "deprecated": false
        } ...

Libvirt probes all machines via "-machine none,accel=kvm:tcg" and will
currently see the wrong CPU model under KVM.

Reported-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Fixes: b6805e127c ("s390x: use generic cpu_model parsing")
Cc: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20191021100515.6978-1-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-10-21 18:03:08 +02:00
David Hildenbrand
38ad4fa3de s390x/tcg: Fix VECTOR SUBTRACT WITH BORROW COMPUTE BORROW INDICATION
The numbers are unsigned, so the computation is wrong. "Each operand is
treated as an unsigned binary integer".
Let's implement it as given in the PoP:

"A subtraction is performed by adding the contents of the second operand
 with the bitwise complement of the third operand along with a borrow
 indication from the rightmost bit of the fourth operand."

Reuse gen_accc2_i64().
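
As a plain-C illustration of that sentence (a minimal sketch of the
arithmetic, not the TCG code behind gen_accc2_i64()):

    static uint64_t sub_with_borrow64(uint64_t op2, uint64_t op3,
                                      uint64_t borrow_in, uint64_t *borrow_out)
    {
        /* subtract by adding the bitwise complement of op3 plus the incoming
         * borrow indication; the carry out of the leftmost bit is the new
         * borrow indication (1 means "no borrow occurred") */
        unsigned __int128 t = (unsigned __int128)op2 + (uint64_t)~op3
                              + (borrow_in & 1);
        *borrow_out = (uint64_t)(t >> 64);
        return (uint64_t)t;
    }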

Fixes: bc725e6515 ("s390x/tcg: Implement VECTOR SUBTRACT WITH BORROW COMPUTE BORROW INDICATION")
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20191021085715.3797-7-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-10-21 17:34:43 +02:00
David Hildenbrand
2cb8a68d37 s390x/tcg: Fix VECTOR SUBTRACT WITH BORROW INDICATION
Testing this, something appears to be messed up. We are dealing with
unsigned numbers. "Each operand is treated as an unsigned binary integer."
Let's just implement it as written in the PoP:

"A subtraction is performed by adding the contents of
 the second operand with the bitwise complement of
 the third operand along with a borrow indication from
 the rightmost bit position of the fourth operand and
 the result is placed in the first operand."

We can reuse gen_ac2_i64().

Fixes: 48390a7c27 ("s390x/tcg: Implement VECTOR SUBTRACT WITH BORROW INDICATION")
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20191021085715.3797-6-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-10-21 17:34:12 +02:00
David Hildenbrand
23e797749f s390x/tcg: Fix VECTOR SUBTRACT COMPUTE BORROW INDICATION
Looks like my idea of what a "borrow" is was wrong. The PoP says:

 "If the resulting subtraction results in a carry out of bit zero, a value
 of one is placed in the corresponding element of the first operand;
 otherwise, a value of zero is placed in the corresponding element"

As clarified by Richard, all we have to do is invert the result.

Fixes: 1ee2d7ba72 ("s390x/tcg: Implement VECTOR SUBTRACT COMPUTE BORROW INDICATION")
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20191021085715.3797-5-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-10-21 17:33:29 +02:00
David Hildenbrand
b57b336876 s390x/tcg: Fix VECTOR SHIFT RIGHT ARITHMETIC BY BYTE
We forgot to propagate the highest bit across the high doubleword in
two cases (shift >= 64).
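
A minimal C sketch of the intended behaviour for a 128-bit value held as two
64-bit doublewords (illustrative only, not the TCG code):

    static void asr128(uint64_t *hi, uint64_t *lo, unsigned shift) /* shift < 128 */
    {
        uint64_t sign = (uint64_t)((int64_t)*hi >> 63);   /* all ones if negative */

        if (shift == 0) {
            return;
        } else if (shift < 64) {
            *lo = (*lo >> shift) | (*hi << (64 - shift));
            *hi = (uint64_t)((int64_t)*hi >> shift);
        } else {
            /* shift >= 64: the sign must be propagated across the whole
             * high doubleword -- the case this commit fixes */
            *lo = (uint64_t)((int64_t)*hi >> (shift - 64));
            *hi = sign;
        }
    }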

Fixes: 5f724887e3 ("s390x/tcg: Implement VECTOR SHIFT RIGHT ARITHMETIC")
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20191021085715.3797-4-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-10-21 17:33:13 +02:00
David Hildenbrand
8b95251947 s390x/tcg: Fix VECTOR MULTIPLY AND ADD *
We missed that we always read a "double-wide even-odd element
pair of the fourth operand". Fix it in all four variants.

Fixes: 1b430aec41 ("s390x/tcg: Implement VECTOR MULTIPLY AND ADD *")
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20191021085715.3797-3-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-10-21 17:33:02 +02:00
David Hildenbrand
49a7ce4e03 s390x/tcg: Fix VECTOR MULTIPLY LOGICAL ODD
We have to read from odd offsets.

Fixes: 2bf3ee38f1 ("s390x/tcg: Implement VECTOR MULTIPLY *")
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20191021085715.3797-2-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-10-21 17:32:49 +02:00
David Hildenbrand
8064af6b1d s390x/mmu: Remove duplicate check for MMU_DATA_STORE
No need to double-check if we have a write.

Found by Coverity (CID: 1406404).

Fixes: 31b5941906 ("target/s390x: Return exception from mmu_translate_real")
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20191017121922.18840-1-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-10-21 17:30:06 +02:00
Andrew Jones
be39110d4c s390x/cpumodel: Add missing visit_free
Beata Michalska noticed this missing visit_free() while reviewing
arm's implementation of qmp_query_cpu_model_expansion(), which is
modeled off this s390x implementation.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Message-Id: <20191016145434.7007-1-drjones@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-10-21 17:30:06 +02:00
Max Filippov
d5eaec84e5 target/xtensa: regenerate and re-import test_mmuhifi_c3 core
Overlay part of the test_mmuhifi_c3 core has GPL3 copyright headers in
it. Fix that by regenerating test_mmuhifi_c3 core overlay and
re-importing it.

Fixes: d848ea7767 ("target/xtensa: add test_mmuhifi_c3 core")
Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2019-10-18 19:54:27 -07:00
Xiaoyao Li
69edb0f37a target/i386: Add Snowridge-v2 (no MPX) CPU model
Add new version of Snowridge CPU model that removes MPX feature.

MPX support is being phased out by Intel. GCC has dropped it, Linux kernel
and KVM are also going to do that in the future.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Message-Id: <20191012024748.127135-1-xiaoyao.li@intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2019-10-15 18:34:44 -03:00
Eduardo Habkost
af95cafb87 i386: Omit all-zeroes entries from KVM CPUID table
KVM has a 80-entry limit at KVM_SET_CPUID2.  With the
introduction of CPUID[0x1F], it is now possible to hit this limit
with unusual CPU configurations, e.g.:

  $ ./x86_64-softmmu/qemu-system-x86_64 \
    -smp 1,dies=2,maxcpus=2 \
    -cpu EPYC,check=off,enforce=off \
    -machine accel=kvm
  qemu-system-x86_64: kvm_init_vcpu failed: Argument list too long

This happens because QEMU adds a lot of all-zeroes CPUID entries
for unused CPUID leaves.  In the example above, we end up
creating 48 all-zeroes CPUID entries.

KVM already returns all-zeroes when emulating the CPUID
instruction if an entry is missing, so the all-zeroes entries are
redundant.  Skip those entries.  This reduces the CPUID table
size by half while keeping CPUID output unchanged.
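
The filtering criterion amounts to something like this (a sketch of the idea,
not the exact QEMU code):

    #include <stdbool.h>
    #include <linux/kvm.h>

    /* an entry whose four output registers are all zero carries no
     * information, since KVM returns all-zeroes for missing leaves anyway */
    static bool cpuid_entry_is_empty(const struct kvm_cpuid_entry2 *e)
    {
        return e->eax == 0 && e->ebx == 0 && e->ecx == 0 && e->edx == 0;
    }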

Reported-by: Yumei Huang <yuhuang@redhat.com>
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1741508
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20190822225210.32541-1-ehabkost@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2019-10-15 18:34:44 -03:00
Bingsong Si
76ecd7a514 i386: Fix legacy guest with xsave panic on host kvm without update cpuid.
Without KVM commit 412a3c41, CPUID(EAX=0xd,ECX=0).EBX is always equal to 0
even after the guest updates XCR0, which crashes legacy guests (e.g., CentOS 6).
Below is the call trace from the guest.

[    0.000000] kernel BUG at mm/bootmem.c:469!
[    0.000000] invalid opcode: 0000 [#1] SMP
[    0.000000] last sysfs file:
[    0.000000] CPU 0
[    0.000000] Modules linked in:
[    0.000000]
[    0.000000] Pid: 0, comm: swapper Tainted: G           --------------- H  2.6.32-279#2 Red Hat KVM
[    0.000000] RIP: 0010:[<ffffffff81c4edc4>]  [<ffffffff81c4edc4>] alloc_bootmem_core+0x7b/0x29e
[    0.000000] RSP: 0018:ffffffff81a01cd8  EFLAGS: 00010046
[    0.000000] RAX: ffffffff81cb1748 RBX: ffffffff81cb1720 RCX: 0000000001000000
[    0.000000] RDX: 0000000000000040 RSI: 0000000000000000 RDI: ffffffff81cb1720
[    0.000000] RBP: ffffffff81a01d38 R08: 0000000000000000 R09: 0000000000001000
[    0.000000] R10: 02008921da802087 R11: 00000000ffff8800 R12: 0000000000000000
[    0.000000] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000001000000
[    0.000000] FS:  0000000000000000(0000) GS:ffff880002200000(0000) knlGS:0000000000000000
[    0.000000] CS:  0010 DS: 0018 ES: 0018 CR0: 0000000080050033
[    0.000000] CR2: 0000000000000000 CR3: 0000000001a85000 CR4: 00000000001406b0
[    0.000000] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[    0.000000] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[    0.000000] Process swapper (pid: 0, threadinfo ffffffff81a00000, task ffffffff81a8d020)
[    0.000000] Stack:
[    0.000000]  0000000000000002 81a01dd881eaf060 000000007e5fe227 0000000000001001
[    0.000000] <d> 0000000000000040 0000000000000001 0000006cffffffff 0000000001000000
[    0.000000] <d> ffffffff81cb1720 0000000000000000 0000000000000000 0000000000000000
[    0.000000] Call Trace:
[    0.000000]  [<ffffffff81c4f074>] ___alloc_bootmem_nopanic+0x8d/0xca
[    0.000000]  [<ffffffff81c4f0cf>] ___alloc_bootmem+0x11/0x39
[    0.000000]  [<ffffffff81c4f172>] __alloc_bootmem+0xb/0xd
[    0.000000]  [<ffffffff814d42d9>] xsave_cntxt_init+0x249/0x2c0
[    0.000000]  [<ffffffff814e0689>] init_thread_xstate+0x17/0x25
[    0.000000]  [<ffffffff814e0710>] fpu_init+0x79/0xaa
[    0.000000]  [<ffffffff814e27e3>] cpu_init+0x301/0x344
[    0.000000]  [<ffffffff81276395>] ? sort+0x155/0x230
[    0.000000]  [<ffffffff81c30cf2>] trap_init+0x24e/0x25f
[    0.000000]  [<ffffffff81c2bd73>] start_kernel+0x21c/0x430
[    0.000000]  [<ffffffff81c2b33a>] x86_64_start_reservations+0x125/0x129
[    0.000000]  [<ffffffff81c2b438>] x86_64_start_kernel+0xfa/0x109
[    0.000000] Code: 03 48 89 f1 49 c1 e8 0c 48 0f af d0 48 c7 c6 00 a6 61 81 48 c7 c7 00 e5 79 81 31 c0 4c 89 74 24 08 e8 f2 d7 89 ff 4d 85 e4 75 04 <0f> 0b eb fe 48 8b 45 c0 48 83 e8 01 48 85 45
c0 74 04 0f 0b eb

Signed-off-by: Bingsong Si <owen.si@ucloud.cn>
Message-Id: <20190822042901.16858-1-owen.si@ucloud.cn>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2019-10-15 18:34:44 -03:00
Tao Xu
e7694a5eae target/i386: drop the duplicated definition of cpuid AVX512_VBMI macro
Drop the duplicated definition of cpuid AVX512_VBMI macro and rename
it as CPUID_7_0_ECX_AVX512_VBMI. Rename CPUID_7_0_ECX_VBMI2 as
CPUID_7_0_ECX_AVX512_VBMI2.

Acked-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Tao Xu <tao3.xu@intel.com>
Message-Id: <20190926021055.6970-3-tao3.xu@intel.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2019-10-15 18:34:44 -03:00
Tao Xu
f2be0bebb6 target/i386: clean up comments over 80 chars per line
Add some comments and clean up comments over 80 chars per line. There is
also an extra line in the comment for CPUID_8000_0008_EBX_WBNOINVD; remove
the extra blank line and spaces.

Acked-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Tao Xu <tao3.xu@intel.com>
Message-Id: <20190926021055.6970-2-tao3.xu@intel.com>
[ehabkost: rebase to latest git master]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2019-10-15 18:34:33 -03:00
Peter Maydell
6ee1864377 target/arm/arm-semi: Implement SH_EXT_STDOUT_STDERR extension
SH_EXT_STDOUT_STDERR is a v2.0 semihosting extension: the guest
can open ":tt" with a file mode requesting append access in
order to open stderr, in addition to the existing "open for
read for stdin or write for stdout". Implement this and
report it via the :semihosting-features data.
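
A guest-side sketch of what the extension enables (AArch64 shown; the
SYS_OPEN parameter block and the HLT #0xF000 trap follow the semihosting
v2.0 specification, the helper names are made up):

    #include <stdint.h>

    #define SYS_OPEN 0x01u

    /* AArch64 semihosting call: operation number in w0, parameter block in x1 */
    static uint64_t semi_call(uint32_t op, void *param)
    {
        register uint64_t x0 asm("x0") = op;
        register void *x1 asm("x1") = param;
        asm volatile("hlt #0xf000" : "+r"(x0) : "r"(x1) : "memory");
        return x0;
    }

    /* SYS_OPEN block: filename pointer, ISO C open mode, filename length.
     * mode 0 ("r") -> stdin, mode 4 ("w") -> stdout,
     * mode 8 ("a") -> stderr once SH_EXT_STDOUT_STDERR is reported. */
    static int open_tt(uint64_t mode)
    {
        uint64_t block[3] = { (uint64_t)(uintptr_t)":tt", mode, 3 };
        return (int)semi_call(SYS_OPEN, block);
    }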

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190916141544.17540-16-peter.maydell@linaro.org
2019-10-15 18:09:04 +01:00
Peter Maydell
22a43bb9ab target/arm/arm-semi: Implement SH_EXT_EXIT_EXTENDED extension
SH_EXT_EXIT_EXTENDED is a v2.0 semihosting extension: it
indicates that the implementation supports the SYS_EXIT_EXTENDED
function. This function allows both A64 and A32/T32 guests to
exit with a specified exit status, unlike the older SYS_EXIT
function which only allowed this for A64 guests. Implement
this extension.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190916141544.17540-15-peter.maydell@linaro.org
2019-10-15 18:09:03 +01:00
Peter Maydell
c46a653c3a target/arm/arm-semi: Implement support for semihosting feature detection
Version 2.0 of the semihosting specification added support for
allowing a guest to detect whether the implementation supported
particular features. This works by the guest opening a magic
file ":semihosting-features", which contains a fixed set of
data with some magic numbers followed by a sequence of bytes
with feature flags. The file is expected to behave sensibly
for the various semihosting calls which operate on files
(SYS_FLEN, SYS_SEEK, etc).

Implement this as another kind of guest FD using our function
table dispatch mechanism. Initially we report no extended
features, so we have just one feature flag byte which is zero.
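
For reference, the file contents described by the spec look roughly like this
(magic bytes per the semihosting v2.0 document; with this commit only a
single, all-zero flag byte follows):

    #include <stdint.h>

    static const uint8_t semihosting_features[] = {
        0x53, 0x48, 0x46, 0x42,   /* magic: 'S' 'H' 'F' 'B' */
        0x00,                     /* feature byte 0: no extensions reported yet */
    };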

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190916141544.17540-14-peter.maydell@linaro.org
2019-10-15 18:09:03 +01:00
Peter Maydell
1631a7be3a target/arm/arm-semi: Factor out implementation of SYS_FLEN
Factor out the implementation of SYS_FLEN via the new
function tables.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190916141544.17540-13-peter.maydell@linaro.org
2019-10-15 18:09:03 +01:00
Peter Maydell
45e88ffc76 target/arm/arm-semi: Factor out implementation of SYS_SEEK
Factor out the implementation of SYS_SEEK via the new function
tables.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190916141544.17540-12-peter.maydell@linaro.org
2019-10-15 18:09:03 +01:00
Peter Maydell
0213fa452f target/arm/arm-semi: Factor out implementation of SYS_ISTTY
Factor out the implementation of SYS_ISTTY via the new function
tables.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190916141544.17540-11-peter.maydell@linaro.org
2019-10-15 18:09:03 +01:00
Peter Maydell
2c3a09a620 target/arm/arm-semi: Factor out implementation of SYS_READ
Factor out the implementation of SYS_READ via the
new function tables.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190916141544.17540-10-peter.maydell@linaro.org
2019-10-15 18:09:03 +01:00
Peter Maydell
52c8a163c1 target/arm/arm-semi: Factor out implementation of SYS_WRITE
Factor out the implementation of SYS_WRITE via the
new function tables.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190916141544.17540-9-peter.maydell@linaro.org
2019-10-15 18:09:03 +01:00
Peter Maydell
263eb621de target/arm/arm-semi: Factor out implementation of SYS_CLOSE
Currently for the semihosting calls which take a file descriptor
(SYS_CLOSE, SYS_WRITE, SYS_READ, SYS_ISTTY, SYS_SEEK, SYS_FLEN)
we have effectively two implementations, one for real host files
and one for when we indirect via the gdbstub. We want to add a
third one to deal with the magic :semihosting-features file.

Instead of having a three-way if statement in each of these
cases, factor out the implementation of the calls to separate
functions which we dispatch to via function pointers selected
via the GuestFDType for the guest fd.
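
The dispatch pattern looks roughly like this (type and function names here
are illustrative, not the actual declarations):

    #include <unistd.h>

    typedef enum {
        GUEST_FD_HOST,        /* backed by a real host fd */
        GUEST_FD_GDB,         /* routed through the gdbstub */
        GUEST_FD_FEATURES,    /* the :semihosting-features magic file */
    } GuestFDType;

    typedef struct GuestFD {
        GuestFDType type;
        int hostfd;
    } GuestFD;

    typedef struct GuestFDFunctions {
        int (*closefn)(GuestFD *gf);
        /* the real series also has read/write/istty/seek/flen hooks */
    } GuestFDFunctions;

    static int host_closefn(GuestFD *gf) { return close(gf->hostfd); }
    static int gdb_closefn(GuestFD *gf) { (void)gf; return 0; /* via gdbstub */ }
    static int features_closefn(GuestFD *gf) { (void)gf; return 0; }

    static const GuestFDFunctions guestfd_fns[] = {
        [GUEST_FD_HOST]     = { .closefn = host_closefn },
        [GUEST_FD_GDB]      = { .closefn = gdb_closefn },
        [GUEST_FD_FEATURES] = { .closefn = features_closefn },
    };

    /* SYS_CLOSE then reduces to: guestfd_fns[gf->type].closefn(gf); */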

In this commit, we set up the framework for the dispatch,
and convert the SYS_CLOSE call to use it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190916141544.17540-8-peter.maydell@linaro.org
2019-10-15 18:09:03 +01:00
Peter Maydell
939f5b4331 target/arm/arm-semi: Use set_swi_errno() in gdbstub callback functions
When we are routing semihosting operations through the gdbstub, the
work of sorting out the return value and setting errno if necessary
is done by callback functions which are invoked by the gdbstub code.
Clean up some ifdeffery in those functions by having them call
set_swi_errno() to set the semihosting errno.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190916141544.17540-7-peter.maydell@linaro.org
2019-10-15 18:09:03 +01:00
Peter Maydell
6ed6845532 target/arm/arm-semi: Restrict use of TaskState*
The semihosting code needs access to the linux-user-only
TaskState pointer so it can set the semihosting errno per-thread
for linux-user mode. At the moment we do this by having some
ifdefs so that we define a 'ts' local in do_arm_semihosting()
which is either a real TaskState * or just a CPUARMState *,
depending on which mode we're compiling for.

This is awkward if we want to refactor do_arm_semihosting()
into other functions which might need to be passed the TaskState.
Restrict usage of the TaskState local by:
 * making set_swi_errno() always take the CPUARMState pointer
   and (for the linux-user version) get TaskState from that
 * creating a new get_swi_errno() which reads the errno
 * having the two semihosting calls which need the TaskState
   for other purposes (SYS_GET_CMDLINE and SYS_HEAPINFO)
   define a variable with scope restricted to just that code

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190916141544.17540-6-peter.maydell@linaro.org
2019-10-15 18:09:03 +01:00
Peter Maydell
35e9a0a8ce target/arm/arm-semi: Make semihosting code hand out its own file descriptors
Currently the Arm semihosting code returns the guest file descriptors
(handles) which are simply the fd values from the host OS or the
remote gdbstub. Part of the semihosting 2.0 specification requires
that we implement special handling of opening a ":semihosting-features"
filename. Guest fds which result from opening the special file
won't correspond to host fds, so to ensure that we don't end up
with duplicate fds we need to have QEMU code control the allocation
of the fd values we give the guest.

Add in an abstraction layer which lets us allocate new guest FD
values, and translate from a guest FD value back to the host one.
This also fixes an odd hole where a semihosting guest could
use the semihosting API to read, write or close file descriptors
that it had never allocated but which were being used by QEMU itself.
(This isn't a security hole, because enabling semihosting permits
the guest to do arbitrary file access to the whole host filesystem,
and so should only be done if the guest is completely trusted.)

Currently the only kind of guest fd is one which maps to a
host fd, but in a following commit we will add one which maps
to the :semihosting-features magic data.

If the guest is migrated with an open semihosting file descriptor
then subsequent attempts to use the fd will all fail; this is
not a change from the previous situation (where the host fd
being used on the source end would not be re-opened on the
destination end).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190916141544.17540-5-peter.maydell@linaro.org
2019-10-15 18:09:03 +01:00
Peter Maydell
f8ad2306d1 target/arm/arm-semi: Correct comment about gdb syscall races
In arm_gdb_syscall() we have a comment suggesting a race
because the syscall completion callback might not happen
before the gdb_do_syscallv() call returns. The comment is
correct that the callback may not happen but incorrect about
the effects. Correct it and note the important caveat that
callers must never do any work of any kind after return from
arm_gdb_syscall() that depends on its return value.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190916141544.17540-4-peter.maydell@linaro.org
2019-10-15 18:09:03 +01:00
Peter Maydell
f7d38cf2d0 target/arm/arm-semi: Always set some kind of errno for failed calls
If we fail a semihosting call we should always set the
semihosting errno to something; we were failing to do
this for some of the "check inputs for sanity" cases.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190916141544.17540-3-peter.maydell@linaro.org
2019-10-15 18:09:03 +01:00
Peter Maydell
1b003821d4 target/arm/arm-semi: Capture errno in softmmu version of set_swi_errno()
The set_swi_errno() function is called to capture the errno
from a host system call, so that we can return -1 from the
semihosting function and later allow the guest to get a more
specific error code with the SYS_ERRNO function. It comes in
two versions, one for user-only and one for softmmu. We forgot
to capture the errno in the softmmu version; fix the error.

(Semihosting calls directed to gdb are unaffected because
they go through a different code path that captures the
error return from the gdbstub call in arm_semi_cb() or
arm_semi_flen_cb().)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190916141544.17540-2-peter.maydell@linaro.org
2019-10-15 18:09:03 +01:00
Eric Auger
fff9f5558d ARM: KVM: Check KVM_CAP_ARM_IRQ_LINE_LAYOUT_2 for smp_cpus > 256
Host kernels within [4.18, 5.3] report an erroneous KVM_MAX_VCPUS=512
for ARM. The actual capability to instantiate more than 256 vcpus
was fixed in 5.4 with the upgrade of the KVM_IRQ_LINE ABI to support
vcpu ids encoded on 12 bits instead of 8 and a redistributor consuming
a single KVM IO device instead of 2.

So let's check this capability when attempting to use more than 256
vcpus within any ARM KVM-accelerated machine.
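
The check has roughly the following shape (a sketch assuming the usual
kvm_check_extension() helper; not the literal patch):

    /* refuse to start an ARM KVM guest with more than 256 vcpus unless the
     * kernel really supports the extended KVM_IRQ_LINE vcpu id encoding */
    if (smp_cpus > 256 &&
        !kvm_check_extension(kvm_state, KVM_CAP_ARM_IRQ_LINE_LAYOUT_2)) {
        error_report("Using more than 256 vcpus requires a host kernel "
                     "with KVM_CAP_ARM_IRQ_LINE_LAYOUT_2");
        exit(1);
    }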

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Message-id: 20191003154640.22451-4-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-15 18:09:02 +01:00
Eric Auger
f6530926e2 intc/arm_gic: Support IRQ injection for more than 256 vcpus
Host kernels that expose the KVM_CAP_ARM_IRQ_LINE_LAYOUT_2 capability
allow injection of interrupts along with vcpu ids larger than 255.
Let's encode the vcpu id on 12 bits according to the upgraded KVM_IRQ_LINE
ABI when needed.
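
The resulting encoding of the KVM_IRQ_LINE 'irq' field looks like this
(field positions given as an assumption to be checked against the kernel
uapi headers):

    #include <stdint.h>

    /* LAYOUT_2: bits [31:28] hold the top 4 vcpu id bits, [27:24] the irq
     * type, [23:16] the low 8 vcpu id bits, [15:0] the irq number */
    static uint32_t kvm_arm_irq_line_value(unsigned irqtype, unsigned vcpu,
                                           unsigned irq)
    {
        return ((vcpu >> 8) & 0xf) << 28 | (irqtype & 0xf) << 24 |
               (vcpu & 0xff) << 16 | (irq & 0xffff);
    }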

Given that we have two callsites that need to assemble
the value for kvm_set_irq(), a new helper routine, kvm_arm_set_irq(),
is introduced.

Without this patch, QEMU exits with a "kvm_set_irq: Invalid argument"
message.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reported-by: Zenghui Yu <yuzenghui@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Message-id: 20191003154640.22451-3-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-15 18:09:02 +01:00
David Hildenbrand
1f6493be08 s390x/tcg: MVCL: Exit to main loop if requested
MVCL is interruptible and we should check for interrupts and process
them after writing back the variables to the registers. Let's check
for any exit requests and exit to the main loop. Introduce a new helper
function for that: cpu_loop_exit_requested().
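
The shape of the check is roughly (illustrative, not the exact helper code):

    /* after the current chunk has been copied and the registers have been
     * written back, leave the helper so pending interrupts get processed;
     * unwinding re-points psw.addr at the instruction (see (3b) below) */
    if (cpu_loop_exit_requested(cs)) {
        cpu_loop_exit_restore(cs, ra);
    }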

When booting Fedora 30, I can see a handful of these exits and it seems
to work reliably. Also, Richard explained why this works correctly even
when MVCL is called via EXECUTE:

    (1) TB with EXECUTE runs, at address Ae
        - env->psw_addr stored with Ae.
        - helper_ex() runs, memory address Am computed
          from D2a(X2a,B2a) or from psw.addr+RI2.
        - env->ex_value stored with memory value modified by R1a

    (2) TB of executee runs,
        - env->ex_value stored with 0.
        - helper_mvcl() runs, using and updating R1b, R1b+1, R2b, R2b+1.

    (3a) helper_mvcl() completes,
         - TB of executee continues, psw.addr += ilen.
         - Next instruction is the one following EXECUTE.

    (3b) helper_mvcl() exits to main loop,
         - cpu_loop_exit_restore() unwinds psw.addr = Ae.
         - Next instruction is the EXECUTE itself...
         - goto 1.

As the PoP mentions that an interruptible instruction called via EXECUTE
should avoid modifying storage/registers that are used by EXECUTE itself,
it is fine to retrigger EXECUTE.

Cc: Alex Bennée <alex.bennee@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-10 12:27:15 +02:00
Richard Henderson
1cccdef3e3 target/s390x: Remove ILEN_UNWIND
This setting is no longer used.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-19-richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:02 +02:00
Richard Henderson
5c58704b07 target/s390x: Remove ilen argument from trigger_pgm_exception
All but one caller passes ILEN_UNWIND, which is not stored.
For the one use case in s390_cpu_tlb_fill, set int_pgm_ilen
directly, simply to avoid the assert within do_program_interrupt.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-18-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:02 +02:00
Richard Henderson
f5cbdc4397 target/s390x: Remove ilen argument from trigger_access_exception
The single caller passes ILEN_UNWIND; pass that along to
trigger_pgm_exception directly.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-17-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:02 +02:00
Richard Henderson
20e1372b7c target/s390x: Remove ILEN_AUTO
This setting is no longer used.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-16-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:02 +02:00
Richard Henderson
2550953b20 target/s390x: Rely on unwinding in s390_cpu_virt_mem_rw
For TCG, we will always call s390_cpu_virt_mem_handle_exc,
which will go through the unwinder to set ILEN.  For KVM,
we do not go through do_program_interrupt, so this argument
is unused.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-15-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:02 +02:00
Richard Henderson
9e1dae315f target/s390x: Rely on unwinding in s390_cpu_tlb_fill
We currently set ilen to AUTO, then overwrite that during
unwinding, then overwrite that for the code access case.

This can be simplified to setting ilen to our arbitrary
value for the (undefined) code access case, then rely on
unwinding to overwrite that with the correct value for
the data access case.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-14-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
Richard Henderson
9accc852d8 target/s390x: Simplify helper_lra
We currently call trigger_pgm_exception to set cs->exception_index
and env->int_pgm_code and then read the values back and then
reset cs->exception_index so that the exception is not delivered.

Instead, use the exception type that we already have directly
without ever triggering an exception that must be suppressed.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-13-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
Richard Henderson
42007b1982 target/s390x: Remove fail variable from s390_cpu_tlb_fill
Now that excp always contains a real exception number, we can
use that instead of a separate fail variable.  This allows a
redundant test to be removed.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-12-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
Richard Henderson
a79d225335 target/s390x: Return exception from translate_pages
Do not raise the exception directly within translate_pages,
but pass it back so that caller may do so.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-11-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
Richard Henderson
ce7ac79d28 target/s390x: Return exception from mmu_translate
Do not raise the exception directly within mmu_translate,
but pass it back so that caller may do so.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-10-richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
Richard Henderson
c7363b28ff target/s390x: Remove exc argument to mmu_translate_asce
Now that mmu_translate_asce returns the exception instead of
raising it, the argument is unused.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-9-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
Richard Henderson
31b5941906 target/s390x: Return exception from mmu_translate_real
Do not raise the exception directly within mmu_translate_real,
but pass it back so that caller may do so.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-8-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
Richard Henderson
1ab3302886 target/s390x: Handle tec in s390_cpu_tlb_fill
As a step toward moving all exception handling out of mmu_translate,
copy handling of the LowCore tec value from trigger_access_exception
into s390_cpu_tlb_fill.  So far this new plumbing isn't used.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-7-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
Richard Henderson
18ab936d00 target/s390x: Push trigger_pgm_exception lower in s390_cpu_tlb_fill
Delay triggering an exception until the end, after we have
determined ultimate success or failure, and also taken into
account whether this is a non-faulting probe.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-6-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
Richard Henderson
1e36aee636 target/s390x: Use tcg_s390_program_interrupt in TCG helpers
Replace all uses of s390_program_interrupt within files
that are marked CONFIG_TCG.  These are necessarily tcg-only.

This lets each of these users benefit from the QEMU_NORETURN
attribute on tcg_s390_program_interrupt.

Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-5-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
Richard Henderson
77b703f84f target/s390x: Remove ilen parameter from s390_program_interrupt
This is no longer used, and many of the existing uses -- particularly
within hw/s390x -- seem questionable.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-4-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
Richard Henderson
3e20185892 target/s390x: Remove ilen parameter from tcg_s390_program_interrupt
Since we begin the operation with an unwind, we have the proper
value of ilen immediately available.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191001171614.8405-3-richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
Richard Henderson
c87ff4d108 target/s390x: Add ilen to unwind data
Use ILEN_UNWIND to signal that cpu_restore_state will in fact have been
called by the time we arrive in do_program_interrupt.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20191001171614.8405-2-richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
David Hildenbrand
b580b6ee05 s390x/cpumodel: Add new TCG features to QEMU cpu model
We now implement a bunch of new facilities we can properly indicate.

ESOP-1/ESOP-2 handling is discussed in the PoP Chapter 3-15
("Suppression on Protection"). The "Basic suppression-on-protection (SOP)
facility" is a core part of z/Architecture without a facility
indication. ESOP-2 is indicated by ESOP-1 + Side-effect facility
("ESOP-2"). Besides ESOP-2, the side-effect facility is only relevant for
the guarded-storage facility (which we don't implement).

S390_ESOP:
- We indicate DAT exceptions by setting bit 61 of the TEID (TEC) to 1 and
  bit 60 to zero. We don't trigger ALCP exceptions yet. Also, we set
  bits 0-51 and bits 62/63 to the right values.
S390_ACCESS_EXCEPTION_FS_INDICATION:
- The TEID (TEC) properly indicates in bit 52/53 on any access if it was
  a fetch or a store
S390_SIDE_EFFECT_ACCESS_ESOP2:
- We have no side-effect accesses (esp., we don't implement the
  guarded-storage facility), so we correctly set bit 64 of the TEID (TEC) to
  0 (no side-effect).
- ESOP2: We properly set bit 56, 60, 61 in the TEID (TEC) to indicate the
  type of protection. We don't trigger KCP/ALCP exceptions yet.
S390_INSTRUCTION_EXEC_PROT:
- The MMU properly detects and indicates the exception on instruction fetches
- Protected TLB entries will never get PAGE_EXEC set.

There is no need to fake the absence of any of the facilities - without
the facilities, some bits of the TEID (TEC) are simply unpredictable.

As IEP was added with z14 and we currently implement a z13, add it to
the MAX model instead.

Acked-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
David Hildenbrand
faa40177bb s390x/cpumodel: Prepare for changes of QEMU model
Setup the 4.1 compatibility model so we can add new features to the
LATEST model.

Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
David Hildenbrand
3a06f98192 s390x/mmu: Implement Instruction-Execution-Protection Facility
IEP support in the mmu is fairly easy. Set the right permissions for TLB
entries and properly report an exception.

Make sure to handle EDAT-2 by setting bit 56/60/61 of the TEID (TEC) to
the right values.

Let's keep s390_cpu_get_phys_page_debug() working even if IEP is
active. Switch MMU_DATA_LOAD - this has no other effects any more as the
ASC to be used is now fully selected outside of mmu_translate().

Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
David Hildenbrand
3dc29061f3 s390x/mmu: Implement ESOP-2 and access-exception-fetch/store-indication facility
We already implement ESOP-1. For ESOP-2, we only have to indicate all
protection exceptions properly. Due to EDAT-1, we already indicate DAT
exceptions properly. We don't trigger KCP/ALCP/IEP exceptions yet.

So all we have to do is set the TEID (TEC) to the right values
(bit 56, 60, 61) in case of LAP.

We don't have any side-effects (e.g., no guarded-storage facility),
therefore, bit 64 of the TEID (TEC) is always 0.

We always have to indicate whether it is a fetch or a store for all access
exceptions. This is only missing for LAP exceptions.

Acked-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
David Hildenbrand
90790898a1 s390x/mmu: Add EDAT2 translation support
This only adds basic support to the DAT translation, but no EDAT2 support
for TCG. E.g., the gdbstub under kvm uses this function, too, to
translate virtual addresses.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
David Hildenbrand
a4e95b41a1 s390x/mmu: Convert to non-recursive page table walk
A non-recursive implementation allows us to make better use of the
branch predictor, avoids function calls, and makes it easier to implement
new features for only a subset of region table levels.

We can now directly compare our implementation to the KVM gaccess
implementation in arch/s390/kvm/gaccess.c:guest_translate().

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:49:01 +02:00
David Hildenbrand
3fd0e85f3f s390x/mmu: DAT table definition overhaul
Let's use consistent names for the region/section/page table entries and
for the macros to extract relevant parts from virtual address. Make them
match the definitions in the PoP - e.g., how the relevant bits are actually
called.

Introduce defines for all bits declared in the PoP. This will come in
handy in follow-up patches.

Add a note where additional information about s390x and the used
definitions can be found.

Acked-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:48:46 +02:00
David Hildenbrand
ae6d48d43f s390x/mmu: Use TARGET_PAGE_MASK in mmu_translate_pte()
While ASCE_ORIGIN is not wrong, it is certainly confusing. We want a
page frame address.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:33:47 +02:00
David Hildenbrand
2ed0cd7cd7 s390x/mmu: Inject PGM_ADDRESSING on bogus table addresses
Let's document how it works and inject PGM_ADDRESSING if reading of
table entries fails.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:33:47 +02:00
David Hildenbrand
81d7e3bc45 s390x/mmu: Inject DAT exceptions from a single place
Let's return the PGM from the translation functions on error and inject
based on that.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:33:47 +02:00
David Hildenbrand
124ada6810 s390x/mmu: Move DAT protection handling out of mmu_translate_asce()
We'll reuse the ilen and tec definitions in mmu_translate
soon also for all other DAT exceptions we inject. Move it to the caller,
where we can later pair it up with other protection checks, like IEP.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:33:47 +02:00
David Hildenbrand
a780d096e6 s390x/mmu: Drop debug logging from MMU code
Let's get it out of the way to make some further refactorings easier.
Personally, I've never used these debug statements at all. And if I had
to debug issues, I used plain GDB instead (debug prints are just way too
much noise in the MMU). We might want to introduce tracing at some point
instead, so we can enable selected events on demand.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-09 12:33:47 +02:00
Peter Maydell
0f0b43868a ppc patch queue 2019-10-04
Here's the next batch of ppc and spapr patches.  Includes:
   * First part of a large cleanup to irq infrastructure
   * Recreate the full FDT at CAS time, instead of making a difficult
     to follow set of updates.  This will help us move towards
     eliminating CAS reboots altogether
   * No longer provide RTAS blob to SLOF - SLOF can include it just as
     well itself, since guests will generally need to relocate it with
     a call to instantiate-rtas
   * A number of DFP fixes and cleanups from Mark Cave-Ayland
   * Assorted bugfixes
   * Several new small devices for powernv
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEdfRlhq5hpmzETofcbDjKyiDZs5IFAl2XEn0ACgkQbDjKyiDZ
 s5I6bA/7B5sjY/QxuE8axm5KupoAnE8zf205hN8mbYASwtDfFwgaeNreVaOSJUpr
 fgcx/g9G3rAryGZv3O6i02+wcRgNw1DnJ3ynCthIrExZEcfbTYJiS4s9apwPEQy8
 HFmBNdPDqrhFI0aFvXEUauiOp1aapPUUklm34eFscs94lJXxphRUEfa3XT5uEhUh
 xrIZwYq20A+ih4UHwk3Onyx/cvFpl6BRB2nVEllQFqzwF5eTTfz9t8+JGTebxD/7
 8qqt8ti0KM3wxSDTQnmyMUmpgy+C1iCvNYvv6nWFg+07QuGs48EHlQUUVVni4r9j
 kUrDwKS2eC+8e8gP/xdIXEq3R2DsAMq+wFIswXZ3X6x4DoUV0OAJSHc9iMD4l+pr
 LyWnVpDprc6XhJHWKpuHZ5w9EuBnZFbIXdlZGFno+8UvXtusnbbuwAZzHTrRJRqe
 /AWVpFwGAoOF4KxIOFlPVBI8m4vFad/soVojC0vzIbRqaogOFZAjiL/yD5GwLmMa
 tywOEMBUJ/j2lgudTCyKn5uCa/Ew3DS1TSdenJjyqRi/gZM0IaORIhJhyFYW/eO1
 U7Uh8BnbC+4J11wwvFR5+W789dgM2+EEtAX9uI08VcE/R2ASabZlN4Zwrl0w4cb/
 VRybMT4bgmjzHRpfrqYPxpn8wqPcIw0BCeipSOjY3QU1Q25TEYQ=
 =PXXe
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-4.2-20191004' into staging

ppc patch queue 2019-10-04

Here's the next batch of ppc and spapr patches.  Includes:
  * First part of a large cleanup to irq infrastructure
  * Recreate the full FDT at CAS time, instead of making a difficult
    to follow set of updates.  This will help us move towards
    eliminating CAS reboots altogether
  * No longer provide RTAS blob to SLOF - SLOF can include it just as
    well itself, since guests will generally need to relocate it with
    a call to instantiate-rtas
  * A number of DFP fixes and cleanups from Mark Cave-Ayland
  * Assorted bugfixes
  * Several new small devices for powernv

# gpg: Signature made Fri 04 Oct 2019 10:35:57 BST
# gpg:                using RSA key 75F46586AE61A66CC44E87DC6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>" [full]
# gpg:                 aka "David Gibson (Red Hat) <dgibson@redhat.com>" [full]
# gpg:                 aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>" [full]
# gpg:                 aka "David Gibson (kernel.org) <dwg@kernel.org>" [unknown]
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E  87DC 6C38 CACA 20D9 B392

* remotes/dgibson/tags/ppc-for-4.2-20191004: (53 commits)
  ppc/pnv: Remove the XICSFabric Interface from the POWER9 machine
  spapr: Eliminate SpaprIrq::init hook
  spapr: Add return value to spapr_irq_check()
  spapr: Use less cryptic representation of which irq backends are supported
  xive: Improve irq claim/free path
  spapr, xics, xive: Better use of assert()s on irq claim/free paths
  spapr: Handle freeing of multiple irqs in frontend only
  spapr: Remove unhelpful tracepoints from spapr_irq_free_xics()
  spapr: Eliminate SpaprIrq:get_nodename method
  spapr: Simplify spapr_qirq() handling
  spapr: Fix indexing of XICS irqs
  spapr: Eliminate nr_irqs parameter to SpaprIrq::init
  spapr: Clarify and fix handling of nr_irqs
  spapr: Replace spapr_vio_qirq() helper with spapr_vio_irq_pulse() helper
  spapr: Fold spapr_phb_lsi_qirq() into its single caller
  xics: Create sPAPR specific ICS subtype
  xics: Merge TYPE_ICS_BASE and TYPE_ICS_SIMPLE classes
  xics: Eliminate reset hook
  xics: Rename misleading ics_simple_*() functions
  xics: Eliminate 'reject', 'resend' and 'eoi' class hooks
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-07 13:49:02 +01:00
Peter Maydell
9e5319ca52 * Compilation fix for KVM (Alex)
* SMM fix (Dmitry)
 * VFIO error reporting (Eric)
 * win32 fixes and workarounds (Marc-André)
 * qemu-pr-helper crash bugfix (Maxim)
 * Memory leak fixes (myself)
 * VMX features (myself)
 * Record-replay deadlock (Pavel)
 * i386 CPUID bits (Sebastian)
 * kconfig tweak (Thomas)
 * Valgrind fix (Thomas)
 * Autoconverge test (Yury)
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2.0.22 (GNU/Linux)
 
 iQEcBAABAgAGBQJdl3oMAAoJEL/70l94x66DPPAH/0ibsi1Mg8px8lyMHsZT6uIH
 dwlgC7zElbiKBZc/yJWGUXs/pYdjHv/vdGRXZqunTpKWvDS6Dyfkxyz+9S7A7bzo
 wFcdvLPTukrt0ZGZ7XCZzCK/dNpKivqgPtEgURy7bfOGf6uqlDC9qx5yF1BIutIX
 UXguw/+BndKlCRqSsuJA926HPtg1vuIODT7fkynCY5YCLFR26UrBvPhYFA9X8oKd
 VIRKwcH44rBjmlogcDG94ZasAjPFkujWEcsCIqmf9aH9GqNA+2YdIFyOGQ+WltV5
 eNh3ppQAwqHN5wBfRW68E58mdZznQjzOiiJuui24JuopsazpJXs+KuPUv4e63rU=
 =9iVL
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging

* Compilation fix for KVM (Alex)
* SMM fix (Dmitry)
* VFIO error reporting (Eric)
* win32 fixes and workarounds (Marc-André)
* qemu-pr-helper crash bugfix (Maxim)
* Memory leak fixes (myself)
* VMX features (myself)
* Record-replay deadlock (Pavel)
* i386 CPUID bits (Sebastian)
* kconfig tweak (Thomas)
* Valgrind fix (Thomas)
* Autoconverge test (Yury)

# gpg: Signature made Fri 04 Oct 2019 17:57:48 BST
# gpg:                using RSA key BFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83

* remotes/bonzini/tags/for-upstream: (29 commits)
  target/i386/kvm: Silence warning from Valgrind about uninitialized bytes
  target/i386: work around KVM_GET_MSRS bug for secondary execution controls
  target/i386: add VMX features
  vmxcap: correct the name of the variables
  target/i386: add VMX definitions
  target/i386: expand feature words to 64 bits
  target/i386: introduce generic feature dependency mechanism
  target/i386: handle filtered_features in a new function mark_unavailable_features
  tests/docker: only enable ubsan for test-clang
  win32: work around main-loop busy loop on socket/fd event
  tests: skip serial test on windows
  util: WSAEWOULDBLOCK on connect should map to EINPROGRESS
  Fix wrong behavior of cpu_memory_rw_debug() function in SMM
  memory: allow memory_region_register_iommu_notifier() to fail
  vfio: Turn the container error into an Error handle
  i386: Add CPUID bit for CLZERO and XSAVEERPTR
  docker: test-debug: disable LeakSanitizer
  lm32: do not leak memory on object_new/object_unref
  cris: do not leak struct cris_disasm_data
  mips: fix memory leaks in board initialization
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-04 18:32:34 +01:00
Thomas Huth
a1834d975f target/i386/kvm: Silence warning from Valgrind about uninitialized bytes
When I run QEMU with KVM under Valgrind, I currently get this warning:

 Syscall param ioctl(generic) points to uninitialised byte(s)
    at 0x95BA45B: ioctl (in /usr/lib64/libc-2.28.so)
    by 0x429DC3: kvm_ioctl (kvm-all.c:2365)
    by 0x51B249: kvm_arch_get_supported_msr_feature (kvm.c:469)
    by 0x4C2A49: x86_cpu_get_supported_feature_word (cpu.c:3765)
    by 0x4C4116: x86_cpu_expand_features (cpu.c:5065)
    by 0x4C7F8D: x86_cpu_realizefn (cpu.c:5242)
    by 0x5961F3: device_set_realized (qdev.c:835)
    by 0x7038F6: property_set_bool (object.c:2080)
    by 0x707EFE: object_property_set_qobject (qom-qobject.c:26)
    by 0x705814: object_property_set_bool (object.c:1338)
    by 0x498435: pc_new_cpu (pc.c:1549)
    by 0x49C67D: pc_cpus_init (pc.c:1681)
  Address 0x1ffeffee74 is on thread 1's stack
  in frame #2, created by kvm_arch_get_supported_msr_feature (kvm.c:445)

It's harmless, but a little bit annoying, so silence it by properly
initializing the whole structure with zeroes.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-10-04 18:49:20 +02:00
Paolo Bonzini
048c95163b target/i386: work around KVM_GET_MSRS bug for secondary execution controls
Some secondary controls are automatically enabled/disabled based on the CPUID
values that are set for the guest.  However, they are still available at a
global level and therefore should be present when KVM_GET_MSRS is sent to
/dev/kvm.

Unfortunately KVM forgot to include those, so fix that.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-10-04 18:49:20 +02:00
Paolo Bonzini
20a78b02d3 target/i386: add VMX features
Add code to convert the VMX feature words back into MSR values,
allowing the user to enable/disable VMX features as they wish.  The same
infrastructure enables support for limiting VMX features in named
CPU models.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-10-04 18:49:20 +02:00
Paolo Bonzini
704798add8 target/i386: add VMX definitions
These will be used to compile the list of VMX features for named
CPU models, and/or by the code that sets up the VMX MSRs.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-10-04 18:49:19 +02:00
Paolo Bonzini
ede146c2e7 target/i386: expand feature words to 64 bits
VMX requires 64-bit feature words for the IA32_VMX_EPT_VPID_CAP
and IA32_VMX_BASIC MSRs.  (The VMX control MSRs are 64-bit wide but
actually have only 32 bits of information).

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-10-04 18:49:19 +02:00
Paolo Bonzini
99e24dbdaa target/i386: introduce generic feature dependency mechanism
Sometimes a CPU feature does not make sense unless another is
present.  In the case of VMX features, KVM does not even allow
setting the VMX controls to some invalid combinations.

Therefore, this patch adds a generic mechanism that looks for bits
that the user explicitly cleared, and uses them to remove other bits
from the expanded CPU definition.  If these dependent bits were also
explicitly *set* by the user, this will be a warning for "-cpu check"
and an error for "-cpu enforce".  If not, then the dependent bits are
cleared silently, for convenience.
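
Conceptually this is a table mapping "feature bits the user cleared" to
"dependent bits to clear as well"; a simplified sketch (names are
illustrative, not the actual QEMU structures):

    #include <stdint.h>

    typedef struct FeatureDep {
        uint64_t from;   /* if the user explicitly cleared any of these bits... */
        uint64_t to;     /* ...then these dependent bits are cleared as well   */
    } FeatureDep;

    static void apply_feature_deps(const FeatureDep *deps, int ndeps,
                                   uint64_t user_cleared, uint64_t *features)
    {
        for (int i = 0; i < ndeps; i++) {
            if (user_cleared & deps[i].from) {
                *features &= ~deps[i].to;
            }
        }
    }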

With VMX features, this will be used so that for example
"-cpu host,-rdrand" will also hide support for RDRAND exiting.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-10-04 18:49:19 +02:00
Paolo Bonzini
245edd0cfb target/i386: handle filtered_features in a new function mark_unavailable_features
The next patch will add a different reason for filtering features, unrelated
to host feature support.  Extract a new function that takes care of disabling
the features and optionally reporting them.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-10-04 18:49:19 +02:00
Dmitry Poletaev
56f997500a Fix wrong behavior of cpu_memory_rw_debug() function in SMM
There is a problem: you don't have access to the data via the cpu_memory_rw_debug() function when in SMM. Because of that you can't, for example, remotely debug a program running in SMM mode.
The attrs-aware version of get_phys_page_debug() should likely be used to get the correct asidx at the end, so the access is handled properly.
Here is the patch to fix it.

Signed-off-by: Dmitry Poletaev <poletaev@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-10-04 18:49:18 +02:00
Sebastian Andrzej Siewior
e900135dcf i386: Add CPUID bit for CLZERO and XSAVEERPTR
The CPUID bits CLZERO and XSAVEERPTR are available on AMD's Zen platform
and can be passed to the guest.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-10-04 18:49:17 +02:00
Mark Cave-Ayland
428115c3a9 target/ppc: use Vsr macros in BCD helpers
This allows us to remove more endian-specific defines from int_helper.c.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-Id: <20190926204453.31837-1-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-10-04 19:08:21 +10:00
Mark Cave-Ayland
f6d4c423a2 target/ppc: remove unnecessary if() around calls to set_dfp{64,128}() in DFP macros
Now that the parameters to both set_dfp64() and set_dfp128() are exactly the
same, there is no need for an explicit if() statement to determine which
function should be called based upon size. Instead we can simply use the
preprocessor to generate the call to set_dfp##size() directly.
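
As a self-contained illustration of the token-pasting idiom (the types and
helpers below are toy stand-ins, not the actual ppc_vsr_t/ppc_fprp_t code):

  #include <stdint.h>

  typedef struct { uint64_t w[2]; } val_t;

  static void set_dfp64(uint64_t *dst, val_t *v)  { dst[0] = v->w[0]; }
  static void set_dfp128(uint64_t *dst, val_t *v) { dst[0] = v->w[0]; dst[1] = v->w[1]; }

  /* With identical parameter lists, the macro can paste the size straight
   * into the call instead of branching on "if (size == 64)". */
  #define STORE_RESULT(size, dst, v) set_dfp##size((dst), (v))

  static void demo(uint64_t *dst64, uint64_t *dst128, val_t *v)
  {
      STORE_RESULT(64, dst64, v);    /* expands to set_dfp64(dst64, v)   */
      STORE_RESULT(128, dst128, v);  /* expands to set_dfp128(dst128, v) */
  }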

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190926185801.11176-8-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-10-04 19:08:21 +10:00
Mark Cave-Ayland
1ea80bf7f4 target/ppc: use existing VsrD() macro to eliminate HI_IDX and LO_IDX from dfp_helper.c
Switch over all accesses to the decimal numbers held in struct PPC_DFP from
using HI_IDX and LO_IDX to using the VsrD() macro instead. Not only does this
allow the compiler to ensure that the various dfp_* functions are being passed
a ppc_vsr_t rather than an arbitrary uint64_t pointer, but also allows the
host endian-specific HI_IDX and LO_IDX to be completely removed from
dfp_helper.c.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190926185801.11176-7-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-10-04 19:08:21 +10:00
Mark Cave-Ayland
64b8574e14 target/ppc: change struct PPC_DFP decimal storage from uint64[2] to ppc_vsr_t
There are several places in dfp_helper.c that access the decimal number
representations in struct PPC_DFP via HI_IDX and LO_IDX defines which are set
at the top of dfp_helper.c according to the host endian.

However we can instead switch to using ppc_vsr_t for decimal numbers and then
make subsequent use of the existing VsrD() macros to access the correct
element regardless of host endian. Note that 64-bit decimals are stored in the
LSB of ppc_vsr_t (equivalent to VsrD(1)).
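
A rough, self-contained illustration of the idea (not QEMU's actual
definitions; the HOST_WORDS_BIGENDIAN switch and field layout are assumed):

  #include <stdint.h>

  typedef union {
      uint64_t u64[2];
  } vsr_t;

  /* VsrD(0) is always the most-significant doubleword and VsrD(1) the
   * least-significant one, regardless of host byte order. */
  #if defined(HOST_WORDS_BIGENDIAN)
  #define VsrD(i) u64[(i)]
  #else
  #define VsrD(i) u64[1 - (i)]
  #endif

  static uint64_t get_least_significant_dword(const vsr_t *v)
  {
      return v->VsrD(1);   /* where 64-bit decimals are stored */
  }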

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190926185801.11176-6-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-10-04 19:08:21 +10:00
Mark Cave-Ayland
474c2e931d target/ppc: introduce dfp_finalize_decimal{64,128}() helper functions
Most of the DFP helper functions call decimal{64,128}FromNumber() just before
returning in order to convert the decNumber stored in dfp.t64 back to a
Decimal{64,128} to write back to the FP registers.

Introduce new dfp_finalize_decimal{64,128}() helper functions which both enable
the parameter list to be reduced considerably, and also help minimise the
changes required in the next patch.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190926185801.11176-5-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-10-04 19:08:21 +10:00
Mark Cave-Ayland
d9acba3130 target/ppc: update {get,set}_dfp{64,128}() helper functions to read/write DFP numbers correctly
Since commit ef96e3ae96 "target/ppc: move FP and VMX registers into aligned vsr
register array" FP registers are no longer stored consecutively in memory and so
the current method of combining FP register pairs into DFP numbers is incorrect.

Firstly update the definition of the dh_*_fprp defines in helper.h to reflect
that FP registers are now stored as part of an array of ppc_vsr_t elements
rather than plain uint64_t elements, and then introduce a new ppc_fprp_t type
which conceptually represents a DFP even-odd register pair to be consumed by the
DFP helper functions.

Finally update the new DFP {get,set}_dfp{64,128}() helper functions to convert
between DFP numbers and DFP even-odd register pairs correctly, making use of the
existing VsrD() macro to access the correct elements regardless of host endian.

Fixes: ef96e3ae96 "target/ppc: move FP and VMX registers into aligned vsr register array"
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190926185801.11176-4-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-10-04 19:08:21 +10:00
Mark Cave-Ayland
33432d7737 target/ppc: introduce set_dfp{64,128}() helper functions
The existing functions (now incorrectly) assume that the MSB and LSB of DFP
numbers are stored as consecutive 64-bit words in memory. Instead of accessing
the DFP numbers directly, introduce set_dfp{64,128}() helper functions to ease
the switch to the correct representation.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190926185801.11176-3-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-10-04 19:08:21 +10:00
Mark Cave-Ayland
6a8fbb9bdb target/ppc: introduce get_dfp{64,128}() helper functions
The existing functions (now incorrectly) assume that the MSB and LSB of DFP
numbers are stored as consecutive 64-bit words in memory. Instead of accessing
the DFP numbers directly, introduce get_dfp{64,128}() helper functions to ease
the switch to the correct representation.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190926185801.11176-2-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-10-04 19:08:20 +10:00
Alexey Kardashevskiy
972bd57689 ppc/kvm: Skip writing DPDES back when in run time state
On POWER8 systems the Directed Privileged Door-bell Exception State
register (DPDES) stores the doorbell pending status, one bit per thread
of a core, set by the "msgsndp" instruction. The register is shared among
threads of the same core, and KVM on POWER9 emulates it in a similar way
(POWER9 does not have DPDES).

DPDES is shared, but QEMU assumes all SPRs are per-thread, so the only safe
way to write DPDES back to a VCPU before running a guest is to do so
while all threads are pulled out of the guest, so that DPDES cannot change.
There is only one situation when this condition is met: incoming migration,
when all threads are stopped. Otherwise any QEMU HMP/QMP command that causes
kvm_arch_put_registers() (for example printing registers or dumping memory)
can clobber DPDES in a race with other vcpu threads.

This changes DPDES handling so it is not written to KVM at runtime.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20190923084110.34643-1-aik@ozlabs.ru>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-10-04 10:25:23 +10:00
Paul A. Clarke
5c94dd3806 ppc: Use FPSCR defines instead of constants
There are FPSCR-related defines in target/ppc/cpu.h which can be used in
place of constants and explicit shifts, which arguably improves the code a
bit in places.

Signed-off-by: Paul A. Clarke <pc@us.ibm.com>
Message-Id: <1568817169-1721-1-git-send-email-pc@us.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-10-04 10:25:23 +10:00
Paul A. Clarke
bc7a45ab88 ppc: Add support for 'mffsce' instruction
ISA 3.0B added a set of Floating-Point Status and Control Register (FPSCR)
instructions: mffsce, mffscdrn, mffscdrni, mffscrn, mffscrni, mffsl.
This patch adds support for 'mffsce' instruction.

'mffsce' is identical to 'mffs', except that it also clears the exception
enable bits in the FPSCR.

On CPUs without support for 'mffsce' (below ISA 3.0), the
instruction will execute identically to 'mffs'.
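
Conceptually, the new behaviour amounts to something like the sketch below
(the mask value and helper shape are illustrative assumptions; the real
masks are the FP_* defines in target/ppc/cpu.h):

  #include <stdint.h>

  /* Illustrative mask covering the FPSCR enable bits VE/OE/UE/ZE/XE. */
  #define FP_ENABLES_SKETCH 0xF8u

  /* mffsce: return the previous FPSCR (as mffs does), then clear the
   * exception enable bits. */
  static uint64_t mffsce_sketch(uint64_t *fpscr)
  {
      uint64_t old = *fpscr;
      *fpscr &= ~(uint64_t)FP_ENABLES_SKETCH;
      return old;
  }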

Signed-off-by: Paul A. Clarke <pc@us.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <1568817082-1384-1-git-send-email-pc@us.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-10-04 10:25:23 +10:00
Paul A. Clarke
a2735cf483 ppc: Add support for 'mffscrn','mffscrni' instructions
ISA 3.0B added a set of Floating-Point Status and Control Register (FPSCR)
instructions: mffsce, mffscdrn, mffscdrni, mffscrn, mffscrni, mffsl.
This patch adds support for 'mffscrn' and 'mffscrni' instructions.

'mffscrn' and 'mffscrni' are similar to 'mffsl', except they do not return
the status bits (FI, FR, FPRF) and they also set the rounding mode in the
FPSCR.

On CPUs without support for 'mffscrn'/'mffscrni' (below ISA 3.0), the
instructions will execute identically to 'mffs'.

Signed-off-by: Paul A. Clarke <pc@us.ibm.com>
Message-Id: <1568817081-1345-1-git-send-email-pc@us.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-10-04 10:25:23 +10:00
Aleksandar Markovic
0a1bb9127b target/mips: msa: Move helpers for <AND|NOR|OR|XOR>.V
Cosmetic reorganization.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1569415572-19635-21-git-send-email-aleksandar.markovic@rt-rk.com>
2019-10-01 16:58:45 +02:00
Aleksandar Markovic
26f0e079a0 target/mips: msa: Simplify and move helper for MOVE.V
Achieves clearer code and slightly better performance.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1569415572-19635-20-git-send-email-aleksandar.markovic@rt-rk.com>
2019-10-01 16:58:45 +02:00
Aleksandar Markovic
a6387ea5de target/mips: msa: Split helpers for MOD_<S|U>.<B|H|W|D>
Achieves clearer code and slightly better performance.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1569415572-19635-19-git-send-email-aleksandar.markovic@rt-rk.com>
2019-10-01 16:58:45 +02:00
Aleksandar Markovic
64a0257f1f target/mips: msa: Split helpers for DIV_<S|U>.<B|H|W|D>
Achieves clearer code and slightly better performance.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1569415572-19635-18-git-send-email-aleksandar.markovic@rt-rk.com>
2019-10-01 16:58:45 +02:00
Aleksandar Markovic
1165669982 target/mips: msa: Split helpers for CLT_<S|U>.<B|H|W|D>
Achieves clearer code and slightly better performance.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1569415572-19635-17-git-send-email-aleksandar.markovic@rt-rk.com>
2019-10-01 16:58:45 +02:00
Aleksandar Markovic
0501bb1a66 target/mips: msa: Split helpers for CLE_<S|U>.<B|H|W|D>
Achieves clearer code and slightly better performance.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1569415572-19635-16-git-send-email-aleksandar.markovic@rt-rk.com>
2019-10-01 16:58:45 +02:00
Aleksandar Markovic
ade7e788e1 target/mips: msa: Split helpers for CEQ.<B|H|W|D>
Achieves clearer code and slightly better performance.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1569415572-19635-15-git-send-email-aleksandar.markovic@rt-rk.com>
2019-10-01 16:58:45 +02:00
Aleksandar Markovic
755107e226 target/mips: msa: Split helpers for AVER_<S|U>.<B|H|W|D>
Achieves clearer code and slightly better performance.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1569415572-19635-14-git-send-email-aleksandar.markovic@rt-rk.com>
2019-10-01 16:58:44 +02:00
Aleksandar Markovic
7672edc4c6 target/mips: msa: Split helpers for AVE_<S|U>.<B|H|W|D>
Achieves clearer code and slightly better performance.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1569415572-19635-13-git-send-email-aleksandar.markovic@rt-rk.com>
2019-10-01 16:58:44 +02:00
Aleksandar Markovic
a44d6d14a1 target/mips: msa: Split helpers for B<CLR|NEG|SEL>.<B|H|W|D>
Achieves clearer code and slightly better performance.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1569415572-19635-12-git-send-email-aleksandar.markovic@rt-rk.com>
2019-10-01 16:58:44 +02:00
Aleksandar Markovic
c1ed3038e7 target/mips: msa: Unroll loops and demacro <BMNZ|BMZ|BSEL>.V
Achieves clearer code and slightly better performance.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1569415572-19635-11-git-send-email-aleksandar.markovic@rt-rk.com>
2019-10-01 16:58:44 +02:00
Aleksandar Markovic
2e3eddb084 target/mips: msa: Split helpers for BINS<L|R>.<B|H|W|D>
Achieves clearer code and slightly better performance.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1569415572-19635-10-git-send-email-aleksandar.markovic@rt-rk.com>
2019-10-01 16:58:44 +02:00
Aleksandar Markovic
4c5daf386f target/mips: msa: Split helpers for PCNT.<B|H|W|D>
Achieves clearer code and slightly better performance.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1569415572-19635-9-git-send-email-aleksandar.markovic@rt-rk.com>
2019-10-01 16:58:44 +02:00
Aleksandar Markovic
81c4b05995 target/mips: msa: Split helpers for <NLOC|NLZC>.<B|H|W|D>
Achieves clearer code and slightly better performance.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1569415572-19635-8-git-send-email-aleksandar.markovic@rt-rk.com>
2019-10-01 16:58:44 +02:00
Aleksandar Markovic
05aa7e934b target/mips: Clean up translate.c
Mostly fix errors and warnings reported by 'checkpatch.pl -f'.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <1569331602-2586-7-git-send-email-aleksandar.markovic@rt-rk.com>
2019-10-01 16:58:44 +02:00
Aleksandar Markovic
f823213c22 target/mips: Clean up mips-defs.h
Mostly fix errors and warnings reported by 'checkpatch.pl -f'.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <1569331602-2586-5-git-send-email-aleksandar.markovic@rt-rk.com>
2019-10-01 16:41:03 +02:00
Aleksandar Markovic
f6d147bbe3 target/mips: Clean up kvm_mips.h
Mostly fix errors and warnings reported by 'checkpatch.pl -f'.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <1569331602-2586-4-git-send-email-aleksandar.markovic@rt-rk.com>
2019-10-01 16:37:50 +02:00
Aleksandar Markovic
7ba0e95bca target/mips: Clean up internal.h
Mostly fix errors and warnings reported by 'checkpatch.pl -f'.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <1569331602-2586-3-git-send-email-aleksandar.markovic@rt-rk.com>
2019-10-01 16:37:22 +02:00
Christian Borntraeger
c5b9ce518c s390/kvm: split kvm mem slots at 4TB
Instead of splitting at an unaligned address, we can simply split at
4TB.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
2019-09-30 13:51:50 +02:00
Igor Mammedov
fb1fc5a82b s390: do not call memory_region_allocate_system_memory() multiple times
s390 was trying to work around the limited KVM memslot size by abusing
memory_region_allocate_system_memory(), which breaks the API contract
that the function may be called only once.

Besides being an invalid use of the API, the approach also introduced a
migration issue, since the RAM chunks for each KVM_SLOT_MAX_BYTES are
transferred in the migration stream as separate RAMBlocks.

After discussion [1], it was agreed to break migration from older
QEMU for guests with RAM >8TB (the feature is relatively new (since 2.12)
and considered to be not actually used downstream).
Migration should keep working for guests with less than 8TB, and for
more than 8TB between QEMU 4.2 and newer binaries.
If a user tries to migrate a guest larger than 8TB between incompatible
QEMU versions, migration should fail gracefully due to a non-existing
RAMBlock ID or a RAMBlock size mismatch.

Taking the above into account, and that the KVM code is now able to split
a too-big MemorySection into several memslots, partially revert commit
 (bb223055b s390-ccw-virtio: allow for systems larger that 7.999TB)
and use kvm_set_max_memslot_size() to set the KVMSlot size to
KVM_SLOT_MAX_BYTES.

1) [PATCH RFC v2 4/4] s390: do not call  memory_region_allocate_system_memory() multiple times

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20190924144751.24149-5-imammedo@redhat.com>
Acked-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2019-09-30 13:51:50 +02:00
Peter Maydell
786d36ad41 target-arm queue:
* Fix the CBAR register implementation for Cortex-A53,
    Cortex-A57, Cortex-A72
  * Fix direct booting of Linux kernels on emulated CPUs
    which have an AArch32 EL3 (incorrect NSACR settings
    meant they could not access the FPU)
  * semihosting cleanup: do more work at translate time
    and less work at runtime
 -----BEGIN PGP SIGNATURE-----
 
 iQJNBAABCAA3FiEE4aXFk81BneKOgxXPPCUl7RQ2DN4FAl2OHYsZHHBldGVyLm1h
 eWRlbGxAbGluYXJvLm9yZwAKCRA8JSXtFDYM3oHlD/4iD57WzVkf2EagPg61EbqV
 KJU0bloj6lpfhI410zv6RLfSxRhuJKj1voBPl0wh/uWz4kIHBjcYZgRQGZz5+Fem
 XE4j7bLfgXlbYkjl6CFo3oqZJM+iVmMofKVbpj7nEnO6cB9nW2O4Uk88vPTqCRUp
 uip/ZveoQ3WvzyM8ERWiIiGZrvCRPnfTFvWGNEDd+ESx3ACmNbeAHilMURESkXR8
 3iRt83bzL+H7xRpVEmLvUAbjJlf+4dzyftJSwTDquLsu+g4I45BDe1ki7ip9U06B
 EvgNZ0TKchNI2kn6I4R0XAYAdZyKRONWqYTPE3xEtweihLwOKYsKfQViSHkhYxuE
 upqMfsSzpT2ivqMb5myFU8JbG6jZZGTguAZ40MQT073gckgFoFfWjAtzR0fWa/Cy
 VJ79fWIfOXrRsc76UDBeDuJ3CFEliFMSzDJWwglxlp9JX6ckfHH0Vwfmj9NPcuRw
 AeAkI7Xh+emNKftJzNtC+6Ba7jMhMLLDBoe1r3NQYK1BFg/JRtkGCja3UAswotXH
 hEYMicbMnkhOGEKxjKL0jbl33XKKAVq3pens2tT0QIz3Xqzh9iIcceCnv4MsddK9
 MPU8yfQYcj6eNxVBLofhuRGURMK4BpQzj2Rxg03G3dRpFuNEwneUrx64q8lEv4Y5
 EWSFxOoBPEpooiMCoboZ/A==
 =/0m2
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20190927' into staging

target-arm queue:
 * Fix the CBAR register implementation for Cortex-A53,
   Cortex-A57, Cortex-A72
 * Fix direct booting of Linux kernels on emulated CPUs
   which have an AArch32 EL3 (incorrect NSACR settings
   meant they could not access the FPU)
 * semihosting cleanup: do more work at translate time
   and less work at runtime

# gpg: Signature made Fri 27 Sep 2019 15:32:43 BST
# gpg:                using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg:                issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [ultimate]
# gpg:                 aka "Peter Maydell <pmaydell@gmail.com>" [ultimate]
# gpg:                 aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [ultimate]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83  15CF 3C25 25ED 1436 0CDE

* remotes/pmaydell/tags/pull-target-arm-20190927:
  hw/arm/boot: Use the IEC binary prefix definitions
  hw/arm/boot.c: Set NSACR.{CP11,CP10} for NS kernel boots
  tests/tcg: add linux-user semihosting smoke test for ARM
  target/arm: remove run-time semihosting checks for linux-user
  target/arm: remove run time semihosting checks
  target/arm: handle A-profile semihosting at translate time
  target/arm: handle M-profile semihosting at translate time
  tests/tcg: clean-up some comments after the de-tangling
  target/arm: fix CBAR register for AArch64 CPUs

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

# Conflicts:
#	tests/tcg/arm/Makefile.target
2019-09-30 11:02:22 +01:00
Alex Bennée
ed6e6ba9c4 target/arm: remove run time semihosting checks
Now we do all our checking and use a common EXCP_SEMIHOST for
semihosting operations we can make helper code a lot simpler.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190913151845.12582-5-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-27 11:41:31 +01:00
Alex Bennée
5651697f1f target/arm: handle A-profile semihosting at translate time
As for the other semihosting calls we can resolve this at translate
time.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190913151845.12582-4-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-27 11:41:30 +01:00
Alex Bennée
376214e4f4 target/arm: handle M-profile semihosting at translate time
We do this for other semihosting calls so we might as well do it for
M-profile as well.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190913151845.12582-3-alex.bennee@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-27 11:41:29 +01:00
Luc Michel
d56974afe9 target/arm: fix CBAR register for AArch64 CPUs
For AArch64 CPUs with a CBAR register, we have two views for it:
  - in AArch64 state, the CBAR_EL1 register (S3_1_C15_C3_0), returns the
    full 64 bits CBAR value
  - in AArch32 state, the CBAR register (cp15, opc1=1, CRn=15, CRm=3, opc2=0)
    returns a 32 bits view such that:
      CBAR = CBAR_EL1[31:18] 0..0 CBAR_EL1[43:32]

This commit fixes the current implementation where:
  - CBAR_EL1 was returning the 32 bits view instead of the full 64 bits
    value,
  - CBAR was returning a truncated 32 bits version of the full 64 bits
    one, instead of the 32 bits view
  - CBAR was declared as cp15, opc1=4, CRn=15, CRm=0, opc2=0, which is
    the CBAR register found in the ARMv7 Cortex-Ax CPUs, but not in
    ARMv8 CPUs.
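
For reference, the 32-bit view described above can be derived from the 64-bit
CBAR_EL1 value roughly as follows (illustrative helper, not the QEMU code):

  #include <stdint.h>

  /* AArch32 view: CBAR = CBAR_EL1[31:18] : 0..0 : CBAR_EL1[43:32] */
  static uint32_t cbar_aarch32_view(uint64_t cbar_el1)
  {
      uint32_t top    = (uint32_t)(cbar_el1 & 0xFFFC0000u);     /* bits [31:18] kept in place */
      uint32_t bottom = (uint32_t)((cbar_el1 >> 32) & 0xFFFu);  /* bits [43:32] moved to [11:0] */
      return top | bottom;
  }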

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20190912110103.1417887-1-luc.michel@greensocs.com
[PMM: Added a comment about the two different kinds of CBAR]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-27 11:41:28 +01:00
Philippe Mathieu-Daudé
754f287176 target/i386: Fix broken build with WHPX enabled
The WHPX build has been broken since commit 12e9493df9, which removed the
"hw/boards.h" include where MachineState is declared:

  $ ./configure \
     --enable-hax --enable-whpx

  $ make x86_64-softmmu/all
  [...]
    CC      x86_64-softmmu/target/i386/whpx-all.o
  target/i386/whpx-all.c: In function 'whpx_accel_init':
  target/i386/whpx-all.c:1378:25: error: dereferencing pointer to
  incomplete type 'MachineState' {aka 'struct MachineState'}
       whpx->mem_quota = ms->ram_size;
                           ^~
  make[1]: *** [rules.mak:69: target/i386/whpx-all.o] Error 1
    CC      x86_64-softmmu/trace/generated-helpers.o
  make[1]: Target 'all' not remade because of errors.
  make: *** [Makefile:471: x86_64-softmmu/all] Error 2

Restore this header, partially reverting commit 12e9493df9.

Fixes: 12e9493df9
Reported-by: Ilias Maratos <i.maratos@gmail.com>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20190920113329.16787-2-philmd@redhat.com>
2019-09-26 19:00:53 +01:00
Richard Henderson
11bfdbdfc2 target/alpha: Tidy helper_fp_exc_raise_s
Remove a redundant masking of ignore.  Once that's gone it is
obvious that the system-mode inner test is redundant with the
outer test.  Move the fpcr_exc_enable masking up and tidy.

No functional change.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190921043256.4575-8-richard.henderson@linaro.org>
2019-09-26 19:00:53 +01:00
Richard Henderson
8009307031 target/alpha: Mask IOV exception with INV for user-only
The kernel masks the integer overflow exception with the
software invalid exception mask.  Include IOV in the set
of exception bits masked by fpcr_exc_enable.

Fixes the new float_convs test.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20190921043256.4575-7-richard.henderson@linaro.org>
2019-09-26 19:00:53 +01:00
Richard Henderson
a8938e5fdb target/alpha: Write to fpcr_flush_to_zero once
Tidy the computation of the value; no functional change.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20190921043256.4575-6-richard.henderson@linaro.org>
2019-09-26 19:00:53 +01:00
Richard Henderson
8cd9990526 target/alpha: Handle SWCR_MAP_DMZ earlier
Since we're converting the swcr to fpcr format for exceptions,
it's trivial to add FPCR_DNZ to the set of fpcr bits overridden.
No functional change.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20190921043256.4575-5-richard.henderson@linaro.org>
2019-09-26 19:00:53 +01:00
Richard Henderson
106e1319cc target/alpha: Fix SWCR_TRAP_ENABLE_MASK
The CONFIG_USER_ONLY adjustment blindly mashed the swcr
exception enable bits into the fpcr exception disable bits.

However, fpcr_exc_enable has already converted the exception
disable bits into the exception status bits in order to make
it easier to mask status bits at runtime.

Instead, merge the swcr enable bits with the fpcr before we
convert to status bits.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190921043256.4575-4-richard.henderson@linaro.org>
2019-09-26 19:00:53 +01:00
Richard Henderson
712e7c6112 target/alpha: Fix SWCR_MAP_UMZ
We were setting the wrong bit.  The fp_status.flush_to_zero
setting is overwritten by either the constant 1 or the value
of fpcr_flush_to_zero depending on bits within an fp insn.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20190921043256.4575-3-richard.henderson@linaro.org>
2019-09-26 19:00:53 +01:00
Richard Henderson
ea937dedec target/alpha: Use array for FPCR_DYN conversion
This is a bit more straight-forward than using a switch statement.
No functional change.
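
A hedged sketch of the pattern (the Alpha DYN encodings and bit position used
here are assumptions for illustration, and the rounding-mode constants stand
in for softfloat's float_round_* values):

  /* Replace a switch over the 2-bit FPCR.DYN field with a table lookup. */
  enum { round_to_zero, round_down, round_nearest_even, round_up };

  static const unsigned char dyn_to_rounding[4] = {
      [0] = round_to_zero,          /* chopped        */
      [1] = round_down,             /* minus infinity */
      [2] = round_nearest_even,     /* normal         */
      [3] = round_up,               /* plus infinity  */
  };

  static unsigned fpcr_dyn_rounding(unsigned long long fpcr)
  {
      return dyn_to_rounding[(fpcr >> 58) & 3];   /* DYN field, assumed at FPCR[59:58] */
  }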

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190921043256.4575-2-richard.henderson@linaro.org>
2019-09-26 19:00:53 +01:00
Peter Maydell
2f93a3ecdd Fix a bunch of BUGs in the mem-helpers (including the MVC instruction),
especially, to make them behave correctly on faults.
 -----BEGIN PGP SIGNATURE-----
 
 iQJFBAABCAAvFiEEG9nKrXNcTDpGDfzKTd4Q9wD/g1oFAl2Ie9ERHGRhdmlkQHJl
 ZGhhdC5jb20ACgkQTd4Q9wD/g1qMNQ//an/e3uXES+LINt/L/UeMpTVxrKCjqz/0
 xlzN3eZmCTKTWb+PD0BRA5tsLCynI731gZQZgHSkgnNz3bEMklJELdUNL40N1idn
 9ey1ZDOGOtGKNYzoe6C53V0H7T2i1WIhoCNeuUSVjcnGL43RVYJB+gCOwCWKQhgR
 k7wJ7b5riJtNBFy9cx8egjP+Rw+eJ/9e+lDccVAOF4avvvI0gNMN9Fq8N6q2y4DM
 1stzhwuBP4rsl6SwCyvEdaISwVpcnQWQTp0z46ighrqWV/hJnibVQtDGVSvb3+Dr
 Kn+oHaN1VYwgaAacVAzgW+HMak1BRWIf8OGGfZlihUMubUCOHANkj9eIqVAwF8qy
 5hVJMfCA9HpCSy0Va9zmVZ9/2EVJfLacykULxveXKX9e9AU+awoQJIvnRIzpc5Q0
 zvPK/871icXPx9UR46aOt8i8CBOun3HmsfCPyuOvzliCHuKyVqbBfdqOl7BHp5LL
 aMp/DB51DIfvQhSRDFdRR2f5o9KvYcafG72F7pWnLZ83TXVHhVqza03ArnjOa/Gi
 Yty+LMEVYWzfmFq8GpA/9bJvY7Bd/EfkO3xXnlJjEFRPk/XTnzsO5c3flcK9hgnY
 tSlqTj/4k+oyKiV7WTYzixTAu4tUhcKOEDkQ4j6kwUMJUTkueqDTAe1Ycxkzjqy5
 QfGdmuKpE8U=
 =839a
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/davidhildenbrand/tags/s390x-tcg-2019-09-23' into staging

Fix a bunch of BUGs in the mem-helpers (including the MVC instruction),
especially, to make them behave correctly on faults.

# gpg: Signature made Mon 23 Sep 2019 09:01:21 BST
# gpg:                using RSA key 1BD9CAAD735C4C3A460DFCCA4DDE10F700FF835A
# gpg:                issuer "david@redhat.com"
# gpg: Good signature from "David Hildenbrand <david@redhat.com>" [unknown]
# gpg:                 aka "David Hildenbrand <davidhildenbrand@gmail.com>" [full]
# Primary key fingerprint: 1BD9 CAAD 735C 4C3A 460D  FCCA 4DDE 10F7 00FF 835A

* remotes/davidhildenbrand/tags/s390x-tcg-2019-09-23: (30 commits)
  tests/tcg: target/s390x: Test MVC
  tests/tcg: target/s390x: Test MVO
  s390x/tcg: MVO: Fault-safe handling
  s390x/tcg: MVST: Fault-safe handling
  s390x/tcg: MVZ: Fault-safe handling
  s390x/tcg: MVN: Fault-safe handling
  s390x/tcg: MVCIN: Fault-safe handling
  s390x/tcg: NC: Fault-safe handling
  s390x/tcg: XC: Fault-safe handling
  s390x/tcg: OC: Fault-safe handling
  s390x/tcg: MVCLU: Fault-safe handling
  s390x/tcg: MVC: Fault-safe handling on destructive overlaps
  s390x/tcg: MVCS/MVCP: Use access_memmove()
  s390x/tcg: Fault-safe memmove
  s390x/tcg: Fault-safe memset
  s390x/tcg: Always use MMU_USER_IDX for CONFIG_USER_ONLY
  s390x/tcg: MVST: Fix storing back the addresses to registers
  s390x/tcg: MVST: Check for specification exceptions
  s390x/tcg: MVCS/MVCP: Properly wrap the length
  s390x/tcg: MVCOS: Lengths are 32 bit in 24/31-bit mode
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-23 15:44:52 +01:00
David Hildenbrand
ab89acd0b7 s390x/tcg: MVO: Fault-safe handling
Each operand can have a maximum length of 16. Make sure to prepare all
reads/writes before writing.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23 09:28:29 +02:00
David Hildenbrand
bb36ed88e9 s390x/tcg: MVST: Fault-safe handling
Access at most single pages and document why. Using the access helpers
might over-indicate watchpoints within the same page; I guess we can
live with that.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23 09:28:29 +02:00
David Hildenbrand
6514f42bf8 s390x/tcg: MVZ: Fault-safe handling
We can process a maximum of 256 bytes, crossing two pages.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23 09:28:29 +02:00
David Hildenbrand
ab8bab68bb s390x/tcg: MVN: Fault-safe handling
We can process a maximum of 256 bytes, crossing two pages.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23 09:28:29 +02:00
David Hildenbrand
65a27df927 s390x/tcg: MVCIN: Fault-safe handling
We can process a maximum of 256 bytes, crossing two pages. Calculate the
accessed range upfront - src is accessed right-to-left.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23 09:28:29 +02:00
David Hildenbrand
4d78c68baf s390x/tcg: NC: Fault-safe handling
We can process a maximum of 256 bytes, crossing two pages.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23 09:28:29 +02:00
David Hildenbrand
a8821dd56e s390x/tcg: XC: Fault-safe handling
We can process a maximum of 256 bytes, crossing two pages. While at it,
increment the length once.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23 09:28:29 +02:00
David Hildenbrand
8c4a732076 s390x/tcg: OC: Fault-safe handling
We can process a maximum of 256 bytes, crossing two pages.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23 09:28:29 +02:00
David Hildenbrand
b8e7b2fe1d s390x/tcg: MVCLU: Fault-safe handling
The last remaining bit is padding with two bytes.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23 09:28:29 +02:00
David Hildenbrand
b7809f3692 s390x/tcg: MVC: Fault-safe handling on destructive overlaps
The last remaining bit for MVC is handling destructive overlaps in a
fault-safe way.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23 09:28:29 +02:00
David Hildenbrand
efb1a76ef9 s390x/tcg: MVCS/MVCP: Use access_memmove()
As we are moving between address spaces, we can use access_memmove()
without checking for destructive overlaps (especially of real storage
locations):
    "Each storage operand is processed left to right. The
    storage-operand-consistency rules are the same as
    for MOVE (MVC), except that when the operands
    overlap in real storage, the use of the common real-
    storage locations is not necessarily recognized."

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23 09:28:29 +02:00
David Hildenbrand
b6c636f2cd s390x/tcg: Fault-safe memmove
Replace fast_memmove() variants by access_memmove() variants, that
first try to probe access to all affected pages (maximum is two pages).

Introduce access_get_byte()/access_set_byte(). We might be able to speed
up memmove in special cases even further (do single-byte access, use
memmove() for remaining bytes in page), however, we'll skip that for now.

In MVCOS, simply always call access_memmove_as() and drop the TODO
about LAP. LAP is already handled in the MMU.

Get rid of adj_len_to_page(), which is now unused.
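
A self-contained toy model of the probe-then-copy pattern (the helpers below
are hypothetical stand-ins for QEMU's probing and access machinery):

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdint.h>
  #include <string.h>

  /* Stand-in for probing guest pages: "faults" by returning false. */
  static bool probe_range(size_t ram_size, size_t addr, size_t len)
  {
      return len <= ram_size && addr <= ram_size - len;
  }

  /* Probe both operands up front (at most two pages each), then copy, so a
   * fault can never leave the destination partially written. */
  static bool access_memmove_sketch(uint8_t *ram, size_t ram_size,
                                    size_t dst, size_t src, size_t len)
  {
      if (!probe_range(ram_size, dst, len) || !probe_range(ram_size, src, len)) {
          return false;   /* fault reported before any byte is moved */
      }
      memmove(ram + dst, ram + src, len);
      return true;
  }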

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23 09:28:29 +02:00
David Hildenbrand
70ebd9ce1c s390x/tcg: Fault-safe memset
Replace fast_memset() by access_memset(), that first tries to probe
access to all affected pages (maximum is two). We'll use the same
mechanism for other types of accesses soon.

Only in very rare cases (especially TLB_NOTDIRTY), we'll have to
fallback to ld/st helpers. Try to speed up that case as suggested by
Richard.

We'll rework most involved handlers soon to do all accesses via new
fault-safe helpers, especially MVC.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23 09:28:29 +02:00
David Hildenbrand
817791e839 s390x/tcg: Always use MMU_USER_IDX for CONFIG_USER_ONLY
Although we basically ignore the index all the time for CONFIG_USER_ONLY,
let's simply skip all the checks and always return MMU_USER_IDX in
cpu_mmu_index() and get_mem_index().

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23 09:28:29 +02:00
David Hildenbrand
2bb525e20d s390x/tcg: MVST: Fix storing back the addresses to registers
24 and 31-bit address space handling is wrong when it comes to storing
back the addresses to the register.

While at it, read gprs 0 implicitly.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23 09:28:29 +02:00
David Hildenbrand
087b8193ed s390x/tcg: MVST: Check for specification exceptions
Bit positions 32-55 of general register 0 must be zero.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23 09:28:29 +02:00
David Hildenbrand
373290d8a8 s390x/tcg: MVCS/MVCP: Properly wrap the length
... and don't perform any move in case the length is zero.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23 09:28:29 +02:00
David Hildenbrand
a7627565ae s390x/tcg: MVCOS: Lengths are 32 bit in 24/31-bit mode
Triggered by a review comment from Richard: MVCOS also has a 32-bit
length in 24/31-bit addressing mode. Add a new helper.

Rename wrap_length() to wrap_length31().

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23 09:28:29 +02:00
David Hildenbrand
43df3e71e3 s390x/tcg: MVCS/MVCP: Check for special operation exceptions
Let's perform the documented checks.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23 09:28:29 +02:00
David Hildenbrand
86678418b2 s390x/tcg: MVCLU/MVCLE: Process max 4k bytes at a time
Let's stay within single pages.

... and indicate cc=3 in case there is work remaining. Keep unicode
padding simple.

While reworking, properly wrap the addresses.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23 09:28:29 +02:00
David Hildenbrand
a3910396ba s390x/tcg: MVPG: Properly wrap the addresses
We have to mask off any unused bits. While at it, document what exactly is
missing.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23 09:28:29 +02:00
David Hildenbrand
bf349f1a0d s390x/tcg: MVPG: Check for specification exceptions
Perform the checks documented in the PoP.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23 09:28:29 +02:00
David Hildenbrand
b7dd1f7fd4 s390x/tcg: MVC: Use is_destructive_overlap()
Let's use the new helper, that also detects destructive overlaps when
wrapping.

We'll make the remaining code (e.g., fast_memmove()) aware of wrapping
later.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23 09:28:29 +02:00
David Hildenbrand
d573ffde0c s390x/tcg: MVC: Increment the length once
Let's increment the length once.

While at it, cleanup the comment. The memset() example is given as a
programming note in the PoP, so drop the description.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23 09:28:29 +02:00
David Hildenbrand
f1c2e27cb5 s390x/tcg: MVCL: Process max 4k bytes at a time
Process max 4k bytes at a time, writing back registers between the
accesses. The instruction is interruptible.
    "For operands longer than 2K bytes, access exceptions are not
    recognized for locations more than 2K bytes beyond the current location
    being processed."
Note that on z/Architecture, 2k vs. 4k accesses cannot be differentiated as
long as pages are not crossed. This seems to be a leftover from ESA/390.
Simply stay within single pages.

MVCL handling is quite different than MVCLE/MVCLU handling, so split up
the handlers.

Defer interrupt handling, as that will require more thought, add a TODO
for that.
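
The chunking idea, as a standalone sketch (the copy callback and register
write-back are placeholders for the real helper's state handling):

  #include <stdint.h>

  #define PAGE_SIZE 4096u

  /* Copy in page-bounded chunks, updating the architectural address/length
   * state after every chunk so the instruction can be interrupted and
   * restarted between chunks. */
  static void mvcl_chunks_sketch(uint64_t *dst, uint64_t *src, uint64_t *len,
                                 void (*copy)(uint64_t dst, uint64_t src,
                                              uint32_t n))
  {
      while (*len) {
          uint32_t n = PAGE_SIZE - (uint32_t)(*dst & (PAGE_SIZE - 1));
          uint32_t m = PAGE_SIZE - (uint32_t)(*src & (PAGE_SIZE - 1));
          if (m < n) {
              n = m;
          }
          if (*len < n) {
              n = (uint32_t)*len;
          }
          copy(*dst, *src, n);
          *dst += n;   /* write the state back before the next chunk */
          *src += n;
          *len -= n;
      }
  }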

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23 09:28:29 +02:00
David Hildenbrand
fbc17598d5 s390x/tcg: MVCL: Detect destructive overlaps
We'll have to zero-out unused bit positions, so make sure to write the
addresses back.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23 09:28:29 +02:00
David Hildenbrand
d292671ade s390x/tcg: MVCL: Zero out unused bits of address
We have to zero out unused bits in 24 and 31-bit addressing mode.
Provide a new helper.
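
An illustrative helper along those lines (not QEMU's actual code; the mode
flags are simplified stand-ins for the PSW addressing-mode bits):

  #include <stdbool.h>
  #include <stdint.h>

  static uint64_t wrap_address_sketch(uint64_t addr, bool am64, bool am31)
  {
      if (am64) {
          return addr;                  /* 64-bit mode: nothing to mask */
      }
      if (am31) {
          return addr & 0x7fffffffull;  /* 31-bit mode */
      }
      return addr & 0x00ffffffull;      /* 24-bit mode */
  }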

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23 09:28:29 +02:00
David Hildenbrand
bed04a2b9c s390x/tcg: Reset exception_index to -1 instead of 0
We use the marker "-1" for "no exception". s390_cpu_do_interrupt() might
get confused by that.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-09-23 09:28:29 +02:00
Christian Borntraeger
7505deca0b s390x/cpumodel: Add the z15 name to the description of gen15a
We now know that gen15a is called z15.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2019-09-23 09:15:28 +02:00
Thomas Huth
7d69e8bc3b s390x/kvm: Officially require at least kernel 3.15
Since QEMU v2.10, KVM acceleration no longer works on older kernels,
because the code accidentally requires the KVM_CAP_DEVICE_CTRL
capability now - it should have been optional instead.
Instead of fixing the bug, we asked in the ChangeLog of QEMU 2.11 - 3.0
that people speak up if they still need support for running QEMU with
KVM on older kernels, but it seems nobody really complained.
Thus let's make this official now and turn it into a proper error
message, telling users to use at least kernel 3.15.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20190913091443.27565-1-thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2019-09-23 09:15:03 +02:00
Peter Maydell
f5c7af6295 Trivial patches 20190919
-----BEGIN PGP SIGNATURE-----
 
 iQJGBAABCAAwFiEEzS913cjjpNwuT1Fz8ww4vT8vvjwFAl2Dh78SHGxhdXJlbnRA
 dml2aWVyLmV1AAoJEPMMOL0/L748znUP/3gcyTubxxZgk8P4ufRn/9QY7O/Ohsj+
 /8QikSdYFkPPGUcGeuQDHmN4KZbUnoEZuI4QxnlLNc1aWg69+0+JygMsSE2OTRVV
 BoPWnxpFy5PPDfpYoR8DfIgmKocdpBSTg50u57W3NciJRzIYSol5AOpy4ANOiDp4
 V8jXkWtTIXCHytKyWppxpzZjmJADQRgbFwS58+CvWoLRqO7xqWnJKfdvoQzDsGlj
 6I0CbxTwDdBJJrYG8gUhyj/Gz4VEfZWskn4Jt+/Uxw2Ll7aOLPgF0mf4g6h1EZ/V
 j5wVHXntR6cYdQ1jpi3unUQ3nkaGOssLbj0rA8//KjLU+VxlYdJFQj7BSSBbYLWk
 BGKw/rhFZw1kXk32kaZPtd0VgCbQu7e81/7k2/Yc2u6vg8BRIJCjOK4H8V58QOQ6
 ISu1ngrRAj9Tzoqv79ZQw+jZFD7nrEC+Vl9KcZQq245Ju5WrFxw8D4Mk3EnT5Pn8
 3AAZswquOhC7MELZAbWUc2UEF72rqvuiH8OoqXcMmCOuFLZwNmuj2lCOCbiFBtjA
 qzvqTunKIt6OcLtF0NPoJ2iSHoMhqnZ7dJEKq88T2lGPXdUfpMqXcRrj6milj6Ow
 4d/7KNwAZyCYJfwbNz71bpn2EpxPPpA835iMKyAzrNOWYIQ7JBlQMvHeKaqeNps/
 GlXSETiFMl6G
 =WZXd
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/vivier2/tags/trivial-branch-pull-request' into staging

Trivial patches 20190919

# gpg: Signature made Thu 19 Sep 2019 14:50:55 BST
# gpg:                using RSA key CD2F75DDC8E3A4DC2E4F5173F30C38BD3F2FBE3C
# gpg:                issuer "laurent@vivier.eu"
# gpg: Good signature from "Laurent Vivier <lvivier@redhat.com>" [full]
# gpg:                 aka "Laurent Vivier <laurent@vivier.eu>" [full]
# gpg:                 aka "Laurent Vivier (Red Hat) <lvivier@redhat.com>" [full]
# Primary key fingerprint: CD2F 75DD C8E3 A4DC 2E4F  5173 F30C 38BD 3F2F BE3C

* remotes/vivier2/tags/trivial-branch-pull-request:
  configure: Add xkbcommon configure options
  kvm: Fix typo in header of kvm_device_access()
  Fix cacheline detection on FreeBSD/powerpc.
  build: Don't ignore qapi-visit-core.c
  target/m68k/fpu_helper.c: rename the access arguments
  Replace '-machine accel=xyz' with '-accel xyz'
  cutils: Move size_to_str() from "qemu-common.h" to "qemu/cutils.h"
  vfio: fix a typo

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-20 13:58:04 +01:00
KONRAD Frederic
198d7003f1 target/m68k/fpu_helper.c: rename the access arguments
The "access" arguments clash with a macro under Windows with MinGW:
  CC      m68k-softmmu/target/m68k/fpu_helper.o
  target/m68k/fpu_helper.c: In function 'fmovem_predec':
  target/m68k/fpu_helper.c:405:56: error: macro "access" passed 4 arguments,
   but takes just 2
               size = access(env, addr, &env->fregs[i], ra);

So this renames them to access_fn.

Tested with:
 ./configure --target-list=m68k-softmmu
 make -j8

Signed-off-by: KONRAD Frederic <frederic.konrad@adacore.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <1568296920-29939-1-git-send-email-frederic.konrad@adacore.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
2019-09-19 12:12:19 +02:00
KONRAD Frederic
b3e8692918
gdbstub: riscv: fix the fflags registers
While debugging an application with GDB the following might happen:

(gdb) return
Make xxx return now? (y or n) y
Could not fetch register "fflags"; remote failure reply 'E14'

This is because riscv_gdb_get_fpu calls riscv_csrrw_debug with a wrong csr
number (8). It should use the csr_register_map in order to reach the
riscv_cpu_get_fflags callback.

Signed-off-by: KONRAD Frederic <frederic.konrad@adacore.com>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
2019-09-17 08:42:50 -07:00
Alistair Francis
bdce1a5c6d
target/riscv: Use TB_FLAGS_MSTATUS_FS for floating point
Use the TB_FLAGS_MSTATUS_FS macro when enabling floating point in the tb
flags.

Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
2019-09-17 08:42:50 -07:00
Alistair Francis
14115b91dd
target/riscv: Fix mstatus dirty mask
This is meant to mask off the hypervisor bits, but a typo caused it to
mask MPP instead.

Fixes: 1f0419cb04 ("target/riscv: Allow setting mstatus virtulisation bits")
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
2019-09-17 08:42:50 -07:00
Atish Patra
a9f37afab1
target/riscv: Use both register name and ABI name
Use both the generic register name and ABI name for the general purpose
registers and floating point registers.

Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
2019-09-17 08:42:50 -07:00
Bin Meng
df42fdd6cc
riscv: hmp: Add a command to show virtual memory mappings
This adds 'info mem' command for RISC-V, to show virtual memory
mappings that aids debugging.

Rather than showing every valid PTE, the command compacts the
output by merging all contiguous physical address mappings into
one block and only shows the merged block mapping details.

Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Acked-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
2019-09-17 08:42:43 -07:00
Bin Meng
ddf7813228
riscv: rv32: Root page table address can be larger than 32-bit
For RV32, the root page table's PPN has 22 bits, hence its address can be
wider than what a target_ulong is able to represent. Use hwaddr instead.
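
As a small illustration (hwaddr_sketch stands in for QEMU's 64-bit hwaddr):

  #include <stdint.h>

  typedef uint64_t hwaddr_sketch;

  /* Sv32: a 22-bit PPN shifted by the 12-bit page offset yields an address
   * of up to 34 bits, which does not fit in a 32-bit target_ulong. */
  static hwaddr_sketch sv32_root_table_addr(uint32_t satp_ppn)
  {
      return (hwaddr_sketch)(satp_ppn & 0x3FFFFFu) << 12;
  }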

Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
2019-09-17 08:42:43 -07:00
Alistair Francis
7f8dcfeb87
target/riscv: Update the Hypervisor CSRs to v0.4
Update the Hypervisor CSR addresses to match the v0.4 spec.

Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
2019-09-17 08:42:43 -07:00
Alistair Francis
b345b48078
target/riscv: Create function to test if FP is enabled
Let's create a function that tests if floating point support is
enabled. We can then protect all floating point operations based on if
they are enabled.

This patch so far doesn't change anything, it's just preparing for the
Hypervisor support for floating point operations.

Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Christophe de Dinechin <dinechin@redhat.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
2019-09-17 08:42:42 -07:00
Philippe Mathieu-Daudé
6591efb549
target/riscv/pmp: Convert qemu_log_mask(LOG_TRACE) to trace events
Use the always-compiled trace events, remove the now unused
RISCV_DEBUG_PMP definition.

Note that pmpaddr_csr_read() could previously do out-of-bounds accesses
when passed addr_index >= MAX_RISCV_PMPS.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
2019-09-17 08:42:42 -07:00
Philippe Mathieu-Daudé
0b84b6629d
target/riscv/pmp: Restrict priviledged PMP to system-mode emulation
The RISC-V Physical Memory Protection is restricted to privileged
modes. Restrict its compilation to QEMU system builds.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
2019-09-17 08:42:42 -07:00
Peter Maydell
f8c3db33a5 target/sparc: Switch to do_transaction_failed() hook
Switch the SPARC target from the old unassigned_access hook to the
new do_transaction_failed hook.

This will cause the "if transaction failed" code paths added in
the previous commits to become active if the access is to an
unassigned address. In particular we'll now handle bus errors
during page table walks correctly (generating a translation
error with the right kind of fault status).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-id: 20190801183012.17564-8-peter.maydell@linaro.org
2019-09-17 12:01:00 +01:00
Peter Maydell
9dffeec2e0 target/sparc: Remove unused ldl_phys from dump_mmu()
The dump_mmu() function does a ldl_phys() at the start, but
then never uses the value it loads at all. Remove the
unused code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-id: 20190801183012.17564-7-peter.maydell@linaro.org
2019-09-17 12:01:00 +01:00
Peter Maydell
d86a9ad33c target/sparc: Handle bus errors in mmu_probe()
Convert the mmu_probe() function to using address_space_ldl()
rather than ldl_phys(), so we can explicitly detect memory
transaction failures.

This makes no practical difference at the moment, because
ldl_phys() will return 0 on a transaction failure, and we
treat transaction failures and 0 PDEs identically. However
the spec says that MMU probe operations are supposed to
update the fault status registers, and if we ever implement
that we'll want to distinguish the difference. For the
moment, just add a TODO comment about the bug.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-id: 20190801183012.17564-6-peter.maydell@linaro.org
2019-09-17 12:01:00 +01:00
Peter Maydell
3c818dfcc2 target/sparc: Correctly handle bus errors in page table walks
Currently we use the ldl_phys() function to read page table entries.
With the unassigned_access hook in place, if these hit an unassigned
area of memory then the hook will cause us to wrongly generate
an exception with a fault address matching the address of the
page table entry.

Change to using address_space_ldl() so we can detect and correctly
handle bus errors and give them their correct behaviour of
causing a translation error with a suitable fault status register.

Note that this won't actually take effect until we switch
over to using the do_transaction_failed hook.
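
The pattern looks roughly like this fragment (the surrounding walker context
and the error path are placeholders; only the address_space_ldl()/MemTxResult
usage is the point):

  /* Requires QEMU's memory API headers (e.g. "exec/memory.h"). */
  static bool load_pte_checked(AddressSpace *as, hwaddr pte_addr, uint32_t *pte)
  {
      MemTxResult result;

      *pte = address_space_ldl(as, pte_addr, MEMTXATTRS_UNSPECIFIED, &result);
      return result == MEMTX_OK;   /* false => bus error => translation error */
  }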

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-id: 20190801183012.17564-5-peter.maydell@linaro.org
2019-09-17 12:01:00 +01:00
Peter Maydell
776095d3cd target/sparc: Check for transaction failures in MXCC stream ASI accesses
Currently the ld/st_asi helper functions make calls to the
ld*_phys() and st*_phys() functions for those ASIs which
imply direct accesses to physical addresses. These implicitly
rely on the unassigned_access hook to cause them to generate
an MMU fault if the access fails.

Switch to using the address_space_* functions instead, which
return a MemTxResult that we can check. This means that when
we switch SPARC over to using the do_transaction_failed hook
we'll still get the same MMU faults we did before.

This commit converts the ASIs which do MXCC stream source
and destination accesses.

It's not clear to me whether raising an MMU fault like this
is the correct behaviour if we encounter a bus error, but
we retain the same behaviour that the old unassigned_access
hook would implement.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-id: 20190801183012.17564-4-peter.maydell@linaro.org
2019-09-17 12:01:00 +01:00
Peter Maydell
b9f5fdad49 target/sparc: Check for transaction failures in MMU passthrough ASIs
Currently the ld/st_asi helper functions make calls to the
ld*_phys() and st*_phys() functions for those ASIs which
imply direct accesses to physical addresses. These implicitly
rely on the unassigned_access hook to cause them to generate
an MMU fault if the access fails.

Switch to using the address_space_* functions instead, which
return a MemTxResult that we can check. This means that when
we switch SPARC over to using the do_transaction_failed hook
we'll still get the same MMU faults we did before.

This commit converts the ASIs which do "MMU passthrough".

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-id: 20190801183012.17564-3-peter.maydell@linaro.org
2019-09-17 12:01:00 +01:00
Peter Maydell
c9d793f446 target/sparc: Factor out the body of sparc_cpu_unassigned_access()
Currently the SPARC target uses the old-style do_unassigned_access
hook.  We want to switch it over to do_transaction_failed, but to do
this we must first remove all the direct calls in ldst_helper.c to
cpu_unassigned_access().  Factor out the body of the hook function's
code into a new sparc_raise_mmu_fault() and call it from the hook and
from the various places that used to call cpu_unassigned_access().

In passing, this fixes a bug where the code that raised the
MMU exception was directly calling GETPC() from a function that
was several levels deep in the callstack from the original
helper function: the new sparc_raise_mmu_fault() instead takes
the return address as an argument.

Other than the use of retaddr rather than GETPC() and a comment
format fixup, the body of the new function has no changes from
that of the old hook function.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-id: 20190801183012.17564-2-peter.maydell@linaro.org
2019-09-17 12:01:00 +01:00
Peter Maydell
186c0ab9b9 * Fix Patchew CI failures (myself)
* i386 fw_cfg refactoring (Philippe)
 * pmem bugfix (Stefan)
 * Support for accessing cstate MSRs (Wanpeng)
 * exec.c cleanups (Wei Yang)
 * Improved throttling (Yury)
 * elf-ops.h coverity fix (Stefano)
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2.0.22 (GNU/Linux)
 
 iQEcBAABAgAGBQJdf6aIAAoJEL/70l94x66DEuEH/1UNNCRlpfCpviPYG4OIEIlG
 3Zu69X8DRJUgp29//9p+hwaUnFZjOIBLwvzUW+RVoqBrRmQgR9h8oxlsMbOD7XHV
 QwlseTC44/iatzldBckljqYP0kZQA48qbNokcaknifzmUvZJS9C3Z41/8BIQJpZ5
 MLi9TIjkU2Y/Kb3P8oxV0ic0jslJFz7Rf5Z4QT+iDGRXBFKQ6SxcboDD00h8RZO2
 BEtqDVUSM6dkH7Fa6k7T3S4DoEvwn2hHapoQxgu0YDMe2olNQBdRqzA/DRYbHh65
 ASr+eFLLRcqpiKgL5qPbLK1Ff7EMwWOlmlff3Px8Gj3h9U9k0FRi83xAHfGrfIc=
 =vjW5
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging

* Fix Patchew CI failures (myself)
* i386 fw_cfg refactoring (Philippe)
* pmem bugfix (Stefan)
* Support for accessing cstate MSRs (Wanpeng)
* exec.c cleanups (Wei Yang)
* Improved throttling (Yury)
* elf-ops.h coverity fix (Stefano)

# gpg: Signature made Mon 16 Sep 2019 16:13:12 BST
# gpg:                using RSA key BFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83

* remotes/bonzini/tags/for-upstream: (29 commits)
  hw/i386/pc: Extract the x86 generic fw_cfg code
  hw/i386/pc: Rename pc_build_feature_control() as generic fw_cfg_build_*
  hw/i386/pc: Let pc_build_feature_control() take a MachineState argument
  hw/i386/pc: Let pc_build_feature_control() take a FWCfgState argument
  hw/i386/pc: Rename pc_build_smbios() as generic fw_cfg_build_smbios()
  hw/i386/pc: Let pc_build_smbios() take a generic MachineState argument
  hw/i386/pc: Let pc_build_smbios() take a FWCfgState argument
  hw/i386/pc: Replace PCMachineState argument with MachineState in fw_cfg_arch_create
  hw/i386/pc: Pass the CPUArchIdList array by argument
  hw/i386/pc: Pass the apic_id_limit value by argument
  hw/i386/pc: Pass the boot_cpus value by argument
  hw/i386/pc: Rename bochs_bios_init as more generic fw_cfg_arch_create
  hw/i386/pc: Use address_space_memory in place
  hw/i386/pc: Extract e820 memory layout code
  hw/i386/pc: Use e820_get_num_entries() to access e820_entries
  cpus: Fix throttling during vm_stop
  qemu-thread: Add qemu_cond_timedwait
  memory: inline and optimize devend_memop
  memory: fetch pmem size in get_file_size()
  elf-ops.h: fix int overflow in load_elf()
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-17 10:20:17 +01:00
Philippe Mathieu-Daudé
d6d059ca07 hw/i386/pc: Extract e820 memory layout code
Suggested-by: Samuel Ortiz <sameo@linux.intel.com>
Reviewed-by: Li Qiang <liq3ea@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190818225414.22590-3-philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-09-16 17:13:07 +02:00
Peter Maydell
6f214b3044 Two fixes for temporaries live across a branch.
-----BEGIN PGP SIGNATURE-----
 
 iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAl1+QRYdHHJpY2hhcmQu
 aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV9xBwgAlSjGIAOWTEtg28zq
 hHfnEhmAN4NZIiVHaFoVDu+6UWdK/pMQwh7PJsvgiv3PmwNaxR/sP0dPkSr4wlhd
 noYqs2+2ghwh+Q81OJ2A9az6H5hyeEmA9raWDaIbOzVjAembicTytCQ2xxVBHqMe
 7FZi6720j99tY88xbhs7YiDnlM4IgGWLx57n9VXbF2tDRZb/LQZQU5OFVtCBVtOK
 S76qj0ydQ8zj83yl81ddDmYWj4XvY9yDD7KaDKfq1d78k7OMLUFuwiP1LYLRGu79
 ne5jZ85QVGlA4Wf0wCFdLOOUJeenikqzOA66Hb2H3zERUPTUCSVMEjjLbPQDHw5q
 xCiFdQ==
 =DZar
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/rth/tags/pull-hppa-20190915' into staging

Two fixes for temporaries live across a branch.

# gpg: Signature made Sun 15 Sep 2019 14:48:06 BST
# gpg:                using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg:                issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A  05C0 64DF 38E8 AF7E 215F

* remotes/rth/tags/pull-hppa-20190915:
  target/hppa: prevent trashing of temporary in do_depw_sar()
  target/hppa: prevent trashing of temporary in trans_mtctl()

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-16 13:21:28 +01:00
Wanpeng Li
d38d201f0e i386/kvm: support guest access CORE cstate
Allow the guest to read the CORE cstate when exposing host CPU power management
capabilities to the guest. The PKG cstate remains restricted so that a guest cannot
obtain whole-package information in a multi-tenant scenario.
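
As a usage sketch: host CPU power management is assumed here to be exposed
via QEMU's -overcommit switch; check your version's documentation before
relying on the exact spelling.

    # hedged example: expose host C-states to a KVM guest
    qemu-system-x86_64 -accel kvm -overcommit cpu-pm=on ...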

Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Message-Id: <1563154124-18579-1-git-send-email-wanpengli@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-09-16 12:32:20 +02:00
Sven Schnelle
a6deecce5b target/hppa: prevent trashing of temporary in do_depw_sar()
nullify_over() calls brcond which destroys all temporaries.
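
A hedged sketch of the pitfall, using generic TCG calls rather than the
actual hppa code (some_reg and ctx are placeholders):

    /* broken: tmp is read after the brcond emitted inside nullify_over(),
       but ordinary TCG temporaries do not survive branches */
    TCGv tmp = tcg_temp_new();
    tcg_gen_mov_tl(tmp, some_reg);
    nullify_over(ctx);               /* emits a brcond internally */
    tcg_gen_andi_tl(tmp, tmp, 31);   /* tmp may hold trash here */

    /* fixed: only create and fill the temporary after the conditional
       nullification has been emitted */
    nullify_over(ctx);
    TCGv tmp2 = tcg_temp_new();
    tcg_gen_mov_tl(tmp2, some_reg);
    tcg_gen_andi_tl(tmp2, tmp2, 31);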

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190913101714.29019-3-svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-14 15:39:24 -04:00
Sven Schnelle
4845f01518 target/hppa: prevent trashing of temporary in trans_mtctl()
nullify_over() calls brcond which destroys all temporaries.

Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190913101714.29019-2-svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-14 15:39:24 -04:00
Peter Maydell
138985c1ef MIPS queue for September 12th, 2019
-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQEcBAABAgAGBQJdenGiAAoJENSXKoln91plKOkIAIkLXa13c0JmZvNA4DjEOwS7
 FRDv/hdWVYALalzy+b51ppH/bZfOxe+5BZAxdMSCc84Tm9Jmqyerzp4PWkH2EeqG
 ChtUnkC2lZ6K3zAFIMIa8NhopayKbAMYV2w61J7u4Xk65xiH1M55DWjmwt70LiMW
 oUStum06paUadUUyZwNU3MTN1D9AHiezO6VQp9CCn1kvBf5u+bZSodcXsSo97YOF
 I4MLZlLuZ5sxCRvnMQfWlzykB8PDvIfH5/Dq/DXlkoJNS99vCVmNFE1dAX3NSqi8
 HQD3rkEY4TWTVD570oTtZCn+WBIgHbvbqojsmTlo3tFhWv2JEQdCKyz8+4isld8=
 =6NbW
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/amarkovic/tags/mips-queue-sep-12-2019' into staging

MIPS queue for September 12th, 2019

# gpg: Signature made Thu 12 Sep 2019 17:26:10 BST
# gpg:                using RSA key D4972A8967F75A65
# gpg: Good signature from "Aleksandar Markovic <amarkovic@wavecomp.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 8526 FBF1 5DA3 811F 4A01  DD75 D497 2A89 67F7 5A65

* remotes/amarkovic/tags/mips-queue-sep-12-2019:
  target/mips: gdbstub: Revert commit 8e0b373
  hw/mips/mips_jazz: Remove no-longer-necessary override of do_unassigned_access
  target/mips: Switch to do_transaction_failed() hook
  hw/mips/mips_jazz: Override do_transaction_failed hook

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-13 16:04:46 +01:00
Libo Zhou
d1cc153350 target/mips: gdbstub: Revert commit 8e0b373
Multiple user reports were received about failures of 'g' packet
communication with gdb for some MIPS configurations. Bisecting identified
commit 8e0b373 as the problematic change. Revert it until a better
solution is developed.

Suggested-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Libo Zhou <zhlb29@foxmail.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Message-Id: <1568207966-25202-1-git-send-email-aleksandar.markovic@rt-rk.com>
2019-09-12 18:25:34 +02:00
Peter Maydell
4f02a06d50 target/mips: Switch to do_transaction_failed() hook
Switch the MIPS target from the old unassigned_access hook to the new
do_transaction_failed hook.

Unlike the old hook, do_transaction_failed is only ever called from
the TCG memory access paths, so there is no need for the "ignore this
if we're using KVM" hack that we were previously using to work around
the way unassigned_access was called for all kinds of memory accesses
to unassigned physical addresses.

The MIPS target does not ever do direct memory reads by physical
address (via either ldl_phys etc or address_space_ldl etc), so the
only memory accesses this affects are the 'normal' guest loads and
stores, which will be handled by the new hook; their behaviour is
unchanged.
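
For reference, a hedged sketch of what such a hook looks like; the
signature follows the CPUClass do_transaction_failed prototype of this
era, but the body is a simplified reconstruction, not a quote of the patch:

    static void mips_cpu_do_transaction_failed(CPUState *cs, hwaddr physaddr,
                                               vaddr addr, unsigned size,
                                               MMUAccessType access_type,
                                               int mmu_idx, MemTxAttrs attrs,
                                               MemTxResult response,
                                               uintptr_t retaddr)
    {
        MIPSCPU *cpu = MIPS_CPU(cs);
        CPUMIPSState *env = &cpu->env;

        /* only reached from TCG load/store paths: raise the architectural
           bus error for the failed guest access */
        if (access_type == MMU_INST_FETCH) {
            do_raise_exception(env, EXCP_IBE, retaddr);
        } else {
            do_raise_exception(env, EXCP_DBE, retaddr);
        }
    }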

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Hervé Poussineau <hpoussin@reactos.org>
Message-Id: <20190802160458.25681-3-peter.maydell@linaro.org>
2019-09-12 18:25:34 +02:00
Max Filippov
130ea8322b target/xtensa: linux-user: add call0 ABI support
Xtensa binaries built for the call0 ABI don't rotate the register window on
function calls and returns. Invocation of signal handlers from the
kernel is therefore different in the windowed and call0 ABIs.
There's currently no way to determine an xtensa ELF binary's ABI from the
binary itself. Add a handler for the -xtensa-abi-call0 command line
parameter/QEMU_XTENSA_ABI_CALL0 environment variable to qemu-user and
record the ABI choice. Use it to initialize PS.WOE in xtensa_cpu_reset.
Check PS.WOE in setup_rt_frame to determine how a signal should be
delivered.
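
Hedged usage examples, built from the option and variable named above
(whether the environment variable needs a specific value is an
assumption; any non-empty value is used here):

    qemu-xtensa -xtensa-abi-call0 ./call0-binary
    QEMU_XTENSA_ABI_CALL0=1 qemu-xtensa ./call0-binary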

Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Message-Id: <20190906165713.5558-1-jcmvbkbc@gmail.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
2019-09-11 08:47:06 +02:00
Richard Henderson
eac2f39602 target/arm: Inline gen_bx_im into callers
There are only two remaining uses of gen_bx_im.  In each case, we
know the destination mode -- not changing in the case of gen_jmp
or changing in the case of trans_BLX_i.  Use this to simplify the
surrounding code.

For trans_BLX_i, use gen_jmp for the actual branch.  For gen_jmp,
use gen_set_pc_im to set up the single-step.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190904193059.26202-70-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-05 13:23:04 +01:00
Richard Henderson
0831403b08 target/arm: Clean up disas_thumb_insn
Now that everything is converted, remove the rest of
the legacy decode.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190904193059.26202-69-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-05 13:23:04 +01:00
Richard Henderson
67b54c554b target/arm: Convert T16, long branches
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190904193059.26202-68-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-05 13:23:04 +01:00
Richard Henderson
8d4a4dc849 target/arm: Convert T16, Unconditional branch
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190904193059.26202-67-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-05 13:23:04 +01:00
Richard Henderson
46beb58efb target/arm: Convert T16, load (literal)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190904193059.26202-66-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-05 13:23:04 +01:00
Richard Henderson
151c2f2841 target/arm: Convert T16, shift immediate
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190904193059.26202-65-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-05 13:23:04 +01:00
Richard Henderson
43f7e42c7d target/arm: Convert T16, Miscellaneous 16-bit instructions
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190904193059.26202-64-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-05 13:23:04 +01:00
Richard Henderson
629fcaa71c target/arm: Convert T16, Conditional branches, Supervisor call
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190904193059.26202-63-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-05 13:23:04 +01:00
Richard Henderson
564b125fb9 target/arm: Convert T16, push and pop
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190904193059.26202-62-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-05 13:23:04 +01:00
Richard Henderson
279de61a21 target/arm: Split gen_nop_hint
Now that all callers pass a constant value, split the switch
statement into the individual trans_* functions.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190904193059.26202-61-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-05 13:23:04 +01:00
Richard Henderson
56e6250ede target/arm: Convert T16, nop hints
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190904193059.26202-60-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-05 13:23:04 +01:00
Richard Henderson
ae3002b021 target/arm: Convert T16, Reverse bytes
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190904193059.26202-59-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-05 13:23:04 +01:00
Richard Henderson
20556e7bd6 target/arm: Convert T16, Change processor state
Add a check for ARMv6 in trans_CPS.  We had this correct in
the T16 path, but had previously forgotten the check on the
A32 and T32 paths.
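
A hedged sketch of the added guard; arm_dc_feature() is the translator's
real feature predicate, while the argument type and body here are
simplified placeholders:

    static bool trans_CPS(DisasContext *s, arg_CPS *a)
    {
        if (!arm_dc_feature(s, ARM_FEATURE_V6)) {
            return false;   /* CPS requires ARMv6 */
        }
        /* ... existing implementation ... */
        return true;
    }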

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190904193059.26202-58-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-05 13:23:04 +01:00
Richard Henderson
e6f69612cc target/arm: Convert T16, extract
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190904193059.26202-57-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-05 13:23:04 +01:00
Richard Henderson
2e6a646d7b target/arm: Convert T16 adjust sp (immediate)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190904193059.26202-56-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-05 13:23:04 +01:00
Richard Henderson
90aa042115 target/arm: Convert T16 add, compare, move (two high registers)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190904193059.26202-55-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-05 13:23:04 +01:00
Richard Henderson
a0ef077404 target/arm: Convert T16 branch and exchange
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190904193059.26202-54-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-05 13:23:04 +01:00
Richard Henderson
6c6d237a86 target/arm: Convert T16 one low register and immediate
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190904193059.26202-53-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-05 13:23:04 +01:00
Richard Henderson
c4d3095bb6 target/arm: Convert T16 add/sub (3 low, 2 low and imm)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190904193059.26202-52-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-05 13:23:04 +01:00