Commit Graph

7816 Commits

Philippe Mathieu-Daudé
083afd18a9 target/arm: Restrict cpu_exec_interrupt() handler to sysemu
Restrict cpu_exec_interrupt() and its callees to sysemu.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210911165434.531552-8-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14 12:00:21 -07:00
Philippe Mathieu-Daudé
9354e6947a target/alpha: Restrict cpu_exec_interrupt() handler to sysemu
Restrict cpu_exec_interrupt() and its callees to sysemu.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210911165434.531552-7-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14 12:00:21 -07:00
Philippe Mathieu-Daudé
120964219d accel/tcg: Rename user-mode do_interrupt hack as fake_user_interrupt
do_interrupt() is sysemu specific. However due to some X86
specific hack, it is also used in user-mode emulation, which
is why it couldn't be restricted to CONFIG_SOFTMMU (see the
comment added in commit 7827168471: "cpu: tcg_ops:
move to tcg-cpu-ops.h, keep a pointer in CPUClass").
Keep the hack but rename the handler as fake_user_interrupt()
and restrict do_interrupt() to sysemu.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210911165434.531552-6-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14 12:00:21 -07:00
Philippe Mathieu-Daudé
b40db05daa target/xtensa: Restrict do_transaction_failed() to sysemu
The do_transaction_failed() handler is restricted to system emulation since
commit cbc183d2d9 ("cpu: move cc->transaction_failed to tcg_ops").

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210911165434.531552-5-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14 12:00:20 -07:00
Philippe Mathieu-Daudé
30ca39244b target/i386: Simplify TARGET_X86_64 #ifdef'ry
Merge two consecutive TARGET_X86_64 blocks.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210911165434.531552-4-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14 12:00:20 -07:00
Philippe Mathieu-Daudé
7ce0886598 target/i386: Restrict sysemu-only fpu_helper helpers
Restrict some sysemu-only fpu_helper helpers (see commit
83a3d9c740: "i386: separate fpu_helper sysemu-only parts").

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210911165434.531552-3-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14 12:00:20 -07:00
Philippe Mathieu-Daudé
d2470cf0e9 target/avr: Remove pointless use of CONFIG_USER_ONLY definition
Commit f1c671f96c ("target/avr: Introduce basic CPU class object")
added to target/avr/cpu.h:

  #ifdef CONFIG_USER_ONLY
  #error "AVR 8-bit does not support user mode"
  #endif

Remove the CONFIG_USER_ONLY definition introduced by mistake in
commit 7827168471 ("cpu: tcg_ops: move to tcg-cpu-ops.h, keep a
pointer in CPUClass").

Reported-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-By: Warner Losh <imp@bsdimp.com>
Message-Id: <20210911165434.531552-2-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14 12:00:20 -07:00
Bin Meng
57d4941602 tcg: Remove tcg_global_reg_new defines
Since commit 1c2adb958f ("tcg: Initialize cpu_env generically"),
these tcg_global_reg_new_ macros are not used anywhere.

Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210816143507.11200-1-bmeng.cn@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14 12:00:20 -07:00
Ilya Leoshkevich
4e116893c6 accel/tcg: Add DisasContextBase argument to translator_ld*
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
[rth: Split out of a larger patch.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14 12:00:20 -07:00
Peter Maydell
c6f5e042d8 target-arm queue:
* mark MPS2/MPS3 board-internal i2c buses as 'full' so that command
    line user-created devices are not plugged into them
  * Take an exception if PSTATE.IL is set
  * Support an emulated ITS in the virt board
  * Add support for kudo-bmc board
  * Probe for KVM_CAP_ARM_VM_IPA_SIZE when creating scratch VM
  * cadence_uart: Fix clock handling issues that prevented
    u-boot from running
 -----BEGIN PGP SIGNATURE-----
 
 iQJNBAABCAA3FiEE4aXFk81BneKOgxXPPCUl7RQ2DN4FAmE/ruQZHHBldGVyLm1h
 eWRlbGxAbGluYXJvLm9yZwAKCRA8JSXtFDYM3krdD/sHLxbPua1IOA1+uxLJwRnr
 N7BZa0GVNX8+dKi3w3jtYHOyFG1u9NeOp/VI93I7G9k0vRvYT8eMN4cMWwsaG5rr
 PPjiLIFAIFwxV9QkafIONLxLYFfc6T48tstG6BYaJU2tLPwIlSZK4ZbKqrxWesAm
 mMw75AtESjYI77yQcsEXDflmcvbvM++IrqQAa190i2D8rizbbv/gqZtzJJpU2OGy
 My51t+g1SPPJvoih6edpURGmKH1vmB0UwadnOG3GFv76c9nYeVPXAtdXS+8Rs+vU
 QJpvJ0MSRc5ZztsltvXQefH4aseSHrZybpZGI0tNpZ1G2oRwZHIXEMDcZwtRHKlZ
 o5M6oeNOUZFRFrLM8FRv4ErIFhgMwWUghy+oVejCF791j1WeasDpFL+ZZTWUNYiP
 qmNdh6z7Dt7F1fxBxMiCw9PTRNB2zudyz/ZtymPGYEDj7leIpQ/HudRmaDKZ+zMG
 A8omXNEw1LFsVrTE5MjLT7tr2Eq+71V2m0OkDB+Tvmpl4AXVG9b7kCoOp6NiAXZd
 Y4Vdi5I8NN3OHK0yO1vMxOlNk7qo4BTqT7FYaSb1qaTZ/6TQtrWb7ThU989JJaQE
 28H1p8uezMDC8NsaEBa2eBsen6Uf45jYKxgUpG0jB9QuXtRY1xUdaU06fQlz4dpn
 7SyfLZbzeB0v+Bqd7z3Y9A==
 =7BH/
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20210913-3' into staging

target-arm queue:
 * mark MPS2/MPS3 board-internal i2c buses as 'full' so that command
   line user-created devices are not plugged into them
 * Take an exception if PSTATE.IL is set
 * Support an emulated ITS in the virt board
 * Add support for kudo-bmc board
 * Probe for KVM_CAP_ARM_VM_IPA_SIZE when creating scratch VM
 * cadence_uart: Fix clock handling issues that prevented
   u-boot from running

# gpg: Signature made Mon 13 Sep 2021 21:04:52 BST
# gpg:                using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg:                issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [ultimate]
# gpg:                 aka "Peter Maydell <pmaydell@gmail.com>" [ultimate]
# gpg:                 aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [ultimate]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83  15CF 3C25 25ED 1436 0CDE

* remotes/pmaydell/tags/pull-target-arm-20210913-3: (23 commits)
  hw/arm/mps2.c: Mark internal-only I2C buses as 'full'
  hw/arm/mps2-tz.c: Mark internal-only I2C buses as 'full'
  hw/arm/mps2-tz.c: Add extra data parameter to MakeDevFn
  qdev: Support marking individual buses as 'full'
  target/arm: Merge disas_a64_insn into aarch64_tr_translate_insn
  target/arm: Take an exception if PSTATE.IL is set
  tests/data/acpi/virt: Update IORT files for ITS
  hw/arm/virt: add ITS support in virt GIC
  tests/data/acpi/virt: Add IORT files for ITS
  hw/intc: GICv3 redistributor ITS processing
  hw/intc: GICv3 ITS Feature enablement
  hw/intc: GICv3 ITS Command processing
  hw/intc: GICv3 ITS command queue framework
  hw/intc: GICv3 ITS register definitions added
  hw/intc: GICv3 ITS initial framework
  hw/arm: Add support for kudo-bmc board.
  hw/arm/virt: KVM: Probe for KVM_CAP_ARM_VM_IPA_SIZE when creating scratch VM
  hw/char: cadence_uart: Log a guest error when device is unclocked or in reset
  hw/char: cadence_uart: Ignore access when unclocked or in reset for uart_{read, write}()
  hw/char: cadence_uart: Convert to memop_with_attrs() ops
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-13 21:06:15 +01:00
Richard Henderson
bc7edccae0 target/arm: Merge disas_a64_insn into aarch64_tr_translate_insn
It is confusing to have different exits from translation
for various conditions in separate functions.

Merge disas_a64_insn into its only caller.  Standardize
on the "s" name for the DisasContext, as the code from
disas_a64_insn had more instances.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210821195958.41312-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-13 21:01:08 +01:00
Peter Maydell
520d1621de target/arm: Take an exception if PSTATE.IL is set
In v8A, the PSTATE.IL bit is set for various kinds of illegal
exception return or mode-change attempts.  We already set PSTATE.IL
(or its AArch32 equivalent CPSR.IL) in all those cases, but we
weren't implementing the part of the behaviour where attempting to
execute an instruction with PSTATE.IL takes an immediate exception
with an appropriate syndrome value.

Add a new TB flags bit tracking PSTATE.IL/CPSR.IL, and generate code
to take an exception instead of whatever the instruction would have
been.

PSTATE.IL and CPSR.IL change only on exception entry, attempted
exception exit, and various AArch32 mode changes via cpsr_write().
These places generally already rebuild the hflags, so the only place
we need an extra rebuild_hflags call is in the illegal-return
codepath of the AArch64 exception_return helper.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210821195958.41312-2-richard.henderson@linaro.org
Message-Id: <20210817162118.24319-1-peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
[rth: Added missing returns; set IL bit in syndrome]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-13 21:01:08 +01:00
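
A minimal sketch of the mechanism described above: a PSTATE.IL copy
cached in the TB flags is consulted once per instruction, and when it
is set the translator raises an exception instead of decoding the
insn. The flag bit and function name below are invented for the
illustration, not the actual target/arm identifiers.

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define TBFLAG_PSTATE_IL (1u << 7)   /* hypothetical cached PSTATE.IL bit */

  /* Decision a hypothetical translator loop makes for every instruction. */
  static bool should_raise_illegal_state(uint32_t tb_flags)
  {
      /* If PSTATE.IL was set when the TB was built, do not decode the
       * instruction; emit code taking an illegal-state exception instead. */
      return tb_flags & TBFLAG_PSTATE_IL;
  }

  int main(void)
  {
      printf("IL clear: raise=%d\n", should_raise_illegal_state(0));
      printf("IL set:   raise=%d\n",
             should_raise_illegal_state(TBFLAG_PSTATE_IL));
      return 0;
  }
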
Shashi Mallela
0e5c1c9a23 hw/arm/virt: add ITS support in virt GIC
Include creation of the ITS as part of the virt platform GIC
initialization. The emulated ITS model now co-exists with the KVM
ITS and is enabled when in-kernel KVM IRQ support is absent on a
platform.

Signed-off-by: Shashi Mallela <shashi.mallela@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210910143951.92242-9-shashi.mallela@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-13 21:01:08 +01:00
Peter Maydell
85b4fa0cd1 linux-user: Don't include gdbstub.h in qemu.h
Currently the linux-user qemu.h pulls in gdbstub.h. There's no real reason
why it should do this; include it directly from the C files which require
it, and drop the include line in qemu.h.

(Note that several of the C files previously relying on this indirect
include were going out of their way to only include gdbstub.h conditionally
on not CONFIG_USER_ONLY!)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210908154405.15417-9-peter.maydell@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
2021-09-13 20:35:45 +02:00
Marc Zyngier
d26f2f93c1 hw/arm/virt: KVM: Probe for KVM_CAP_ARM_VM_IPA_SIZE when creating scratch VM
Although we probe for the IPA limits imposed by KVM (and the hardware)
when computing the memory map, we still use the old style '0' when
creating a scratch VM in kvm_arm_create_scratch_host_vcpu().

On systems that are severely IPA challenged (such as the Apple M1),
this results in a failure as KVM cannot use the default 40-bit IPA that
'0' represents.

Instead, probe for the extension and use the reported IPA limit
if available.

Cc: Andrew Jones <drjones@redhat.com>
Cc: Eric Auger <eric.auger@redhat.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Message-id: 20210822144441.1290891-2-maz@kernel.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-13 16:07:22 +01:00
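
The probing idea can be shown with the raw KVM ioctls rather than
QEMU's internal helpers; this is an illustrative sketch with minimal
error handling, and the fall-back of 0 ("use the default IPA size")
follows the commit message above.

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  int main(void)
  {
      int kvmfd = open("/dev/kvm", O_RDWR);
      if (kvmfd < 0) {
          perror("open /dev/kvm");
          return 1;
      }

      /* Ask how many IPA bits a VM on this host may use (0 if unsupported). */
      int ipa = ioctl(kvmfd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_VM_IPA_SIZE);
      if (ipa < 0) {
          ipa = 0;                    /* query failed: fall back to default */
      }

      /* 0 means "default IPA size"; otherwise encode the probed limit. */
      unsigned long type = ipa ? KVM_VM_TYPE_ARM_IPA_SIZE(ipa) : 0;
      int vmfd = ioctl(kvmfd, KVM_CREATE_VM, type);
      printf("IPA bits reported: %d, vmfd: %d\n", ipa, vmfd);
      return 0;
  }
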
Peter Maydell
7d79344d4f * Fixes for "-cpu max" on i386 TCG (Daniel)
* vVMLOAD/VMSAVE and vGIF implementation (Lara)
 * Reorganize i386 targets documentation in preparation for SGX (myself)
 * Meson cleanups (myself, Thomas)
 * NVMM fixes (Reinoud)
 * Suppress bogus -Wstringop-overflow (Richard)
 -----BEGIN PGP SIGNATURE-----
 
 iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmE/PHEUHHBib256aW5p
 QHJlZGhhdC5jb20ACgkQv/vSX3jHroNn0ggAhOApMUZR2L9p4Z56X+Nnc1835dOJ
 QlX8UmMpoRBPuIKfaJPJQWwYeRSw4Nqaik3EndXug8Mo3LJaG5AFEHTXDkZGHMgh
 tGCyeARhDnUQPfKLszT1zg0EMloX6bCLFaA9ba1JBNK8VWXE4oJJLETk3Q+pDJZt
 0ztoxaLvQ2jaMFfPKtLdyhcXjDCPeZZjaQjCFVVmWV9hj8z4np3LZLoYi8a6cRWu
 u1Rb5SrftF12tu+RWACXZFQSnxFkU+iVeoKhQB0vrh7UgV/HAAbZS8c2U46v/kM0
 H6UcuBPjrz3fF/9hHNdovb4HxyQAP2pEliBSG7tFzJ+TbnMQVcoxN5uJ2Q==
 =DBxg
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/bonzini-gitlab/tags/for-upstream' into staging

* Fixes for "-cpu max" on i386 TCG (Daniel)
* vVMLOAD/VMSAVE and vGIF implementation (Lara)
* Reorganize i386 targets documentation in preparation for SGX (myself)
* Meson cleanups (myself, Thomas)
* NVMM fixes (Reinoud)
* Suppress bogus -Wstringop-overflow (Richard)

# gpg: Signature made Mon 13 Sep 2021 12:56:33 BST
# gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg:                issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83

* remotes/bonzini-gitlab/tags/for-upstream: (21 commits)
  docs: link to archived Fedora code of conduct
  Fix nvmm_ram_block_added() function arguments
  Only check CONFIG_NVMM when NEED_CPU_H is defined
  util: Suppress -Wstringop-overflow in qemu_thread_start
  fw_cfg: add etc/msr_feature_control
  meson: remove dead variable
  meson: do not use python.full_path() unnecessarily
  meson: look up cp and dtrace with find_program()
  meson.build: Do not look for VNC-related libraries if have_system is not set
  docs/system: move x86 CPU configuration to a separate document
  docs/system: standardize man page sections to --- with overline
  docs: standardize directory index to --- with overline
  docs: standardize book titles to === with overline
  target/i386: Added vVMLOAD and vVMSAVE feature
  target/i386: Added changed priority check for VIRQ
  target/i386: Added ignore TPR check in ctl_has_irq
  target/i386: Added VGIF V_IRQ masking capability
  target/i386: Moved int_ctl into CPUX86State structure
  target/i386: Added VGIF feature
  target/i386: VMRUN and VMLOAD canonicalizations
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-13 13:33:21 +01:00
Reinoud Zandijk
8d4cd3dd8b Fix nvmm_ram_block_added() function arguments
A max_size parameter was added to the RAMBlockNotifier
ram_block_added function. Use max_size for preallocation
of HVA space.

Signed-off-by: Reinoud Zandijk <Reinoud@NetBSD.org>
Message-Id: <20210718134650.1191-3-reinoud@NetBSD.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13 13:56:26 +02:00
Lara Lazier
52fb8ad37a target/i386: Added vVMLOAD and vVMSAVE feature
The feature allows the VMSAVE and VMLOAD instructions to execute in guest mode without
causing a VMEXIT. (APM2 15.33.1)

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13 13:56:26 +02:00
Lara Lazier
7760bb069f target/i386: Added changed priority check for VIRQ
Writes to cr8 affect v_tpr. This could set or unset an interrupt
request as the priority might have changed.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13 13:56:26 +02:00
Lara Lazier
66a0201ba7 target/i386: Added ignore TPR check in ctl_has_irq
The APM2 states that if V_IGN_TPR is nonzero, the current
virtual interrupt ignores the (virtual) TPR.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13 13:56:26 +02:00
Lara Lazier
b67e2796a1 target/i386: Added VGIF V_IRQ masking capability
VGIF provides masking capability for when virtual interrupts
are taken. (APM2)

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13 13:56:26 +02:00
Lara Lazier
e3126a5c92 target/i386: Moved int_ctl into CPUX86State structure
Moved int_ctl into the CPUX86State structure.  It removes some
unnecessary stores and loads, and prepares for tracking the vIRQ
state even when it is masked due to vGIF.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13 13:56:26 +02:00
Lara Lazier
900eeca579 target/i386: Added VGIF feature
VGIF allows STGI and CLGI to execute in guest mode and control virtual
interrupts in guest mode.
When the VGIF feature is enabled then:
 * executing STGI in the guest sets bit 9 of the VMCB offset 60h.
 * executing CLGI in the guest clears bit 9 of the VMCB offset 60h.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210730070742.9674-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13 13:56:26 +02:00
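
The STGI/CLGI behaviour described above amounts to toggling a single
bit in the VMCB interrupt-control field; a small illustration (the
bit position follows the commit text, the helper names are made up):

  #include <stdint.h>
  #include <stdio.h>

  #define V_GIF_SHIFT 9
  #define V_GIF_MASK  (1u << V_GIF_SHIFT)   /* bit 9 of VMCB offset 60h */

  /* int_ctl stands in for the 32-bit interrupt-control field at 60h. */
  static uint32_t do_stgi(uint32_t int_ctl) { return int_ctl | V_GIF_MASK;  }
  static uint32_t do_clgi(uint32_t int_ctl) { return int_ctl & ~V_GIF_MASK; }

  int main(void)
  {
      uint32_t int_ctl = 0;
      int_ctl = do_stgi(int_ctl);   /* guest executed STGI: set V_GIF   */
      printf("after STGI: %#x\n", int_ctl);
      int_ctl = do_clgi(int_ctl);   /* guest executed CLGI: clear V_GIF */
      printf("after CLGI: %#x\n", int_ctl);
      return 0;
  }
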
Lara Lazier
97afb47e15 target/i386: VMRUN and VMLOAD canonicalizations
APM2 requires that VMRUN and VMLOAD canonicalize (sign extend to 63
from 48/57) all base addresses in the segment registers that have been
respectively loaded.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210804113058.45186-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13 13:56:26 +02:00
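
The canonicalization referred to above is plain sign extension of the
segment base from the 48-bit (or 57-bit with five-level paging)
virtual-address width; a self-contained sketch, not the actual SVM
helper code:

  #include <stdint.h>
  #include <stdio.h>

  /* Sign-extend addr from `bits` significant bits to 64 bits, i.e. copy
   * bit (bits - 1) into bits 63..bits.  Use 48 for 4-level paging and
   * 57 for 5-level paging. */
  static uint64_t canonicalize(uint64_t addr, int bits)
  {
      int shift = 64 - bits;
      return (uint64_t)((int64_t)(addr << shift) >> shift);
  }

  int main(void)
  {
      uint64_t base = 0x0000800000000000ull;    /* bit 47 set */
      printf("%#llx -> %#llx\n", (unsigned long long)base,
             (unsigned long long)canonicalize(base, 48));
      /* prints 0x800000000000 -> 0xffff800000000000 */
      return 0;
  }
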
Daniel P. Berrangé
69e3895f9d target/i386: add missing bits to CR4_RESERVED_MASK
Booting Fedora kernels with -cpu max hangs very early in boot. Disabling
the la57 CPUID bit fixes the problem. git bisect traced the regression to

  commit 213ff024a2 (HEAD, refs/bisect/bad)
  Author: Lara Lazier <laramglazier@gmail.com>
  Date:   Wed Jul 21 17:26:50 2021 +0200

    target/i386: Added consistency checks for CR4

    All MBZ bits in CR4 must be zero. (APM2 15.5)
    Added reserved bitmask and added checks in both
    helper_vmrun and helper_write_crN.

    Signed-off-by: Lara Lazier <laramglazier@gmail.com>
    Message-Id: <20210721152651.14683-2-laramglazier@gmail.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

In this commit CR4_RESERVED_MASK is missing CR4_LA57_MASK and
two others. Adding this lets Fedora kernels boot once again.

Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Message-Id: <20210831175033.175584-1-berrange@redhat.com>
[Removed VMXE/SMXE, matching the commit message. - Paolo]
Fixes: 213ff024a2 ("target/i386: Added consistency checks for CR4", 2021-07-22)
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13 13:56:18 +02:00
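
The failure mode is easy to reproduce with a toy version of the
check: if an architectural bit such as CR4.LA57 (bit 12) is left out
of the allowed set, any guest that enables it is refused. The bit
positions below follow the architecture; the mask composition is a
simplified assumption for the example.

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define CR4_PAE_MASK  (1u << 5)
  #define CR4_LA57_MASK (1u << 12)  /* 57-bit linear addresses (5-level paging) */

  /* A CR4 write is rejected when any bit outside `allowed` is set. */
  static bool cr4_write_ok(uint64_t new_cr4, uint64_t allowed)
  {
      return (new_cr4 & ~allowed) == 0;
  }

  int main(void)
  {
      uint64_t cr4 = CR4_PAE_MASK | CR4_LA57_MASK;

      /* Buggy mask: LA57 treated as reserved -> the write is refused. */
      printf("without LA57 allowed: %d\n", cr4_write_ok(cr4, CR4_PAE_MASK));
      /* Fixed mask: LA57 allowed -> the write succeeds. */
      printf("with LA57 allowed:    %d\n",
             cr4_write_ok(cr4, CR4_PAE_MASK | CR4_LA57_MASK));
      return 0;
  }
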
Peter Maydell
b5328172a9 target/sparc: Drop use of gen_io_end()
The gen_io_end() function is obsolete (as documented in
docs/devel/tcg-icount.rst). Where an instruction is an I/O
operation, the translator frontend should call gen_io_start()
before generating the code which does the I/O, and then
end the TB immediately after this insn.

Remove the calls to gen_io_end() in the SPARC frontend,
and ensure that the insns which were calling it end the
TB if they didn't do so already.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210724134902.7785-2-peter.maydell@linaro.org>
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
2021-09-08 11:09:45 +01:00
Christian Borntraeger
30e398f796 s390x/cpumodel: Add more feature to gen16 default model
Add the new gen16 features to the default model and fence them for
machine version 6.1 and earlier.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210907101017.27126-1-borntraeger@de.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2021-09-07 13:36:43 +02:00
David Hildenbrand
c35622387e hw/s390x/s390-skeys: lazy storage key enablement under TCG
Let's enable storage keys lazily under TCG, just as we do under KVM.
Only fairly old Linux versions actually make use of storage keys, so it
can be kind of wasteful to allocate quite some memory and track
changes and references if nobody cares.

We have to make sure to flush the TLB when enabling storage keys after
the VM was already running: otherwise it might happen that we don't
catch references or modifications afterwards.

Add proper documentation to all callbacks.

The kvm-unit-tests skey tests keeps on working with this change.

Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210903155514.44772-14-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2021-09-06 16:24:05 +02:00
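
A toy model of the lazy-enablement pattern described above: the key
array is only allocated on first use, and if the VM is already
running the TLB is flushed so later references and changes are not
missed. tlb_flush() here is a local stand-in, not the real QEMU call.

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  static uint8_t *skeys;             /* storage key per page, unallocated */
  static bool vm_running = true;

  static void tlb_flush(void) { puts("tlb flushed"); }

  static uint8_t *get_skeys(size_t npages)
  {
      if (!skeys) {
          skeys = calloc(npages, 1);          /* enable keys on first use */
          if (vm_running) {
              tlb_flush();   /* drop cached "no keys in use" fast paths */
          }
      }
      return skeys;
  }

  int main(void)
  {
      get_skeys(1024)[3] = 0x20;    /* first access allocates and flushes */
      printf("key[3] = %#x\n", get_skeys(1024)[3]);
      return 0;
  }
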
David Hildenbrand
380ac2bcce s390x/mmu_helper: avoid setting the storage key if nothing changed
Avoid setting the key if nothing changed.

Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210903155514.44772-9-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2021-09-06 16:24:05 +02:00
David Hildenbrand
390191c6f6 s390x/mmu_helper: move address validation into mmu_translate*()
Let's move address validation into mmu_translate() and
mmu_translate_real(). This allows for checking whether an absolute
address is valid before looking up the storage key. We can now get rid of
the ram_size check.

Interestingly, we're already handling LOAD REAL ADDRESS wrong, because
a) We're not supposed to touch storage keys
b) We're not supposed to convert to an absolute address

Let's use a fake, negative MMUAccessType to teach mmu_translate() to
fix that handling and to not perform address validation.

Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210903155514.44772-8-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2021-09-06 16:24:05 +02:00
David Hildenbrand
e0b11f2df1 s390x/mmu_helper: fixup mmu_translate() documentation
Looks like we forgot to adjust documentation of one parameter.

Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210903155514.44772-7-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2021-09-06 16:24:05 +02:00
David Hildenbrand
e039992f9a s390x/mmu_helper: no need to pass access type to mmu_translate_asce()
The access type is unused since commit 81d7e3bc45 ("s390x/mmu: Inject
DAT exceptions from a single place").

Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210903155514.44772-6-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2021-09-06 16:24:05 +02:00
David Hildenbrand
eaa0feea75 s390x/tcg: check for addressing exceptions for RRBE, SSKE and ISKE
Let's replace the ram_size check by a proper physical address space
check (for example, to prepare for memory hotplug), trigger addressing
exceptions and trace the return value of the storage key getter/setter.

Provide a helper mmu_absolute_addr_valid() to be used in other contexts
soon. Always test for "read" instead of "write" as we are not actually
modifying the page itself.

Signed-off-by: David Hildenbrand <david@redhat.com>
Acked-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210903155514.44772-5-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2021-09-06 16:24:05 +02:00
David Hildenbrand
06d8a10a70 s390x/tcg: convert real to absolute address for RRBE, SSKE and ISKE
For RRBE, SSKE, and ISKE, we're dealing with real addresses, so we have to
convert to an absolute address first.

In the future, when adding EDAT1 support, we'll have to pay attention to
SSKE handling, as we'll be dealing with absolute addresses when the
multiple-block control is one.

Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210903155514.44772-4-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2021-09-06 16:24:05 +02:00
David Hildenbrand
fe00c705fe s390x/tcg: fix ignoring bit 63 when setting the storage key in SSKE
Right now we could set an 8-bit storage key via SSKE and retrieve it
again via ISKE, which is against the architecture description:

SSKE:
"
The new seven-bit storage-key value, or selected bits
thereof, is obtained from bit positions 56-62 of general
register R1. The contents of bit positions 0-55 and 63
of the register are ignored.
"

ISKE:
"
The seven-bit storage key is inserted in bit positions
56-62 of general register R1, and bit 63 is set to zero.
"

Let's properly ignore bit 63 to create the correct seven-bit storage key.

Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210903155514.44772-3-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2021-09-06 16:24:05 +02:00
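
In little-endian bit numbering, the register's big-endian bits 56-62
are bits 7..1 of its low byte and big-endian bit 63 is bit 0, so
extracting the seven-bit key is a single mask. A toy round trip,
assuming the stored key byte keeps the same layout as the register's
low byte:

  #include <stdint.h>
  #include <stdio.h>

  /* SSKE: take the new key from GR bits 56-62 and ignore bit 63,
   * i.e. keep bits 7..1 of the low byte and clear bit 0. */
  static uint8_t sske_key_from_reg(uint64_t r1)
  {
      return (uint8_t)(r1 & 0xfe);
  }

  /* ISKE: the seven-bit key goes to bits 56-62 with bit 63 zero,
   * which is exactly the masked byte again. */
  static uint64_t iske_reg_from_key(uint8_t key)
  {
      return key & 0xfe;
  }

  int main(void)
  {
      uint64_t r1 = 0xab;                       /* low bit (BE bit 63) set */
      uint8_t key = sske_key_from_reg(r1);
      printf("stored key:  %#x\n", key);        /* 0xaa: bit 63 ignored */
      printf("ISKE result: %#llx\n",
             (unsigned long long)iske_reg_from_key(key));
      return 0;
  }
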
David Hildenbrand
634a0b51cb s390x/tcg: wrap address for RRBE
Let's wrap the address just like for SSKE and ISKE.

Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210903155514.44772-2-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2021-09-06 16:24:04 +02:00
David Hildenbrand
0dd05d0606 s390x/ioinst: Fix wrong MSCH alignment check on little endian
schib->pmcw.chars is 32-bit, not 16-bit. This fixes the kvm-unit-tests
"css" test, which fails with:

  FAIL: Channel Subsystem: measurement block format1: Unaligned MB origin:
  Program interrupt: expected(21) == received(0)

Because we end up not injecting an operand program exception.

Fixes: a54b8ac340 ("css: SCHIB measurement block origin must be aligned")
Cc: Halil Pasic <pasic@linux.ibm.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Thomas Huth <thuth@redhat.com>
Cc: Pierre Morel <pmorel@linux.ibm.com>
Cc: qemu-s390x@nongnu.org
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Halil Pasic <pasic@linux.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Pierre Morel <pmorel@linux.ibm.com>
Message-Id: <20210805143753.86520-1-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2021-09-06 16:23:22 +02:00
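
The bug class is easy to see in isolation: byte-swapping a 32-bit
big-endian field as if it were 16 bits drops the half that holds the
flag being tested. The byte values below are arbitrary and only stand
in for schib->pmcw.chars.

  #include <stdint.h>
  #include <stdio.h>

  /* Hand-written big-endian loads so the example is host-independent. */
  static uint16_t be16_load(const uint8_t *p) { return (p[0] << 8) | p[1]; }
  static uint32_t be32_load(const uint8_t *p)
  {
      return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
           | ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
  }

  int main(void)
  {
      /* A 32-bit big-endian field whose interesting flag is in the low half. */
      uint8_t chars[4] = { 0x00, 0x00, 0x40, 0x00 };

      printf("read as 16-bit: %#x\n", be16_load(chars));   /* 0: flag lost */
      printf("read as 32-bit: %#x\n", be32_load(chars));   /* 0x4000      */
      return 0;
  }
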
David Hildenbrand
6b01606f0e s390x/tcg: fix and optimize SPX (SET PREFIX)
We not only invalidate the translation of the range 0x0-0x2000, we also
invalidate the translation of the new prefix range and the translation
of the old prefix range -- because real2abs would return different
results for all of these ranges when changing the prefix location.

This fixes the kvm-unit-tests "edat" test that just hangs before this
patch because we end up clearing the new prefix area instead of the old
prefix area.

While at it, let's not do anything in case the prefix doesn't change.

Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: David Hildenbrand <david@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Thomas Huth <thuth@redhat.com>
Cc: Claudio Imbrenda <imbrenda@linux.ibm.com>
Cc: qemu-s390x@nongnu.org
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20210805125938.74034-1-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2021-09-06 16:23:16 +02:00
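
A simplified sketch of the real-to-absolute conversion the commit
refers to, showing why the low 8KiB, the old prefix range and the new
prefix range all translate differently once the prefix moves:

  #include <stdint.h>
  #include <stdio.h>

  /* s390x low-core prefixing: real addresses 0-0x1fff map to the prefix
   * area and vice versa; everything else is identity-mapped. */
  static uint64_t real2abs(uint64_t real, uint64_t prefix)
  {
      if (real < 0x2000) {
          return real + prefix;
      } else if (real >= prefix && real < prefix + 0x2000) {
          return real - prefix;
      }
      return real;
  }

  int main(void)
  {
      uint64_t old_prefix = 0x10000, new_prefix = 0x30000;

      printf("real 0x1000:  %#llx -> %#llx\n",
             (unsigned long long)real2abs(0x1000, old_prefix),
             (unsigned long long)real2abs(0x1000, new_prefix));
      printf("real 0x30000: %#llx -> %#llx\n",
             (unsigned long long)real2abs(0x30000, old_prefix),
             (unsigned long long)real2abs(0x30000, new_prefix));
      return 0;
  }
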
Shuuichirou Ishii
e31c70ac04 target-arm: Add support for Fujitsu A64FX
Add a definition for the Fujitsu A64FX processor.

The A64FX processor does not implement the AArch32 Execution state,
so there are no associated AArch32 Identification registers.

For SVE, the A64FX processor supports only 128-, 256- and 512-bit vector
lengths.

The Identification register values are defined based on the FX700,
and have been tested and confirmed.

Signed-off-by: Shuuichirou Ishii <ishii.shuuichir@fujitsu.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-01 11:08:18 +01:00
Peter Maydell
d4cc1c2196 target/arm: Enable MVE in Cortex-M55
We now have a complete MVE emulation, so we can enable it in our
Cortex-M55 model by setting the ID registers to match those of a
Cortex-M55 with full MVE support.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 11:08:18 +01:00
Peter Maydell
98e40fbd79 target/arm: Implement MVE VRINT insns
Implement the MVE VRINT insns, which round floating point inputs
to integer values, leaving them in floating point format.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 11:08:17 +01:00
Peter Maydell
73d260db3c target/arm: Implement MVE VCVT between single and half precision
Implement the MVE VCVT instruction which converts between single
and half precision floating point.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 11:08:17 +01:00
Peter Maydell
53fc5f6139 target/arm: Implement MVE VCVT with specified rounding mode
Implement the MVE VCVT which converts from floating-point to integer
using a rounding mode specified by the instruction.  We implement
this similarly to the Neon equivalents, by passing the required
rounding mode as an extra integer parameter to the helper functions.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 11:08:17 +01:00
Peter Maydell
2ec0dcf034 target/arm: Implement MVE VCVT between fp and integer
Implement the MVE "VCVT (between floating-point and integer)" insn.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 11:08:17 +01:00
Peter Maydell
2a4b939cf8 target/arm: Implement MVE VCVT between floating and fixed point
Implement the MVE VCVT insns which convert between floating and fixed
point.  As with the Neon equivalents, these use essentially the same
constant encoding as right-shift-by-immediate.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 11:08:17 +01:00
Peter Maydell
c2d8f6bb28 target/arm: Implement MVE fp scalar comparisons
Implement the MVE fp scalar comparisons VCMP and VPT.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 11:08:17 +01:00
Peter Maydell
c87fe6d28c target/arm: Implement MVE fp vector comparisons
Implement the MVE fp vector comparisons VCMP and VPT.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 11:08:17 +01:00
Peter Maydell
29f80e7d83 target/arm: Implement MVE FP max/min across vector
Implement the MVE VMAXNMV, VMINNMV, VMAXNMAV, VMINNMAV insns.  These
calculate the maximum or minimum of floating point elements across a
vector, starting with a value in a general purpose register and
returning the result there.

The pseudocode silences a possible SNaN in the accumulating result
on every iteration (by calling FPConvertNaN), but we do it only
on the input ra, because if none of the inputs to float*_maxnum
or float*_minnum are SNaNs then the result can't be an SNaN.

Note that we can't use the float*_maxnuma() etc functions we defined
earlier for VMAXNMA and VMINNMA, because we mustn't take the absolute
value of the starting general-purpose register value, which could be
negative.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 11:08:17 +01:00
Peter Maydell
4773e74e5f target/arm: Implement MVE fp-with-scalar VFMA, VFMAS
Implement the MVE fp-with-scalar VFMA and VFMAS insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 11:08:17 +01:00
Peter Maydell
abfe39b263 target/arm: Implement MVE scalar fp insns
Implement the MVE scalar floating point insns VADD, VSUB and VMUL.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 11:08:16 +01:00
Peter Maydell
90257a4f35 target/arm: Implement MVE VMAXNMA and VMINNMA
Implement the MVE VMAXNMA and VMINNMA insns; these are 2-operand, but
the destination register must be the same as one of the source
registers.

We defer the decode of the size in bit 28 to the individual insn
patterns rather than doing it in the format, because otherwise we
would have a single insn pattern that overlapped with two groups (eg
VMAXNMA with the VMULH_S and VMULH_U groups). Having two insn
patterns per insn seems clearer than a complex multilevel nesting
of overlapping and non-overlapping groups.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-01 11:08:16 +01:00
Peter Maydell
d3cd965c84 target/arm: Implement MVE VCMUL and VCMLA
Implement the MVE VCMUL and VCMLA insns.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-01 11:08:16 +01:00
Peter Maydell
3173c0dd93 target/arm: Implement MVE VFMA and VFMS
Implement the MVE VFMA and VFMS insns.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-01 11:08:16 +01:00
Peter Maydell
104afc68cf target/arm: Implement MVE VCADD
Implement the MVE VCADD insn.  Note that here the size bit is the
opposite sense to the other 2-operand fp insns.

We don't check for the sz == 1 && Qd == Qm UNPREDICTABLE case,
because that would mean we can't use the DO_2OP_FP macro in
translate-mve.c.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-01 11:08:16 +01:00
Peter Maydell
82af0153d3 target/arm: Implement MVE VSUB, VMUL, VABD, VMAXNM, VMINNM
Implement more simple 2-operand floating point MVE insns.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-01 11:08:16 +01:00
Peter Maydell
1e35cd9166 target/arm: Implement MVE VADD (floating-point)
Implement the MVE VADD (floating-point) insn.  Handling of this is
similar to the 2-operand integer insns, except that we must take care
to only update the floating point exception status if the least
significant bit of the predicate mask for each element is active.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-01 11:08:16 +01:00
Richard Henderson
8e034ae44d target/riscv: Use {get,dest}_gpr for RVV
Remove gen_get_gpr, as the function becomes unused.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210823195529.560295-25-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
Richard Henderson
f33960df5b target/riscv: Tidy trans_rvh.c.inc
Exit early if check_access fails.
Split out do_hlv, do_hsv, do_hlvx subroutines.
Use dest_gpr, get_gpr in the new subroutines.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210823195529.560295-24-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
Richard Henderson
7976837f9a target/riscv: Use {get,dest}_gpr for RVD
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210823195529.560295-23-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
Richard Henderson
75234a2843 target/riscv: Use {get,dest}_gpr for RVF
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210823195529.560295-22-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
Richard Henderson
6922eee6ac target/riscv: Use gen_shift_imm_fn for slli_uw
Always use tcg_gen_deposit_z_tl; the special case for
shamt >= 32 is handled there.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210823195529.560295-21-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
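
A single deposit covers every shift amount because slli.uw is just
"zero-extend the low 32 bits, then shift", which stays well defined
for shamt up to 63. A plain-C reference, independent of the TCG ops:

  #include <stdint.h>
  #include <stdio.h>

  /* slli.uw rd, rs1, shamt: zero-extend rs1[31:0] to 64 bits, shift left. */
  static uint64_t slli_uw(uint64_t rs1, unsigned shamt)
  {
      return (uint64_t)(uint32_t)rs1 << shamt;     /* shamt in 0..63 */
  }

  int main(void)
  {
      printf("%#llx\n", (unsigned long long)slli_uw(0xffffffff00000001ull, 4));
      /* 0x10: the upper 32 bits of rs1 never participate */
      printf("%#llx\n", (unsigned long long)slli_uw(0x00000000800000ffull, 40));
      /* 0xff0000000000: shamt >= 32 needs no special casing */
      return 0;
  }
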
Richard Henderson
cce762a75e target/riscv: Use {get,dest}_gpr for RVA
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210823195529.560295-20-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
Richard Henderson
a974879b45 target/riscv: Reorg csr instructions
Introduce csrr and csrw helpers, for read-only and write-only insns.

Note that we do not properly implement this in riscv_csrrw, in that
we cannot distinguish true read-only (rs1 == 0) from a zero
write_mask produced by another source register -- the latter should
still raise an exception for read-only registers.

Only issue gen_io_start for CF_USE_ICOUNT.
Use ctx->zero for csrrc.
Use get_gpr and dest_gpr.

Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210823195529.560295-19-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
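
A toy model of the limitation noted above, assuming csrrs-style
semantics: rs1 == x0 means "no write at all", while a source register
that merely contains zero still counts as a write attempt -- a
distinction that is lost once only the zero write_mask reaches the
CSR layer.

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  /* csrrs rd, csr, rs1: set the CSR bits selected by rs1's value.
   * Whether the access is read-only depends on the register *number*,
   * not on the value (and hence not on the resulting write_mask). */
  static bool csrrs_is_read_only(unsigned rs1_num, uint64_t write_mask)
  {
      (void)write_mask;          /* a zero mask alone cannot tell us */
      return rs1_num == 0;
  }

  int main(void)
  {
      /* rs1 == x0: truly read-only, must not trap on a read-only CSR. */
      printf("x0 source:        read-only=%d\n", csrrs_is_read_only(0, 0));
      /* rs1 == x5 holding 0: zero write_mask, but still a write attempt. */
      printf("zero value in x5: read-only=%d\n", csrrs_is_read_only(5, 0));
      return 0;
  }
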
Richard Henderson
377cbb4bdb target/riscv: Fix hgeie, hgeip
We failed to write into *val for these read functions;
replace them with read_zero.  Only warn about unsupported
non-zero value when writing a non-zero value.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210823195529.560295-18-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
Richard Henderson
33979526ca target/riscv: Fix rmw_sip, rmw_vsip, rmw_hsip vs write-only operation
We distinguish write-only by passing ret_value as NULL.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210823195529.560295-17-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
Richard Henderson
6ecf39e2dd target/riscv: Use {get, dest}_gpr for integer load/store
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210823195529.560295-16-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
Richard Henderson
9b21b64345 target/riscv: Use get_gpr in branches
Narrow the scope of t0 in trans_jalr.

Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210823195529.560295-15-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
Richard Henderson
23c1088689 target/riscv: Use extracts for sraiw and srliw
These operations can be done in one instruction on some hosts.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210823195529.560295-14-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
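
Both instructions reduce to a single bit-field extract of
rs1[31:shamt], signed for sraiw and unsigned for srliw. A
self-contained check of that equivalence, with local re-creations of
QEMU's extract64()/sextract64() helpers:

  #include <assert.h>
  #include <stdint.h>
  #include <stdio.h>

  static uint64_t extract64(uint64_t x, int pos, int len)
  {
      return (x >> pos) & (~0ull >> (64 - len));
  }
  static int64_t sextract64(uint64_t x, int pos, int len)
  {
      return (int64_t)(x << (64 - pos - len)) >> (64 - len);
  }

  int main(void)
  {
      uint64_t rs1 = 0xdeadbeef80000042ull;

      for (int shamt = 1; shamt < 32; shamt++) {
          /* Reference semantics: operate on rs1[31:0], then extend. */
          int64_t  sraiw_ref = (int32_t)rs1 >> shamt;
          uint64_t srliw_ref = (uint32_t)rs1 >> shamt;

          assert(sraiw_ref == sextract64(rs1, shamt, 32 - shamt));
          assert(srliw_ref == extract64(rs1, shamt, 32 - shamt));
      }
      printf("sraiw/srliw match a single signed/unsigned extract\n");
      return 0;
  }
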
Richard Henderson
89c883091f target/riscv: Use DisasExtend in shift operations
These operations are greatly simplified by ctx->w, which allows
us to fold gen_shiftw into gen_shift.  Split gen_shifti into
gen_shift_imm_{fn,tl} like we do for gen_arith_imm_{fn,tl}.

Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210823195529.560295-13-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
Richard Henderson
6090391505 target/riscv: Add DisasExtend to gen_unary
Use ctx->w for ctpopw, which is the only one that can
re-use the generic algorithm for the narrow operation.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210823195529.560295-12-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
Richard Henderson
f84ed8c2df target/riscv: Move gen_* helpers for RVB
Move these helpers near their use by the trans_*
functions within insn_trans/trans_rvb.c.inc.

Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210823195529.560295-11-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
Richard Henderson
b66a0585f0 target/riscv: Move gen_* helpers for RVM
Move these helpers near their use by the trans_*
functions within insn_trans/trans_rvm.c.inc.

Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210823195529.560295-10-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
Richard Henderson
8a1b4917c5 target/riscv: Use gen_arith for mulh and mulhu
Split out gen_mulh and gen_mulhu and use the common helper.

Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210823195529.560295-9-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
Richard Henderson
afbbec8201 target/riscv: Remove gen_arith_div*
Use ctx->w and the enhanced gen_arith function.

Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210823195529.560295-8-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
Richard Henderson
191d1dafae target/riscv: Add DisasExtend to gen_arith*
Most arithmetic does not require extending the inputs.
Exceptions include division, comparison and minmax.

Begin using ctx->w, which allows elimination of gen_addw,
gen_subw, gen_mulw.

Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210823195529.560295-7-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
Richard Henderson
ecda15d137 target/riscv: Introduce DisasExtend and new helpers
Introduce get_gpr, dest_gpr, temp_new -- new helpers that do not force
tcg globals into temps, returning a constant 0 for $zero as source and
a new temp for $zero as destination.

Introduce ctx->w for simplifying word operations, such as addw.

Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210823195529.560295-6-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
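
A toy register file shows the idea behind the new helpers: reads of
x0 produce a constant zero and results destined for x0 go to a
discarded scratch slot. The names are borrowed from the commit, but
the types are simplified stand-ins rather than TCG values.

  #include <stdint.h>
  #include <stdio.h>

  static uint64_t gpr[32];     /* stand-in for the cpu_gpr[] TCG globals */
  static uint64_t scratch;     /* stand-in for a throw-away temporary    */

  static uint64_t get_gpr(int reg)
  {
      return reg == 0 ? 0 : gpr[reg];          /* x0 always reads as 0 */
  }

  static uint64_t *dest_gpr(int reg)
  {
      return reg == 0 ? &scratch : &gpr[reg];  /* writes to x0 are dropped */
  }

  int main(void)
  {
      gpr[5] = 40;
      *dest_gpr(3) = get_gpr(5) + 2;    /* addi x3, x5, 2 */
      *dest_gpr(0) = 123;               /* result silently discarded */
      printf("x3=%llu x0=%llu\n",
             (unsigned long long)gpr[3], (unsigned long long)get_gpr(0));
      return 0;
  }
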
Richard Henderson
867c81968a target/riscv: Add DisasContext to gen_get_gpr, gen_set_gpr
We will require the context to handle RV64 word operations.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210823195529.560295-5-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
Richard Henderson
4a083b563a target/riscv: Clean up division helpers
Utilize the condition in the movcond more; this allows some of
the setcond that were feeding into movcond to be removed.
Do not write into source1 and source2.  Re-name "condN" to "tempN"
and use the temporaries for more than holding conditions.

Tested-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210823195529.560295-4-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
Richard Henderson
05b80ed0a1 target/riscv: Use tcg_constant_*
Replace uses of tcg_const_*, where the allocate and free are close
together, with tcg_constant_*.

Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210823195529.560295-2-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
LIU Zhiwei
42109837b5 target/riscv: Add User CSRs read-only check
For U-mode CSRs, a read-only check is also needed.

Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-id: 20210810014552.4884-1-zhiwei_liu@c-sky.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
LIU Zhiwei
a8b37120d4 target/riscv: Don't wrongly override isa version
For some CPUs, the ISA version has already been set in the cpu init
function. Thus only override the ISA version when it is not yet set,
or when the user explicitly sets a different ISA version via CPU
parameters.

Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-id: 20210811144612.68674-1-zhiwei_liu@c-sky.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
Bin Meng
65e728a28a target/riscv: Correct a comment in riscv_csrrw()
When the privilege check fails, RISCV_EXCP_ILLEGAL_INST is returned,
not -1 (RISCV_EXCP_NONE).

Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210807141025.31808-1-bmeng.cn@gmail.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
Peter Maydell
ad22d05833 ppc patch queue 2021-08-27
First ppc pull request for qemu-6.2.  As usual, there's a fair bit
 here, since it's been queued during the 6.1 freeze.  Highlights are:
 
  * Some fixes for 128 bit arithmetic and some vector opcodes that use
    them
  * Significant improvements to the powernv to support POWER10 cpus
    (more to come though)
  * Several cleanups to the ppc softmmu code
  * A few other assorted fixes
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEdfRlhq5hpmzETofcbDjKyiDZs5IFAmEoj5gACgkQbDjKyiDZ
 s5JFPw/+JOmi1G6eY3u/kYJ8TJhe65s6TJDQhGQiQSBoBShRBJ1+bro3fPGA8pkT
 48NAb9RnTnLqys+vhScF7qt2wIxXJFVoVyMhAj2Xv11VQzDPpbLGg6+2Qt7WFraQ
 zyeEKBQQTV29RtV7UBUEmx4ZGmnoc0cmzl3QGO3Jq17ucOHNTSW19QpxU60wClU1
 PZIUDoWdt7FBS8lvj/55736H3z6ZRnBqZtW9m64ln+CBQuuKo5UkAkaooaJhEFJx
 OUZYeo+zky8YaYSWwTFGIxBYhwptnAWCsqkzeJUxPw1ICAzwj/kQX7ckVhbgTpbE
 CADpgkATXTbQzLFipzxJ45UMP0yMsk5IOPZ6FS9G+JfsP2T92RMwy7XhqPfWCoov
 WKqX/xpmGTnJONuQ7SO/bWUyPH4K7hYgSPPlLAcwDYCg4szWRIbTCs9Yr9rzAPhk
 KqKUGLb7D7Rbi1ulSC2ieqsTqVmp6plfnjxR2gPcbp0FltqGln6tVZEHEyPjTEv0
 5b7w+3AHDwh9a4NyzULaxxBKktNU1KXKe74/U86qhJtx4kXFSkAhoeztcR30zmUX
 W1xjb5eoRgFbHnoDTCtDYAUwuz2w1/I2OLA5kfnSQnRQS0YiqUeicbBkW6iIE61z
 oM86ZwEQX1lyf7agECRgpfdcPa6uyAQ72QUR5wgvXDW59PSNNxk=
 =C5XY
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/dg-gitlab/tags/ppc-for-6.2-20210827' into staging

ppc patch queue 2021-08-27

First ppc pull request for qemu-6.2.  As usual, there's a fair bit
here, since it's been queued during the 6.1 freeze.  Highlights are:

 * Some fixes for 128 bit arithmetic and some vector opcodes that use
   them
 * Significant improvements to the powernv to support POWER10 cpus
   (more to come though)
 * Several cleanups to the ppc softmmu code
 * A few other assorted fixes

# gpg: Signature made Fri 27 Aug 2021 08:09:12 BST
# gpg:                using RSA key 75F46586AE61A66CC44E87DC6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>" [full]
# gpg:                 aka "David Gibson (Red Hat) <dgibson@redhat.com>" [full]
# gpg:                 aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>" [full]
# gpg:                 aka "David Gibson (kernel.org) <dwg@kernel.org>" [unknown]
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E  87DC 6C38 CACA 20D9 B392

* remotes/dg-gitlab/tags/ppc-for-6.2-20210827:
  target/ppc: fix vector registers access in gdbstub for little-endian
  include/qemu/int128.h: introduce bswap128s
  target/ppc: fix vextu[bhw][lr]x helpers
  include/qemu/int128.h: define struct Int128 according to the host endianness
  ppc/xive: Export xive_presenter_notify()
  ppc/xive: Export PQ get/set routines
  ppc/pnv: add a chip topology index for POWER10
  ppc/pnv: Distribute RAM among the chips
  ppc/pnv: Use a simple incrementing index for the chip-id
  ppc/pnv: powerpc_excp: Do not discard HDECR exception when entering power-saving mode
  ppc/pnv: Change the POWER10 machine to support DD2 only
  ppc: Add a POWER10 DD2 CPU
  ppc/pnv: update skiboot to commit 820d43c0a775.
  target/ppc: moved store_40x_sler to helper_regs.c
  target/ppc: moved ppc_store_sdr1 to mmu_common.c
  target/ppc: divided mmu_helper.c in 2 files
  spapr_pci: Fix leak in spapr_phb_vfio_get_loc_code() with g_autofree
  xive: Remove extra '0x' prefix in trace events

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-08-27 11:34:12 +01:00
Peter Maydell
0289f62335 Error reporting patches for 2021-08-26
-----BEGIN PGP SIGNATURE-----
 
 iQJGBAABCAAwFiEENUvIs9frKmtoZ05fOHC0AOuRhlMFAmEnsHESHGFybWJydUBy
 ZWRoYXQuY29tAAoJEDhwtADrkYZTFAAP/0zO4CPElnMRjNEZcUaEldrW3aaOzB9b
 bcBIbZIe8VzM7elQIbvSYRjHDcMIFfLzSz3N1YmRbdbO5xUJ4bTJstVarrcdCo/X
 0DUjF1gDR8w+C2sc/1Bg8mbkY0tgC+GBv4QbfU7uZXEr4FgDMxmPXRvv67rOqdCf
 Cd6AXK0Q0fMcNO//s/RaWosBdEu5kzR7RXvkmLbpBBIO69Jed1yRslfNxKoVhM/P
 v4cuhMXGxzmBVJizj4rASvJZvtqJJOVRVf+pbOsnPqxKIDUyh/LXz7eWWBINYf7i
 /CejSCGyZDQBOPMT3FmC4k6Q2GoYmTd3nlSfp9+oI494ciwHv/s6dGCA5rTgIohw
 I0GnT030osNWQvXNtIeiAzVBKSVjZtYgpdxe+kzkWw4HcueZLS/lPUC64cta4zoA
 DaHDTXFoTDtAkLqIfRUdpyCvtwvfc8f7EUW+qZMoHQ+vVLpAxy5JPEEwlKqo9m7E
 BB3ih8Dl13Kw9irU6JLaD1qGr/wHlgYHwJ2iA1C33M31+7viA9bPL7kgOoK7odkC
 aPBYcV+huzpk8o6UYj4Xh1a4su09YBqywCuJQLXvoS0SEHef6GXDRunMa0aNSICc
 G5p0gFn4gKlO9orOsfoOBPa6JRCcypluOkPVMVFI2PVYCx2+tFFt+d9fVeXh2vGT
 Nf8yLL/ir4FX
 =ZVbi
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/armbru/tags/pull-error-2021-08-26' into staging

Error reporting patches for 2021-08-26

# gpg: Signature made Thu 26 Aug 2021 16:17:05 BST
# gpg:                using RSA key 354BC8B3D7EB2A6B68674E5F3870B400EB918653
# gpg:                issuer "armbru@redhat.com"
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>" [full]
# gpg:                 aka "Markus Armbruster <armbru@pond.sub.org>" [full]
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867  4E5F 3870 B400 EB91 8653

* remotes/armbru/tags/pull-error-2021-08-26:
  vl: Clean up -smp error handling
  Remove superfluous ERRP_GUARD()
  vhost: Clean up how VhostOpts method vhost_backend_init() fails
  vhost: Clean up how VhostOpts method vhost_get_config() fails
  microvm: Drop dead error handling in microvm_machine_state_init()
  migration: Handle migration_incoming_setup() errors consistently
  migration: Unify failure check for migrate_add_blocker()
  whpx nvmm: Drop useless migrate_del_blocker()
  vfio: Avoid error_propagate() after migrate_add_blocker()
  i386: Never free migration blocker objects instead of sometimes
  vhost-scsi: Plug memory leak on migrate_add_blocker() failure
  multi-process: Fix pci_proxy_dev_realize() error handling
  spapr: Explain purpose of ->fwnmi_migration_blocker more clearly
  spapr: Plug memory leak when we can't add a migration blocker
  error: Use error_fatal to simplify obvious fatal errors (again)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-08-27 09:57:28 +01:00
Matheus Ferst
0ff16b6b78 target/ppc: fix vector registers access in gdbstub for little-endian
As vector registers are stored in host endianness, we shouldn't swap their
64-bit elements in user mode. Add a 16-byte case in
ppc_maybe_bswap_register to handle the reordering of elements in softmmu
and remove avr_need_swap which is now unused.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20210826145656.2507213-3-matheus.ferst@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-08-27 12:43:13 +10:00
Matheus Ferst
f297c4c605 target/ppc: fix vextu[bhw][lr]x helpers
These helpers shouldn't depend on the host endianness, as they only use
shifts, ands, and int128_* methods.

Fixes: 60caf2216b ("target-ppc: add vextu[bhw][lr]x instructions")
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20210826141446.2488609-3-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-08-27 12:41:47 +10:00
Cédric Le Goater
c944a3ba7b ppc/pnv: powerpc_excp: Do not discard HDECR exception when entering power-saving mode
The Hypervisor Decrementer exception should not be generated while the
CPU is in power-saving mode (see cpu_ppc_hdecr_excp()). However,
discarding the exception before entering the power-saving mode is
wrong since we would lose a previously generated HDEC.

Fixes: 4b236b621b ("ppc: Initial HDEC support")
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20210809134547.689560-4-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-08-27 12:41:13 +10:00
Cédric Le Goater
363fd548ab ppc: Add a POWER10 DD2 CPU
The POWER10 DD2 CPU adds an extra LPCR[HAIL] bit. DD1 doesn't have
HAIL, but since this does not break the modeling and we don't plan
to support DD1, modify the LPCR mask of the whole POWER10 family.

Setting the HAIL bit is a requirement to support the scv instruction
on PowerNV POWER10 platforms since glibc-2.33.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20210809134547.689560-2-clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-08-27 12:41:13 +10:00
Lucas Mateus Castro (alqotel)
c06ba89293 target/ppc: moved store_40x_sler to helper_regs.c
Moved store_40x_sler from mmu_common.c to helper_regs.c, as it is
a function that stores a value in a special purpose register, so
moving it to a file focused on special register manipulation
is more appropriate.

Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Message-Id: <20210723175627.72847-4-lucas.araujo@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-08-27 12:41:13 +10:00
Lucas Mateus Castro (alqotel)
d6ae8ec6ef target/ppc: moved ppc_store_sdr1 to mmu_common.c
ppc_store_sdr1 was originally in mmu_helper.c and was moved as part of
the patches to enable the disable-tcg option; now it is being moved
back to a file that will be compiled with that option.

Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Message-Id: <20210723175627.72847-3-lucas.araujo@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-08-27 12:41:13 +10:00
Lucas Mateus Castro (alqotel)
5118ebe839 target/ppc: divided mmu_helper.c in 2 files
Divided mmu_helper.c into 2 files: functions inside #ifdef CONFIG_SOFTMMU
stayed in mmu_helper.c, while the other functions moved to mmu_common.c.
Updated meson.build to always compile mmu_common.c and to compile
mmu_helper.c only when CONFIG_TCG is set.
Moved function declarations, #defines and structs used by both files to
internal.h, except for functions that use structures defined in cpu.h,
which were moved to cpu.h.

Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Message-Id: <20210723175627.72847-2-lucas.araujo@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-08-27 12:41:13 +10:00
Peter Maydell
e784807cd2 target/arm: Do hflags rebuild in cpsr_write()
Currently we rely on all the callsites of cpsr_write() to rebuild the
cached hflags if they change one of the CPSR bits which we use as a
TB flag and cache in hflags.  This is a bit awkward when we want to
change the set of CPSR bits that we cache, because it means we need
to re-audit all the cpsr_write() callsites to see which flags they
are writing and whether they now need to rebuild the hflags.

Switch instead to making cpsr_write() call arm_rebuild_hflags()
itself if one of the bits being changed is a cached bit.

We don't do the rebuild for the CPSRWriteRaw write type, because that
kind of write is generally doing something special anyway.  For the
CPSRWriteRaw callsites in the KVM code and inbound migration we
definitely don't want to recalculate the hflags; the callsites in
boot.c and arm-powerctl.c have to do a rebuild-hflags call themselves
anyway because of other CPU state changes they make.

This allows us to drop explicit arm_rebuild_hflags() calls in a
couple of places where the only reason we needed to call it was the
CPSR write.

This fixes a bug where we were incorrectly failing to rebuild hflags
in the code path for a gdbstub write to CPSR, which meant that you
could make QEMU assert by breaking into a running guest, altering the
CPSR to change the value of, for example, CPSR.E, and then
continuing.
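
A sketch of the shape of this change; the mask of cached bits (here called CACHED_CPSR_BITS) is illustrative, not the exact definition from the patch:

    void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
                    CPSRWriteType write_type)
    {
        /* ... existing CPSR update code ... */
        if (write_type != CPSRWriteRaw && (mask & CACHED_CPSR_BITS)) {
            /* The write touches a bit cached in hflags: rebuild them. */
            arm_rebuild_hflags(env);
        }
    }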

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210817201843.3829-1-peter.maydell@linaro.org
2021-08-26 17:02:01 +01:00
Peter Maydell
8e228c9e4b target/arm: Implement HSTR.TJDBX
In v7A, the HSTR register has a TJDBX bit which traps NS EL0/EL1
access to the JOSCR and JMCR trivial Jazelle registers, and also BXJ.
Implement these traps. In v8A this HSTR bit doesn't exist, so don't
trap for v8A CPUs.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210816180305.20137-3-peter.maydell@linaro.org
2021-08-26 17:02:01 +01:00
Peter Maydell
cc7613bfaa target/arm: Implement HSTR.TTEE
In v7, the HSTR register has a TTEE bit which allows EL0/EL1 accesses
to the Thumb2EE TEECR and TEEHBR registers to be trapped to the
hypervisor. Implement these traps.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210816180305.20137-2-peter.maydell@linaro.org
2021-08-26 17:02:01 +01:00
Peter Maydell
49e7f191ca target/arm: Avoid assertion trying to use KVM and multiple ASes
KVM cannot support multiple address spaces per CPU; if you try to
create more than one then cpu_address_space_init() will assert.

In the Arm CPU realize function, detect the configurations which
would cause us to need more than one AS, and cleanly fail the
realize rather than blundering on into the assertion. This
turns this:
  $ qemu-system-aarch64  -enable-kvm -display none -cpu max -machine raspi3b
  qemu-system-aarch64: ../../softmmu/physmem.c:747: cpu_address_space_init: Assertion `asidx == 0 || !kvm_enabled()' failed.
  Aborted

into:
  $ qemu-system-aarch64  -enable-kvm -display none -machine raspi3b
  qemu-system-aarch64: Cannot enable KVM when guest CPU has EL3 enabled

and this:
  $ qemu-system-aarch64  -enable-kvm -display none -machine mps3-an524
  qemu-system-aarch64: ../../softmmu/physmem.c:747: cpu_address_space_init: Assertion `asidx == 0 || !kvm_enabled()' failed.
  Aborted

into:
  $ qemu-system-aarch64  -enable-kvm -display none -machine mps3-an524
  qemu-system-aarch64: Cannot enable KVM when using an M-profile guest CPU
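
A hedged sketch of the kind of check added in the realize function; the error strings follow the examples above, while the exact conditions and field names are illustrative:

    if (kvm_enabled()) {
        if (cpu->has_el3) {
            error_setg(errp,
                       "Cannot enable KVM when guest CPU has EL3 enabled");
            return;
        }
        if (arm_feature(&cpu->env, ARM_FEATURE_M)) {
            error_setg(errp,
                       "Cannot enable KVM when using an M-profile guest CPU");
            return;
        }
    }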

Fixes: https://gitlab.com/qemu-project/qemu/-/issues/528
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210816135842.25302-3-peter.maydell@linaro.org
2021-08-26 17:02:01 +01:00
Peter Maydell
7f4c520dac arch_init.h: Don't include arch_init.h unnecessarily
arch_init.h only defines the QEMU_ARCH_* enumeration and the
arch_type global. Don't include it in files that don't use those.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210730105947.28215-8-peter.maydell@linaro.org
2021-08-26 17:02:00 +01:00
Andrew Jones
022707e5d6 target/arm/cpu64: Validate sve vector lengths are supported
Future CPU types may specify which vector lengths are supported.
We can apply nearly the same logic to validate those lengths
as we do for KVM's supported vector lengths. We merge the code
where we can, but unfortunately can't completely merge it because
KVM requires all vector lengths, power-of-two or not, smaller than
the maximum enabled length to also be enabled. The architecture
only requires all the power-of-two lengths, though, so TCG will
only enforce that.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210823160647.34028-5-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-08-26 17:01:59 +01:00
Andrew Jones
5b65e5abea target/arm/cpu64: Replace kvm_supported with sve_vq_supported
Now that we have an ARMCPU member sve_vq_supported we no longer
need the local kvm_supported bitmap for KVM's supported vector
lengths.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210823160647.34028-4-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-08-26 17:01:59 +01:00
Andrew Jones
927703cc40 target/arm/kvm64: Ensure sve vls map is completely clear
bitmap_clear() only clears the given range. While the given
range should be sufficient in this case, we might as well be
100% sure all bits are zeroed by using bitmap_zero().
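
For reference, the difference between the two helpers from QEMU's include/qemu/bitmap.h (a generic illustration, not the patched code):

    DECLARE_BITMAP(map, ARM_MAX_VQ);

    bitmap_clear(map, 0, ARM_MAX_VQ); /* clears exactly bits [0, ARM_MAX_VQ) */
    bitmap_zero(map, ARM_MAX_VQ);     /* zeroes every word backing the map,
                                       * including padding bits in the last
                                       * word */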

Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210823160647.34028-3-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-08-26 17:01:59 +01:00
Andrew Jones
5401b1e08d target/arm/cpu: Introduce sve_vq_supported bitmap
Allow CPUs that support SVE to specify which SVE vector lengths they
support by setting them in this bitmap. Currently only the 'max' and
'host' CPU types support SVE; 'host' requires KVM, which obtains
its supported bitmap from the host. So, we only need to initialize the
bitmap for 'max' with TCG. And, since 'max' should support all SVE
vector lengths we simply fill the bitmap. Future CPU types may have
less trivial maps though.
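
A minimal sketch of the TCG 'max' initialization, assuming sve_vq_supported is a DECLARE_BITMAP(..., ARM_MAX_VQ) member of ARMCPU as described above:

    /* 'max' with TCG: advertise every vector length QEMU can emulate. */
    bitmap_fill(cpu->sve_vq_supported, ARM_MAX_VQ);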

Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210823160647.34028-2-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-08-26 17:01:59 +01:00
Markus Armbruster
436c831a28 migration: Unify failure check for migrate_add_blocker()
Most callers check the return value.  Some check whether it set an
error.  Functionally equivalent, but the former tends to be easier on
the eyes, so do that everywhere.

Prior art: commit c6ecec43b2 "qemu-option: Check return value instead
of @err where convenient".
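
A sketch of the preferred pattern, with a hypothetical caller (my_blocker and errp are placeholders):

    Error *local_err = NULL;

    /* Preferred: check the return value... */
    if (migrate_add_blocker(my_blocker, &local_err) < 0) {
        error_propagate(errp, local_err);
        return;
    }

    /*
     * ...rather than the functionally equivalent check of whether the
     * Error was set:
     *
     *     migrate_add_blocker(my_blocker, &local_err);
     *     if (local_err) { ... }
     */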

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20210720125408.387910-10-armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
2021-08-26 17:15:28 +02:00
Markus Armbruster
650126f838 whpx nvmm: Drop useless migrate_del_blocker()
There is nothing to delete after migrate_add_blocker() failed.  Trying
anyway is safe, but useless.  Don't.

Cc: Sunil Muthuswamy <sunilmut@microsoft.com>
Cc: Kamil Rytarowski <kamil@netbsd.org>
Cc: Reinoud Zandijk <reinoud@netbsd.org>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20210720125408.387910-9-armbru@redhat.com>
Reviewed-by: Reinoud Zandijk <reinoud@NetBSD.org>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
2021-08-26 17:15:28 +02:00
Markus Armbruster
a5c051b2cf i386: Never free migration blocker objects instead of sometimes
invtsc_mig_blocker has static storage duration.  When a CPU with
certain features is initialized, and invtsc_mig_blocker is still null,
we add a migration blocker and store it in invtsc_mig_blocker.

The object is freed when migrate_add_blocker() fails, leaving
invtsc_mig_blocker dangling.  It is not freed on later failures.

Same for hv_passthrough_mig_blocker and hv_no_nonarch_cs_mig_blocker.

All failures are actually fatal, so whether we free or not doesn't
really matter, except as bad examples to be copied / imitated.

Clean this up in a minimal way: never free these blocker objects.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20210720125408.387910-7-armbru@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
2021-08-26 17:15:28 +02:00
Markus Armbruster
f9734d5d40 error: Use error_fatal to simplify obvious fatal errors (again)
We did this with scripts/coccinelle/use-error_fatal.cocci before, in
commit 50beeb6809 and 007b06578a.  This commit cleans up rarer
variations that don't seem worth matching with Coccinelle.
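
The shape of the simplification, shown with a hypothetical call site (some_setup_function is a placeholder):

    /* Before: open-coded fatal error handling. */
    Error *err = NULL;
    some_setup_function(arg, &err);
    if (err) {
        error_report_err(err);
        exit(1);
    }

    /* After: let the Error API handle the fatal case. */
    some_setup_function(arg, &error_fatal);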

Cc: Thomas Huth <thuth@redhat.com>
Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Juan Quintela <quintela@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20210720125408.387910-2-armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
2021-08-26 17:15:28 +02:00
Peter Maydell
0a9be95545 x86 queue, 2021-08-25
Bug fixes:
 * Remove split lock detect in Snowridge CPU model (Chenyi Qiang)
 * Remove AVX_VNNI feature from Cooperlake cpu model (Yang Zhong)
 -----BEGIN PGP SIGNATURE-----
 
 iQJIBAABCAAyFiEEWjIv1avE09usz9GqKAeTb5hNxaYFAmEmn9cUHGVoYWJrb3N0
 QHJlZGhhdC5jb20ACgkQKAeTb5hNxabAcxAAjLAgC83f97tLLoJZcfXnUB7yDb/X
 XvX+L/wnFXjBN5gakEN0uIZ83wpYVqjg9PWhLvUUIy0zJvPDfDXRPKY2ct2CY6wU
 sdEPmC6hzxzSuVmHzPS0i1Pl3kOOuN9APBqFvO86kqQCSH4dV8b+3BwVawZjZfxx
 Z00dzxVoQYcCw8Mwnx2vKZNjoZhJ5ebsKbCliKiIj5ozG7ci8TKBiZFZdKlfgb8s
 LKTnPdtjFw/7P99PAhidBqq3rIJOlYrONcVa3XLZX1+nPpdNx2CEeULAS5Cu5o6Z
 jMEB6dQV6zxtioC0dd3/3ZIdy0hxfKgugIKVyVNhzEQcR1iModOqpsnTAiALmu8d
 NBUo0W2ZJbvAJ7xPIJPrfdhSHvOQT4iydxq4vHBp+VJKdGZHgi7rfO1WrtFT+4gM
 etREuQeheEW6DCqoOKxprF6UHoudFkUSOzanOzTQkAdaGzop8O7DdL1MEqRlrJPP
 BoGCxPJ8MtI1nT+G3UH+4HsuE3o0anWk8Ikm83ZHCz4E3whqrvEUAcgZ1EBMvIoM
 e7o0HUi6z81mWQ1Mg1+LLrd93ycuTzwmv2JReVR2GSQOrnm8qahYAbhJwJjirQWE
 RzdxPh7DpvgwvHxjythuL3Vcz3+BCP+eIXAfAe+4YiSMbAw4tcegeTC1Zfh90KP0
 LSNgymszhOVyRSg=
 =F8Ct
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/ehabkost-gl/tags/x86-next-pull-request' into staging

x86 queue, 2021-08-25

Bug fixes:
* Remove split lock detect in Snowridge CPU model (Chenyi Qiang)
* Remove AVX_VNNI feature from Cooperlake cpu model (Yang Zhong)

# gpg: Signature made Wed 25 Aug 2021 20:53:59 BST
# gpg:                using RSA key 5A322FD5ABC4D3DBACCFD1AA2807936F984DC5A6
# gpg:                issuer "ehabkost@redhat.com"
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>" [full]
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF  D1AA 2807 936F 984D C5A6

* remotes/ehabkost-gl/tags/x86-next-pull-request:
  i386/cpu: Remove AVX_VNNI feature from Cooperlake cpu model
  target/i386: Remove split lock detect in Snowridge CPU model

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-08-26 10:42:34 +01:00
Peter Maydell
d8ae530ec0 MIPS patches queue
- minor simplifications in PREF / JR opcodes
 - merge 32-bit/64-bit Release6 decodetree definitions
 - converted NEC Vr54xx extension opcodes to decodetree
 - housekeeping in gen_helper() macros
 - replace TARGET_WORDS_BIGENDIAN #ifdef'ry by cpu_is_bigendian()
 - allow Loongson 3A1000 to use up to 48-bit VAddr
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEE+qvnXhKRciHc/Wuy4+MsLN6twN4FAmEmI78ACgkQ4+MsLN6t
 wN4RLw/+PIBVfaDvASnFa2f9b4SGQ2WiM0LuLh/sN6ieMiFccqJZPq2Dz5GCQsR7
 7v2+MZ9gcAoWH/NCft+vbXGwiP9exoChb8c65Gm0GdpQ3sKPYXJJAi/D0vjuPMGe
 Yk8DrfERavu4bcpXPaC+B3p8krii88W+vEYvmvWfirD+gWxjF/HviLzHQK/62YMK
 N695BYVav5Fd3fjqn3p7Kw/WdP++NS757G53dSF/r5l1wFEGFZAuYW7R2rWDQsz+
 yWvPUkIFoJ9OXitKw01FVaDNXF3a1efMhZjFvCr0EU0eF4qsxAywXomC4CDpAo+Y
 s15aNZoxWM4D0eEoNLm874QAgNu9txPJPg5kuZVpBDwdTKWMrShj5+m/QlWqVRcA
 mj7Ff2/B50mmB8aGfkQm82DpnqNXk9Vr1Y4hGzKrSOc1NGZItnpX2XJfqymEF7M9
 9SW73jF6X2871FyiRvd5cO9TGlBieMNMlkenuxiyQNvIgocw1FX606EDji/aFp2e
 KehjWw/2JCmBC1uUhaYqks4db7B8MSeVl8G3Dwx3lxnuz4xson/yscAxengZBR2r
 clubyAoEa7+6sc2DhflLGlWfQpiOBDW4FFCW37H7KhVnJXFTomuiMBSSBc+njLMi
 NDT7wAMCyMXtmZtx3zeWZpppqdc3doaGm3Bq6HDWEkiYaOe6TxA=
 =ceCP
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/philmd/tags/mips-20210825' into staging

MIPS patches queue

- minor simplifications in PREF / JR opcodes
- merge 32-bit/64-bit Release6 decodetree definitions
- converted NEC Vr54xx extension opcodes to decodetree
- housekeeping in gen_helper() macros
- replace TARGET_WORDS_BIGENDIAN #ifdef'ry by cpu_is_bigendian()
- allow Loongson 3A1000 to use up to 48-bit VAddr

# gpg: Signature made Wed 25 Aug 2021 12:04:31 BST
# gpg:                using RSA key FAABE75E12917221DCFD6BB2E3E32C2CDEADC0DE
# gpg: Good signature from "Philippe Mathieu-Daudé (F4BUG) <f4bug@amsat.org>" [full]
# Primary key fingerprint: FAAB E75E 1291 7221 DCFD  6BB2 E3E3 2C2C DEAD C0DE

* remotes/philmd/tags/mips-20210825: (28 commits)
  target/mips: Replace TARGET_WORDS_BIGENDIAN by cpu_is_bigendian()
  target/mips: Store CP0_Config0 in DisasContext
  target/mips: Replace GET_LMASK64() macro by get_lmask(64) function
  target/mips: Replace GET_LMASK() macro by get_lmask(32) function
  target/mips: Call cpu_is_bigendian & inline GET_OFFSET in ld/st helpers
  target/mips: Define gen_helper() macros in translate.h
  target/mips: Use tcg_constant_i32() in generate_exception_err()
  target/mips: Inline gen_helper_0e0i()
  target/mips: Inline gen_helper_1e1i() call in op_ld_INSN() macros
  target/mips: Simplify gen_helper() macros by using tcg_constant_i32()
  target/mips: Use tcg_constant_i32() in gen_helper_0e2i()
  target/mips: Remove gen_helper_1e2i()
  target/mips: Remove gen_helper_0e3i()
  target/mips: Remove duplicated check_cp1_enabled() calls in Loongson EXT
  target/mips: Allow Loongson 3A1000 to use up to 48-bit VAddr
  target/mips: Document Loongson-3A CPU definitions
  target/mips: Convert Vr54xx MSA* opcodes to decodetree
  target/mips: Convert Vr54xx MUL* opcodes to decodetree
  target/mips: Convert Vr54xx MACC* opcodes to decodetree
  target/mips: Introduce decodetree structure for NEC Vr54xx extension
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-08-25 21:09:48 +01:00
Yang Zhong
f429dbf8fc i386/cpu: Remove AVX_VNNI feature from Cooperlake cpu model
The AVX_VNNI feature is not present in the Cooperlake platform; remove it
from the CPU model.

Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20210820054611.84303-1-yang.zhong@intel.com>
Fixes: c1826ea6a0 ("i386/cpu: Expose AVX_VNNI instruction to guest")
Cc: qemu-stable@nongnu.org
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2021-08-25 12:36:49 -04:00
Chenyi Qiang
56bb24e543 target/i386: Remove split lock detect in Snowridge CPU model
At present, there's no mechanism intelligent enough to virtualize split
lock detection correctly. Remove it from the Snowridge CPU model to avoid
exposing the feature.

Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Message-Id: <20210630012053.10098-1-chenyi.qiang@intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2021-08-25 12:33:37 -04:00
Philippe Mathieu-Daudé
bf78469cc8 target/mips: Replace TARGET_WORDS_BIGENDIAN by cpu_is_bigendian()
Add the inlined cpu_is_bigendian() function in "translate.h".

Replace the TARGET_WORDS_BIGENDIAN #ifdef'ry by calls to
cpu_is_bigendian().
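
A sketch of the inlined helper, assuming it tests the BigEndian bit of the CP0 Config0 copy held in DisasContext; the field and bit names are illustrative:

    static inline bool cpu_is_bigendian(DisasContext *ctx)
    {
        return extract32(ctx->CP0_Config0, CP0C0_BE, 1);
    }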

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210818164321.2474534-6-f4bug@amsat.org>
2021-08-25 13:02:14 +02:00
Philippe Mathieu-Daudé
0cfd392d7b target/mips: Store CP0_Config0 in DisasContext
Most TCG helpers only have access to a DisasContext pointer,
not CPUMIPSState. Store a copy of CPUMIPSState::CP0_Config0
in DisasContext so we can access it from TCG helpers.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210818164321.2474534-5-f4bug@amsat.org>
2021-08-25 13:02:14 +02:00
Philippe Mathieu-Daudé
23a04dcdf6 target/mips: Replace GET_LMASK64() macro by get_lmask(64) function
The target endianness information is stored in the BigEndian
bit of the Config0 register in CP0.

Replace the GET_LMASK64() macro by an inlined get_lmask() function,
passing CPUMIPSState and the word size as arguments.

We can remove another use of the TARGET_WORDS_BIGENDIAN definition.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210818215517.2560994-4-f4bug@amsat.org>
2021-08-25 13:02:14 +02:00
Philippe Mathieu-Daudé
4885b99a6e target/mips: Replace GET_LMASK() macro by get_lmask(32) function
The target endianness information is stored in the BigEndian
bit of the Config0 register in CP0.

Replace the GET_LMASK() macro by an inlined get_lmask() function,
passing CPUMIPSState and the word size as arguments.

We can remove one use of the TARGET_WORDS_BIGENDIAN definition.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210818215517.2560994-3-f4bug@amsat.org>
2021-08-25 13:02:14 +02:00
Philippe Mathieu-Daudé
5b3cc34c34 target/mips: Call cpu_is_bigendian & inline GET_OFFSET in ld/st helpers
The target endianness information is stored in the BigEndian
bit of the Config0 register in CP0.

As a first step, inline the GET_OFFSET() macro, calling
cpu_is_bigendian() to get the 'direction' of the offset.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210818215517.2560994-2-f4bug@amsat.org>
2021-08-25 13:02:14 +02:00
Philippe Mathieu-Daudé
761533fc9a target/mips: Define gen_helper() macros in translate.h
To be able to split some code calling the gen_helper() macros
out of the huge translate.c, we need to define them in the
'translate.h' local header.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210816205107.2051495-9-f4bug@amsat.org>
2021-08-25 13:02:14 +02:00
Philippe Mathieu-Daudé
a8b18de7f5 target/mips: Use tcg_constant_i32() in generate_exception_err()
excp/err are constant inputs, so we can replace the tcg_const_i32()
calls by their tcg_constant_i32() equivalent.
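
The general shape of such a replacement (argument and helper names are illustrative):

    /* Before: tcg_const_i32() allocates temporaries that must be freed. */
    TCGv_i32 texcp = tcg_const_i32(excp);
    TCGv_i32 terr = tcg_const_i32(err);
    gen_helper_raise_exception_err(cpu_env, texcp, terr);
    tcg_temp_free_i32(terr);
    tcg_temp_free_i32(texcp);

    /* After: tcg_constant_i32() yields read-only constants, never freed. */
    gen_helper_raise_exception_err(cpu_env, tcg_constant_i32(excp),
                                   tcg_constant_i32(err));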

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210816205107.2051495-8-f4bug@amsat.org>
2021-08-25 13:02:14 +02:00
Philippe Mathieu-Daudé
ae71abadd5 target/mips: Inline gen_helper_0e0i()
gen_helper_0e0i() is one-line long and is only used twice:
simply inline it.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210816205107.2051495-7-f4bug@amsat.org>
2021-08-25 13:02:14 +02:00
Philippe Mathieu-Daudé
a1b4b060d7 target/mips: Inline gen_helper_1e1i() call in op_ld_INSN() macros
gen_helper_1e1i() is one-line long and is used in one place:
simply inline it.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210816205107.2051495-6-f4bug@amsat.org>
2021-08-25 13:02:14 +02:00
Philippe Mathieu-Daudé
26fe92763a target/mips: Simplify gen_helper() macros by using tcg_constant_i32()
In all call sites the last argument is always used as a
read-only value, so we can replace the tcg_const_i32() temporary
by tcg_constant_i32().

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210816205107.2051495-5-f4bug@amsat.org>
2021-08-25 13:02:14 +02:00
Philippe Mathieu-Daudé
78bdd38865 target/mips: Use tcg_constant_i32() in gen_helper_0e2i()
The $rt register is used read-only, so we can replace the tcg_const_i32()
temporary by tcg_constant_i32().

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210816205107.2051495-4-f4bug@amsat.org>
2021-08-25 13:02:14 +02:00
Philippe Mathieu-Daudé
53152abfc1 target/mips: Remove gen_helper_1e2i()
gen_helper_1e2i() is unused since commit 33a07fa2db
("target/mips: reimplement SC instruction emulation
and use cmpxchg"), remove it.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210816205107.2051495-3-f4bug@amsat.org>
2021-08-25 13:02:14 +02:00
Philippe Mathieu-Daudé
b24339bcd0 target/mips: Remove gen_helper_0e3i()
gen_helper_0e3i() is unused since commit 895c2d0435
("target-mips: switch to AREG0 free mode"), remove it.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210816205107.2051495-2-f4bug@amsat.org>
2021-08-25 13:02:14 +02:00
Philippe Mathieu-Daudé
c1feb46d12 target/mips: Remove duplicated check_cp1_enabled() calls in Loongson EXT
We already call check_cp1_enabled() earlier in the "pre-conditions"
checks for GSLWXC1 and GSLDXC1 in gen_loongson_lsdc2() prologue.
Remove the duplicated calls.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Huacai Chen <chenhuacai@loongson.cn>
Message-Id: <20210816001031.1720432-1-f4bug@amsat.org>
2021-08-25 13:02:14 +02:00
Philippe Mathieu-Daudé
71ed30b7d4 target/mips: Allow Loongson 3A1000 to use up to 48-bit VAddr
Per the manual '龙芯 GS264 处理器核用户手册' (Loongson GS264 Processor
Core User Manual) v1.0, chapter 1.1.5 SEGBITS: the 3A1000 (based on
the GS464 core) implements 48 virtual address bits in each 64-bit
segment, not 40.

Fixes: af868995e1 ("target/mips: Add Loongson-3 CPU definition")
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Huacai Chen <chenhuacai@loongson.cn>
Message-Id: <20210813110149.1432692-3-f4bug@amsat.org>
2021-08-25 13:02:14 +02:00
Philippe Mathieu-Daudé
98d207cf9c target/mips: Document Loongson-3A CPU definitions
Document the cores on which each Loongson-3A CPU is based (see
commit af868995e1, "target/mips: Add Loongson-3 CPU definition").

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Huacai Chen <chenhuacai@loongson.cn>
Message-Id: <20210813110149.1432692-2-f4bug@amsat.org>
2021-08-25 13:02:14 +02:00
Philippe Mathieu-Daudé
bf7720024c target/mips: Convert Vr54xx MSA* opcodes to decodetree
Convert the following Integer Multiply-Accumulate opcodes:

 * MSAC         Multiply, negate, accumulate, and move LO
 * MSACHI       Multiply, negate, accumulate, and move HI
 * MSACHIU      Unsigned multiply, negate, accumulate, and move HI
 * MSACU        Unsigned multiply, negate, accumulate, and move LO

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210808173018.90960-8-f4bug@amsat.org>
2021-08-25 13:02:14 +02:00
Philippe Mathieu-Daudé
a5e2932068 target/mips: Convert Vr54xx MUL* opcodes to decodetree
Convert the following Integer Multiply-Accumulate opcodes:

 * MULHI        Multiply and move HI
 * MULHIU       Unsigned multiply and move HI
 * MULS         Multiply, negate, and move LO
 * MULSHI       Multiply, negate, and move HI
 * MULSHIU      Unsigned multiply, negate, and move HI
 * MULSU        Unsigned multiply, negate, and move LO

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210808173018.90960-7-f4bug@amsat.org>
2021-08-25 13:02:14 +02:00
Philippe Mathieu-Daudé
5fa38eedbd target/mips: Convert Vr54xx MACC* opcodes to decodetree
Convert the following Integer Multiply-Accumulate opcodes:

 * MACC         Multiply, accumulate, and move LO
 * MACCHI       Multiply, accumulate, and move HI
 * MACCHIU      Unsigned multiply, accumulate, and move HI
 * MACCU        Unsigned multiply, accumulate, and move LO

Since all opcodes are generated using the same pattern, we
add the gen_helper_mult_acc_t typedef and MULT_ACC() macro
to remove boilerplate code.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210808173018.90960-6-f4bug@amsat.org>
2021-08-25 13:02:14 +02:00
Philippe Mathieu-Daudé
9d00539239 target/mips: Introduce decodetree structure for NEC Vr54xx extension
The decoder is called but doesn't decode anything. This will
ease reviewing the next commit.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210801235926.3178085-3-f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 13:02:14 +02:00
Philippe Mathieu-Daudé
6629f79f53 target/mips: Extract NEC Vr54xx helpers to vr54xx_helper.c
Extract NEC Vr54xx helpers from op_helper.c to a new file:
'vr54xx_helper.c'.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201120210844.2625602-14-f4bug@amsat.org>
2021-08-25 13:02:14 +02:00
Philippe Mathieu-Daudé
07565cbf4a target/mips: Extract NEC Vr54xx helper definitions
Extract the NEC Vr54xx helper definitions to
'vendor-vr54xx_helper.h'.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201120210844.2625602-15-f4bug@amsat.org>
2021-08-25 13:02:14 +02:00
Philippe Mathieu-Daudé
fb3164e412 target/mips: Introduce generic TRANS() macro for decodetree helpers
Plain copy/paste of the TRANS() macro introduced in the PPC
commit f2aabda8ac ("target/ppc: Move D/DS/X-form integer
loads to decodetree") to the MIPS target.

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210808173018.90960-2-f4bug@amsat.org>
2021-08-25 13:02:14 +02:00
Philippe Mathieu-Daudé
34fe9fa368 target/mips: Rename 'rtype' as 'r'
We'll soon have more opcodes and decoded arguments, and 'rtype'
is not very helpful. Naming it simply 'r' eases reviewing the
.decode files when we have many opcodes.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210801234202.3167676-5-f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 13:02:14 +02:00
Philippe Mathieu-Daudé
12f79f1173 target/mips: Merge 32-bit/64-bit Release6 decodetree definitions
We don't need to maintain 2 sets of decodetree definitions.
Merge them into a single file.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210801234202.3167676-4-f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 13:02:06 +02:00
Philippe Mathieu-Daudé
4919f69c65 target/mips: Decode vendor extensions before MIPS ISAs
In commit ffc672aa97 ("target/mips/tx79: Move MFHI1 / MFLO1
opcodes to decodetree") we misplaced the decoder call. Move
it to the correct place.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210801234202.3167676-3-f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 13:00:43 +02:00
Philippe Mathieu-Daudé
2e176eaf9c target/mips: Simplify PREF opcode
check_insn() checks for any bit in the set, and INSN_R5900 is
just another bit added to the set. No need to special-case it.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210801234202.3167676-2-f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 13:00:37 +02:00
Philippe Mathieu-Daudé
c8b69a2a92 target/mips: Remove JR opcode unused arguments
JR opcode (Jump Register) only takes 1 argument, $rs.
JALR (Jump And Link Register) takes 3: $rs, $rd and $hint.

Commit 6af0bf9c7c added their processing into decode_opc() as:

    case 0x08 ... 0x09: /* Jumps */
        gen_compute_branch(ctx, op1 | EXT_SPECIAL, rs, rd, sa);

having both opcodes handled in the same function: gen_compute_branch.

Per JR encoding, both $rd and $hint ('sa') are decoded as zero.

Later this code got extracted to decode_opc_special(),
commit 7a387fffce used definitions instead of magic values:

    case OPC_JR ... OPC_JALR:
        gen_compute_branch(ctx, op1, rs, rd, sa);

Finally commit 0aefa33318 moved OPC_JR out of decode_opc_special,
to a new 'decode_opc_special_legacy' function:

  @@ -15851,6 +15851,9 @@ static void decode_opc_special_legacy(CPUMIPSState *env, DisasContext *ctx)
  +    case OPC_JR:
  +        gen_compute_branch(ctx, op1, 4, rs, rd, sa);
  +        break;

  @@ -15933,7 +15936,7 @@ static void decode_opc_special(CPUMIPSState *env, DisasContext *ctx)
  -    case OPC_JR ... OPC_JALR:
  +    case OPC_JALR:
           gen_compute_branch(ctx, op1, 4, rs, rd, sa);
           break;

Since JR is now handled individually, it is pointless to decode
and pass it unused arguments. Replace them by a simple zero value
to avoid confusion with this opcode.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210730225507.2642827-1-f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 12:49:09 +02:00
Hamza Mahfooz
dfa0d9b80e target/arm: kvm: use RCU_READ_LOCK_GUARD() in kvm_arch_fixup_msi_route()
As per commit 5626f8c6d4 ("rcu: Add automatically released rcu_read_lock
variants"), RCU_READ_LOCK_GUARD() should be used instead of
rcu_read_{un}lock().
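
The pattern being applied (a generic illustration, not the patched function):

    /* Before: explicit lock/unlock, easy to miss on an early return. */
    rcu_read_lock();
    /* ... walk RCU-protected data ... */
    rcu_read_unlock();

    /* After: the guard releases the read lock when the scope is left. */
    RCU_READ_LOCK_GUARD();
    /* ... walk RCU-protected data ... */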

Signed-off-by: Hamza Mahfooz <someguy@effective-light.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20210727235201.11491-1-someguy@effective-light.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-08-25 10:48:50 +01:00
Peter Maydell
e534629296 target/arm: Implement M-profile trapping on division by zero
Unlike A-profile, for M-profile the UDIV and SDIV insns can be
configured to raise an exception on division by zero, using the CCR
DIV_0_TRP bit.

Implement support for setting this bit by making the helper functions
raise the appropriate exception.
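
A hedged sketch of the idea for the unsigned divide helper, assuming it gains a CPUARMState argument so it can raise the exception; the trap-handling function name is illustrative:

    uint32_t HELPER(udiv)(CPUARMState *env, uint32_t num, uint32_t den)
    {
        if (den == 0) {
            /* On M-profile with CCR.DIV_0_TRP set, this raises UsageFault. */
            handle_possible_div0_trap(env, GETPC());
            return 0;
        }
        return num / den;
    }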

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210730151636.17254-3-peter.maydell@linaro.org
2021-08-25 10:48:50 +01:00
Peter Maydell
fc7a5038a6 target/arm: Re-indent sdiv and udiv helpers
We're about to make a code change to the sdiv and udiv helper
functions, so first fix their indentation and coding style.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210730151636.17254-2-peter.maydell@linaro.org
2021-08-25 10:48:50 +01:00
Peter Maydell
075e7e97e3 target/arm: Implement MVE interleaving loads/stores
Implement the MVE interleaving load/store functions VLD2, VLD4, VST2
and VST4.  VLD2 loads 16 bytes of data from memory and writes to 2
consecutive Qregs; VLD4 loads 16 bytes of data from memory and writes
to 4 consecutive Qregs.  The 'pattern' field in the encoding
determines the offset into memory which is accessed and also which
elements in the Qregs are written to.  (The intention is that a
sequence of four consecutive VLD4 with different pattern values
performs a complete de-interleaving load of 64 bytes into all
elements of the 4 Qregs.) VST2 and VST4 do the same, but for stores.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:50 +01:00
Peter Maydell
fac80f0856 target/arm: Implement MVE scatter-gather immediate forms
Implement the MVE VLDR/VSTR insns which do scatter-gather using base
addresses from Qm plus or minus an immediate offset (possibly with
writeback). Note that writeback is not predicated but it does have
to honour ECI state, so we have to add an eci_mask check to the
VSTR_SG macros (the VLDR_SG macros already needed this to be able
to distinguish "skip beat" from "set predicated element to 0").

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:50 +01:00
Peter Maydell
dc18628b18 target/arm: Implement MVE scatter-gather insns
Implement the MVE gather-loads and scatter-stores which
form the address by adding a base value from a scalar
register to an offset in each element of a vector.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:50 +01:00
Peter Maydell
0f31e37c7f target/arm: Implement MVE VCTP
Implement the MVE VCTP insn, which sets the VPR.P0 predicate bits so
that any element at index Rn or greater is predicated.  As with
VPNOT, this insn itself is predicable and subject to beatwise
execution.

The calculation of the mask is the same as is used to determine
ltpmask in mve_element_mask(), but we precalculate masklen in
generated code to avoid having to have 4 helpers specialized by size.

We put the decode line in with the low-overhead-loop insns in
t32.decode because it's logically part of that collection of insn
patterns, even though it is an MVE only insn.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:50 +01:00
Peter Maydell
fea3958fa1 target/arm: Implement MVE VPNOT
Implement the MVE VPNOT insn, which inverts the bits in VPR.P0
(subject to both predication and to beatwise execution).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:50 +01:00
Peter Maydell
1241f148d5 target/arm: Implement MVE VMOV to/from 2 general-purpose registers
Implement the MVE VMOV forms that move data between 2 general-purpose
registers and 2 32-bit lanes in a vector register.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:50 +01:00
Peter Maydell
d5c571ea6d target/arm: Implement MVE VMAXA, VMINA
Implement the MVE VMAXA and VMINA insns, which take the absolute
value of the signed elements in the input vector and then accumulate
the unsigned max or min into the destination vector.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:50 +01:00
Peter Maydell
398e7cd3cd target/arm: Implement MVE VQABS, VQNEG
Implement the MVE 1-operand saturating operations VQABS and VQNEG.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:50 +01:00
Peter Maydell
8be9a25058 target/arm: Implement MVE saturating doubling multiply accumulates
Implement the MVE saturating doubling multiply accumulate insns
VQDMLAH, VQRDMLAH, VQDMLASH and VQRDMLASH.  These perform a multiply,
double, add the accumulator shifted by the element size, possibly
round, saturate to twice the element size, then take the high half of
the result.  The *MLAH insns do vector * scalar + vector, and the
*MLASH insns do vector * vector + scalar.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:50 +01:00
Peter Maydell
c69e34c6de target/arm: Implement MVE VMLA
Implement the MVE VMLA insn, which multiplies a vector by a scalar
and accumulates into another vector.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:50 +01:00
Peter Maydell
f0ffff5163 target/arm: Implement MVE VMLADAV and VMLSLDAV
Implement the MVE VMLADAV and VMLSLDAV insns.  Like the VMLALDAV and
VMLSLDAV insns already implemented, these accumulate multiplied
vector elements; but they accumulate a 32-bit result rather than a
64-bit one.

Note that these encodings overlap with what would be RdaHi=0b111 for
VMLALDAV, VMLSLDAV, VRMLALDAVH and VRMLSLDAVH.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:49 +01:00
Peter Maydell
640cdf20a2 target/arm: Rename MVEGenDualAccOpFn to MVEGenLongDualAccOpFn
The MVEGenDualAccOpFn is a bit misnamed, since it is used for
the "long dual accumulate" operations that use a 64-bit
accumulator. Rename it to MVEGenLongDualAccOpFn so we can
use the former name for the 32-bit accumulator insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:49 +01:00
Peter Maydell
54dc78a901 target/arm: Implement MVE narrowing moves
Implement the MVE narrowing move insns VMOVN, VQMOVN and VQMOVUN.
These take a double-width input, narrow it (possibly saturating) and
store the result to either the top or bottom half of the output
element.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:49 +01:00
Peter Maydell
7f061c0ab9 target/arm: Implement MVE VABAV
Implement the MVE VABAV insn, which computes absolute differences
between elements of two vectors and accumulates the result into
a general purpose register.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:49 +01:00
Peter Maydell
688ba4cf33 target/arm: Implement MVE integer min/max across vector
Implement the MVE integer min/max across vector insns
VMAXV, VMINV, VMAXAV and VMINAV, which find the maximum
(or minimum) of the vector elements and a general purpose register,
and store the result back into the general purpose
register.

These insns overlap with VRMLALDAVH (they use what would
be RdaHi=0b110).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:49 +01:00
Peter Maydell
345910f8c1 target/arm: Move 'x' and 'a' bit definitions into vmlaldav formats
All the users of the vmlaldav formats have an 'x' bit in bit 12 and an
'a' bit in bit 5; move these to the format rather than specifying them
in each insn pattern.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:49 +01:00
Peter Maydell
1b15a97d4c target/arm: Implement MVE shift-by-scalar
Implement the MVE instructions which perform shifts by a scalar.
These are VSHL T2, VRSHL T2, VQSHL T1 and VQRSHL T2.  They take the
shift amount in a general purpose register and shift every element in
the vector by that amount.

Mostly we can reuse the helper functions for shift-by-immediate; we
do need two new helpers for VQRSHL.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:49 +01:00
Peter Maydell
6b895bf8fb target/arm: Implement MVE VMLAS
Implement the MVE VMLAS insn, which multiplies a vector by a vector
and adds a scalar.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:49 +01:00
Peter Maydell
c386443b16 target/arm: Implement MVE VPSEL
Implement the MVE VPSEL insn, which sets each byte of the destination
vector Qd to the byte from either Qn or Qm depending on the value of
the corresponding bit in VPR.P0.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:49 +01:00
Peter Maydell
cce81873bc target/arm: Implement MVE integer vector-vs-scalar comparisons
Implement the MVE integer vector comparison instructions that compare
each element against a scalar from a general purpose register.  These
are "VCMP (vector)" encodings T4, T5 and T6 and "VPT (vector)"
encodings T4, T5 and T6.

We have to move the decodetree pattern for VPST, because it
overlaps with VCMP T4 with size = 0b11.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:49 +01:00
Peter Maydell
eff5d9a9bd target/arm: Implement MVE integer vector comparisons
Implement the MVE integer vector comparison instructions.  These are
"VCMP (vector)" encodings T1, T2 and T3, and "VPT (vector)" encodings
T1, T2 and T3.

These insns compare corresponding elements in each vector, and update
the VPR.P0 predicate bits with the results of the comparison.  VPT
also sets the VPR.MASK01 and VPR.MASK23 fields -- it is effectively
"VCMP then VPST".

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:49 +01:00
Peter Maydell
552517861c target/arm: Factor out gen_vpst()
Factor out the "generate code to update VPR.MASK01/MASK23" part of
trans_VPST(); we are going to want to reuse it for the VPT insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:49 +01:00
Peter Maydell
395b92d50e target/arm: Implement MVE incrementing/decrementing dup insns
Implement the MVE incrementing/decrementing dup insns VIDUP, VDDUP,
VIWDUP and VDWDUP.  These fill the elements of a vector with
successively incrementing values, starting at the offset specified in
a general purpose register.  The final value of the offset is written
back to this register.  The wrapping variants take a second general
purpose register which specifies the point where the count should
wrap back to 0.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:49 +01:00
Peter Maydell
c1bd78cb06 target/arm: Implement MVE VMULL (polynomial)
Implement the MVE VMULL (polynomial) insn.  Unlike Neon, this comes
in two flavours: 8x8->16 and a 16x16->32.  Also unlike Neon, the
inputs are in either the low or the high half of each double-width
element.

The assembler for this insn indicates the size with "P8" or "P16",
encoded into bit 28 as size = 0 or 1. We choose to follow the
same encoding as VQDMULL and decode this into a->size as MO_16
or MO_32 indicating the size of the result elements. This then
carries through to the helper function names where it then
matches up with the existing pmull_h() which does an 8x8->16
operation and a new pmull_w() which does the 16x16->32.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:49 +01:00
Peter Maydell
41704cc262 target/arm: Fix VLDRB/H/W for predicated elements
For vector loads, predicated elements are zeroed, instead of
retaining their previous values (as happens for most data
processing operations). This means we need to distinguish
"beat not executed due to ECI" (don't touch destination
element) from "beat executed but predicated out" (zero
destination element).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:49 +01:00
Peter Maydell
e3152d02da target/arm: Fix VPT advance when ECI is non-zero
We were not paying attention to the ECI state when advancing the VPT
state.  Architecturally, VPT state advance happens for every beat
(see the pseudocode VPTAdvance()), so on every beat the 4 bits of
VPR.P0 corresponding to the current beat are inverted if required,
and at the end of beats 1 and 3 the VPR MASK fields are updated.
This means that if the ECI state says we should not be executing all
4 beats then we need to skip some of the updating of the VPR that we
currently do in mve_advance_vpt().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:49 +01:00
Peter Maydell
e0d40070e1 target/arm: Factor out mve_eci_mask()
In some situations we need a mask telling us which parts of the
vector correspond to beats that are not being executed because of
ECI, separately from the combined "which bytes are predicated away"
mask.  Factor this mask calculation out of mve_element_mask() into
its own function.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:49 +01:00
Peter Maydell
3f4f1880c2 target/arm: Fix calculation of LTP mask when LR is 0
In mve_element_mask(), we calculate a mask for tail predication which
should have a number of 1 bits based on the value of LR.  However,
our MAKE_64BIT_MASK() macro has undefined behaviour when passed a
zero length.  Special case this to give the all-zeroes mask we
require.
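
A sketch of the special case; the variable names are illustrative, not the literal code in mve_element_mask():

    /*
     * MAKE_64BIT_MASK(0, len) expands to a right shift by (64 - len), so a
     * zero length shifts by 64 -- undefined behaviour in C.  Special-case it.
     */
    uint64_t ltpmask = (len == 0) ? 0 : MAKE_64BIT_MASK(0, len);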

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:48 +01:00
Peter Maydell
fdcf2269c4 target/arm: Fix MVE 48-bit SQRSHRL for small right shifts
We got an edge case wrong in the 48-bit SQRSHRL implementation: if
the shift is to the right, although it always makes the result
smaller than the input value it might not be within the 48-bit range
the result is supposed to be if the input had some bits in [63..48]
set and the shift didn't bring all of those within the [47..0] range.

Handle this similarly to the way we already do for this case in
do_uqrshl48_d(): extend the calculated result from 48 bits,
and return that if not saturating or if it doesn't change the
result; otherwise fall through to return a saturated value.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:48 +01:00
Peter Maydell
95351aa76c target/arm: Fix 48-bit saturating shifts
In do_sqrshl48_d() and do_uqrshl48_d() we got some of the edge
cases wrong and failed to saturate correctly:

(1) In do_sqrshl48_d() we used the same code that do_shrshl_bhs()
does to obtain the saturated most-negative and most-positive 48-bit
signed values for the large-shift-left case.  This gives (1 << 47)
for saturate-to-most-negative, but we weren't sign-extending this
value to the 64-bit output as the pseudocode requires.

(2) For left shifts by less than 48, we copied the "8/16 bit" code
from do_sqrshl_bhs() and do_uqrshl_bhs().  This doesn't do the right
thing because it assumes the C type we're working with is at least
twice the number of bits we're saturating to (so that a shift left by
bits-1 can't shift anything off the top of the value).  This isn't
true for bits == 48, so we would incorrectly return 0 rather than the
most-positive value for situations like "shift (1 << 44) right by
20".  Instead check for saturation by doing the shift and signextend
and then testing whether shifting back left again gives the original
value.
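
A sketch of the saturation test described above for left shifts by less than 48, using QEMU's sextract64() and MAKE_64BIT_MASK(); a generic illustration, not the literal patch:

    /* Shift, sign-extend back to 48 bits, then check we can undo the shift. */
    int64_t extval = sextract64((uint64_t)src << shift, 0, 48);
    if ((extval >> shift) == src) {
        return extval;                 /* no overflow: result fits in 48 bits */
    }
    /* Otherwise saturate to the 48-bit extreme, sign-extended to 64 bits. */
    return src >= 0 ? (int64_t)MAKE_64BIT_MASK(0, 47)  /* 0x00007fffffffffff */
                    : -(INT64_C(1) << 47);             /* 0xffff800000000000 */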

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:48 +01:00
Peter Maydell
a5e59e8dcb target/arm: Fix mask handling for MVE narrowing operations
In the MVE helpers for the narrowing operations (DO_VSHRN and
DO_VSHRN_SAT) we were using the wrong bits of the predicate mask for
the 'top' versions of the insn.  This is because the loop works over
the double-sized input elements and shifts the predicate mask by that
many bits each time, but when we write out the half-sized output we
must look at the mask bits for whichever half of the element we are
writing to.

Correct this by shifting the whole mask right by ESIZE bits for the
'top' insns.  This allows us also to simplify the saturation bit
checking (where we had noticed that we needed to look at a different
mask bit for the 'top' insn.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:48 +01:00
Peter Maydell
ed5a59d61f target/arm: Fix signed VADDV
A cut-and-paste error meant we handled signed VADDV like
unsigned VADDV; fix the type used.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:48 +01:00
Peter Maydell
c88ff88498 target/arm: Fix MVE VSLI by 0 and VSRI by <dt>
In the MVE shift-and-insert insns, we special case VSLI by 0
and VSRI by <dt>. VSRI by <dt> means "don't update the destination",
which is what we've implemented. However VSLI by 0 is "set
destination to the input", so we don't want to use the same
special-casing that we do for VSRI by <dt>.

Since the generic logic gives the right answer for a shift
by 0, just use that.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:48 +01:00
Peter Maydell
aa29190826 target/arm: Print MVE VPR in CPU dumps
Include the MVE VPR register value in the CPU dumps produced by
arm_cpu_dump_state() if we are printing FPU information. This
makes it easier to interpret debug logs when predication is
active.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:48 +01:00
Peter Maydell
9dacf0764b target/arm: Note that we handle VMOVL as a special case of VSHLL
Although the architecture doesn't define it as an alias, VMOVL
(vector move long) is encoded as a VSHLL with a zero shift.
Add a comment in the decode file noting that we handle VMOVL
as part of VSHLL.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-08-25 10:48:48 +01:00
Lara Lazier
24d84c7e48 target/i386: Fixed size of constant for Windows
~0UL has 64 bits on Linux and 32 bits on Windows.
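
A small illustration of the portability issue (generic, not the patched line):

    uint64_t mask_bad  = ~0UL;   /* 0x00000000ffffffff on LLP64 Windows,
                                  * where unsigned long is 32 bits wide */
    uint64_t mask_good = ~0ULL;  /* 0xffffffffffffffff on every host */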

Fixes: https://gitlab.com/qemu-project/qemu/-/issues/512
Reported-by: Volker Rümelin <vr_qemu@t-online.de>
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210812111056.26926-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-08-13 14:31:49 +02:00
Philippe Mathieu-Daudé
236f6709ae target/nios2: Mark raise_exception() as noreturn
Raised exceptions don't return, so mark the helper with noreturn.

Fixes: 032c76bc6f ("nios2: Add architecture emulation support")
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210729101315.2318714-1-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-30 08:23:12 -10:00
Peter Maydell
768832575d Bugfixes.
-----BEGIN PGP SIGNATURE-----
 
 iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmECY7oUHHBib256aW5p
 QHJlZGhhdC5jb20ACgkQv/vSX3jHroNRyggAmo+UIFjjOfUK75gb9qWFh/o3Uxga
 ycey2/3KmFUXLdXT5ybZ9TpBbDklIIND5iphn3+ntmJZRnkY6GXbF2StBodTX58o
 /K+nuEaTVV+CHsLdMIFF/ao6QBw8AFf7QMjqyHZuq6NIpnRxzX7oKajMJwlf4rMQ
 y0OpD/PoSsO3DnodIjnWe+WxMlmojafJVNPGMbJ6urYYZfcZN697oMFJLLWWd0ok
 KCKo+h+mcUkJxXEl9zAHf0GeMdI2CVwkECpeC7/1JQFvoIAVOVsUJj7LsLrnDC9D
 u1uaaaiha9JkGI5Q+GbWPYXyb2NmlhzBbTQA+AOKYDu4srsFKlOiBmnilg==
 =kLKX
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/bonzini-gitlab/tags/for-upstream' into staging

Bugfixes.

# gpg: Signature made Thu 29 Jul 2021 09:15:54 BST
# gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg:                issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83

* remotes/bonzini-gitlab/tags/for-upstream:
  libvhost-user: fix -Werror=format= warnings with __u64 fields
  meson: fix meson 0.58 warning with libvhost-user subproject
  target/i386: fix typo in ctl_has_irq
  target/i386: Added consistency checks for event injection
  configure: Add -Werror to avx2, avx512 tests
  Makefile: ignore long options
  i386: assert 'cs->kvm_state' is not null

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-07-29 16:07:02 +01:00
Paolo Bonzini
f594bfb79f target/i386: fix typo in ctl_has_irq
The shift constant was incorrect, causing int_prio to always be zero.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
[Rewritten commit message since v1 had already been included. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-29 10:15:52 +02:00
Lara Lazier
eceb4f0112 target/i386: Added consistency checks for event injection
VMRUN exits with SVM_EXIT_ERR if either:
 * the event injected has a reserved type, or
 * the event injected is of type 3 (exception) and the vector that
   has been specified does not correspond to an exception.

This does not fix the entire exc_inj test in kvm-unit-tests.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210725090855.19713-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-29 10:15:52 +02:00
Vitaly Kuznetsov
e4adb09f79 i386: assert 'cs->kvm_state' is not null
Coverity reports a potential NULL pointer dereference in
get_supported_hv_cpuid_legacy() when 'cs->kvm_state' is NULL. While
'cs->kvm_state' can indeed be NULL in hv_cpuid_get_host(),
kvm_hyperv_expand_features() makes sure that this only happens when
KVM_CAP_SYS_HYPERV_CPUID is supported, and KVM_CAP_SYS_HYPERV_CPUID
implies KVM_CAP_HYPERV_CPUID, so get_supported_hv_cpuid_legacy() is
never really called. Add asserts to strengthen the protection against
broken KVM behavior.

Coverity: CID 1458243
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210716115852.418293-1-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-29 10:15:51 +02:00
Matheus Ferst
2d1154bd95 target/ppc: Ease L=0 requirement on cmp/cmpi/cmpl/cmpli for ppc32
In commit 8f0a4b6a9b, we started to require L=0 for ppc32 to match what
The Programming Environments Manual says:

"For 32-bit implementations, the L field must be cleared, otherwise
the instruction form is invalid."

The stricter behavior, however, broke AROS boot on sam460ex, which is a
regression from 6.0. This patch partially reverts the change, raising
the exception only for CPUs known to require L=0 (e500 and e500mc) and
logging a guest error for other cases.

Both behaviors are acceptable by the PowerISA, which allows "the system
illegal instruction error handler to be invoked or yield boundedly
undefined results."
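
A sketch of the relaxed handling, assuming a decodetree-style trans_* hook; the strict-CPU flag name is invented for illustration, while gen_invalid() and qemu_log_mask(LOG_GUEST_ERROR, ...) follow QEMU's existing interfaces:

    if (a->l && !(ctx->insns_flags & PPC_64B)) {
        if (ctx->strict_cmp_l) {                 /* assumed e500/e500mc flag */
            gen_invalid(ctx);                    /* strict: invalid form     */
            return true;
        }
        qemu_log_mask(LOG_GUEST_ERROR,
                      "Invalid form of CMP: L=1 on a 32-bit CPU\n");
        /* boundedly undefined: continue as if L were 0 */
    }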

Reported-by: BALATON Zoltan <balaton@eik.bme.hu>
Fixes: 8f0a4b6a9b ("target/ppc: Move cmp/cmpi/cmpl/cmpli to decodetree")
Tested-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20210720135507.2444635-1-matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-07-29 10:59:49 +10:00
Richard Henderson
b3d52804c5 target/arm: Add sve-default-vector-length cpu property
Mirror the behaviour of /proc/sys/abi/sve_default_vector_length
under the real Linux kernel.  We have no way of passing along
a real default across exec like the kernel can, but this is a
decent way of adjusting the startup vector length of a process.
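
A small standalone sketch of the intended semantics; the -1 special case comes from the description above, while the rounding policy and helper name are assumptions:

    #include <stdint.h>

    /* Map a default vector length in bytes to a vector quadword count,
     * clamped to what this QEMU build supports; -1 selects the maximum. */
    static uint32_t default_vq(int64_t len_bytes, uint32_t max_vq)
    {
        if (len_bytes < 0) {
            return max_vq;                       /* -1: use QEMU's maximum */
        }
        uint32_t vq = (uint32_t)(len_bytes / 16);  /* 16 bytes per VQ */
        if (vq < 1) {
            vq = 1;
        }
        return vq > max_vq ? max_vq : vq;
    }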

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/482
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210723203344.968563-4-richard.henderson@linaro.org
[PMM: tweaked docs formatting, document -1 special-case,
 added fixup patch from RTH mentioning QEMU's maximum veclen.]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-07-27 10:57:40 +01:00
Richard Henderson
ce440581c1 target/arm: Export aarch64_sve_zcr_get_valid_len
Rename from sve_zcr_get_valid_len and make accessible
from outside of helper.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210723203344.968563-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-07-27 10:57:40 +01:00
Richard Henderson
dc0bc8e785 target/arm: Correctly bound length in sve_zcr_get_valid_len
Currently, our only caller is sve_zcr_len_for_el, which has
already masked the length extracted from ZCR_ELx, so the
masking done here is a nop.  But we will shortly have uses
from other locations, where the length will be unmasked.

Saturate the length to ARM_MAX_VQ instead of truncating to
the low 4 bits.
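
A before/after sketch of the change, assuming ARM_MAX_VQ is QEMU's maximum of 16; the rest of the length-probing loop is unchanged and omitted:

    #define ARM_MAX_VQ 16   /* assumed value of QEMU's maximum */

    /* before: truncate, which wraps once callers pass unmasked lengths */
    /* start_len &= 0xf; */

    /* after: saturate so out-of-range inputs clamp to the maximum */
    if (start_len > ARM_MAX_VQ - 1) {
        start_len = ARM_MAX_VQ - 1;
    }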

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210723203344.968563-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-07-27 10:57:40 +01:00
Mao Zhongyi
a476b21672 docs: Update path that mentions deprecated.rst
Missed in commit f3478392 "docs: Move deprecation, build
and license info out of system/"

Signed-off-by: Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210723065828.1336760-1-maozhongyi@cmss.chinamobile.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-07-27 10:57:40 +01:00
Peter Maydell
d4f6883912 target/arm: Report M-profile alignment faults correctly to the guest
For M-profile, we weren't reporting alignment faults triggered by the
generic TCG code correctly to the guest.  These get passed into
arm_v7m_cpu_do_interrupt() as an EXCP_DATA_ABORT with an A-profile
style exception.fsr value of 1.  We didn't check for this, and so
they fell through into the default of "assume this is an MPU fault"
and were reported to the guest as a data access violation MPU fault.

Report these alignment faults as UsageFaults which set the UNALIGNED
bit in the UFSR.
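
A sketch of the dispatch this describes, assuming QEMU's target/arm M-profile helpers and field macros (armv7m_nvic_set_pending, the banked CFSR, R_V7M_CFSR_UNALIGNED_MASK); the real change sits inside arm_v7m_cpu_do_interrupt()'s EXCP_DATA_ABORT handling and does more bookkeeping than shown:

    /* fsr is the A-profile-style value the generic TCG code passed in */
    if ((fsr & 0x3f) == 0x1) {
        /* Alignment fault: report a UsageFault with UNALIGNED set */
        env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNALIGNED_MASK;
        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
                                env->v7m.secure);
    } else {
        /* default: assume an MPU data access violation */
        env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_DACCVIOL_MASK;
        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM,
                                env->v7m.secure);
    }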

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-4-peter.maydell@linaro.org
2021-07-27 10:57:39 +01:00
Peter Maydell
0c317eb3dd target/arm: Add missing 'return's after calling v7m_exception_taken()
In do_v7m_exception_exit(), we perform various checks as part of
performing the exception return.  If one of these checks fails, the
architecture requires that we take an appropriate exception on the
existing stackframe.  We implement this by calling
v7m_exception_taken() to set up to take the new exception, and then
immediately returning from do_v7m_exception_exit() without proceeding
any further with the unstack-and-exception-return process.

In a couple of checks that are new in v8.1M, we forgot the "return"
statement, with the effect that if bad code in the guest tripped over
these checks we would set up to take a UsageFault exception but then
blunder on trying to also unstack and return from the original
exception, with the probable result that the guest would crash.

Add the missing return statements.
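
The shape of the fix, shown as a hedged fragment: the condition and CFSR bit stand in for the two v8.1M checks, and v7m_exception_taken()'s argument list is assumed from context:

    if (v8_1m_check_failed) {                        /* placeholder condition */
        env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
        v7m_exception_taken(cpu, excret, true, false);
        return;   /* previously missing: stop the exception return here */
    }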

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-3-peter.maydell@linaro.org
2021-07-27 10:57:39 +01:00
Peter Maydell
888f470f12 target/arm: Enforce that M-profile SP low 2 bits are always zero
For M-profile, unlike A-profile, the low 2 bits of SP are defined to be
RES0H, which is to say that they must be hardwired to zero so that
guest attempts to write non-zero values to them are ignored.

Implement this behaviour by masking out the low bits:
 * for writes to r13 by the gdbstub
 * for writes to any of the various flavours of SP via MSR
 * for writes to r13 via store_reg() in generated code

Note that all the direct uses of cpu_R[] in translate.c are in places
where the register is definitely not r13 (usually because that has
been checked for as an UNDEFINED or UNPREDICTABLE case and handled as
UNDEF).

All the other writes to regs[13] in C code are either:
 * A-profile only code
 * writes of values we can guarantee to be aligned, such as
   - writes of previous-SP-value plus or minus a 4-aligned constant
   - writes of the value in an SP limit register (which we already
     enforce to be aligned)
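
A sketch of the generated-code case, simplified from store_reg() in translate.c (the r15 handling the real function has is omitted, so treat this as illustrative); the gdbstub and MSR paths apply the same ~3 mask to the incoming value:

    static void store_reg_sketch(DisasContext *s, int reg, TCGv_i32 var)
    {
        if (reg == 13 && arm_dc_feature(s, ARM_FEATURE_M)) {
            /* M-profile SP bits [1:0] are hardwired to zero, so silently
             * drop them from any value written to r13. */
            tcg_gen_andi_i32(var, var, ~3);
        }
        tcg_gen_mov_i32(cpu_R[reg], var);
        tcg_temp_free_i32(var);
    }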

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-2-peter.maydell@linaro.org
2021-07-27 10:57:39 +01:00
Peter Maydell
1d6f147f04 The Hexagon target was silently failing the SIGSEGV test because
the signal handler was not called.
 
 Patch 1/2 fixes the Hexagon target
 Patch 2/2 drops include qemu.h from target/hexagon/op_helper.c
 
 **** Changes in v2 ****
 Drop changes to linux-test.c due to intermittent failures on riscv
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQEcBAABAgAGBQJg/doaAAoJEHsCRPsS3kQi/gIH+gJ6GBmIb7NNDt+tRYjsnOpZ
 QgmkM/cvOBhqo+dUkxWIDXA7i7ZzytBHHG5GoplVkZjm/S+e5aEsuEyqwL6KbcK7
 kB6NvnHA3n9npf5MGcUduHlvPPzDsO7Z4SLrfwkIliiWL/AJ4FzKqEGoviWv2YnN
 k+29YDSv11B1jgXriADBJVnWtCf2CGPsF7BiKMcguZ6Bj+q+fH1cPpe2EWN8R8n2
 D+La/M5qWEC2FcWPCkrCs61Pi/cV+L4M0IA6JAEm8K+MtoDWmsCNWaVsakiNWWg+
 FRiHg45z3cCBvQ+SLQQ4SvsaQriI3M/yIKD6ABNgAfurIiTj4YbHAbeTmfEFYOs=
 =YdBn
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/quic/tags/pull-hex-20210725' into staging

The Hexagon target was silently failing the SIGSEGV test because
the signal handler was not called.

Patch 1/2 fixes the Hexagon target
Patch 2/2 drops include qemu.h from target/hexagon/op_helper.c

**** Changes in v2 ****
Drop changes to linux-test.c due to intermittent failures on riscv

# gpg: Signature made Sun 25 Jul 2021 22:39:38 BST
# gpg:                using RSA key 7B0244FB12DE4422
# gpg: Good signature from "Taylor Simpson (Rock on) <tsimpson@quicinc.com>" [undefined]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 3635 C788 CE62 B91F D4C5  9AB4 7B02 44FB 12DE 4422

* remotes/quic/tags/pull-hex-20210725:
  target/hexagon: Drop include of qemu.h
  Hexagon (target/hexagon) remove put_user_*/get_user_*

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-07-26 13:36:51 +01:00
Claudio Fontana
5b8978d804 i386: do not call cpudef-only models functions for max, host, base
Some cpu properties have to be set only for cpu models in builtin_x86_defs,
registered with x86_register_cpu_model_type, and not for
cpu models "base", "max", and the subclass "host".

These properties are the ones set by the function x86_cpu_apply_props
(which also covers kvm_default_props and tcg_default_props),
and the "vendor" property for the KVM and HVF accelerators.

After recent refactoring of cpu, which also affected these properties,
they were instead set unconditionally for all x86 cpus.

This was detected as a bug with nested virtualization on AMD with cpu
"host": svm was not turned on by default, because kvm_default_props was
wrongly applied via x86_cpu_apply_props, which set svm to "off".

Rectify the bug introduced in commit "i386: split cpu accelerators"
and document the functions that are builtin_x86_defs-only.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Tested-by: Alexander Bulekov <alxndr@bu.edu>
Fixes: f5cc5a5c ("i386: split cpu accelerators from cpu.c,"...)
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/477
Message-Id: <20210723112921.12637-1-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-23 15:47:13 +02:00
Lara Lazier
3407259b20 target/i386: Added consistency checks for CR3
All MBZ bits in CR3 must be zero (APM2 15.5).
Added checks in both helper_vmrun and helper_write_crN.
When EFER.LMA is zero, the upper 32 bits need to be zeroed.
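
A hedged sketch of both checks; CR3_RESERVED_MASK is an assumed name for the MBZ bits, while cpu_vmexit(), SVM_EXIT_ERR and MSR_EFER_LMA follow QEMU's existing naming:

    static uint64_t checked_cr3(CPUX86State *env, uint64_t new_cr3,
                                uintptr_t ra)
    {
        if (new_cr3 & CR3_RESERVED_MASK) {        /* assumed MBZ mask */
            cpu_vmexit(env, SVM_EXIT_ERR, 0, ra); /* consistency failure */
        }
        if (!(env->efer & MSR_EFER_LMA)) {
            /* outside long mode the upper 32 bits must read as zero */
            new_cr3 = (uint32_t)new_cr3;
        }
        return new_cr3;
    }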

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210723112740.45962-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-23 15:46:20 +02:00
Peter Maydell
7b7ca8ebde Bugfixes.
-----BEGIN PGP SIGNATURE-----
 
 iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmD5bn8UHHBib256aW5p
 QHJlZGhhdC5jb20ACgkQv/vSX3jHroP3AQgAjQ/YziEk0SwA6MeaWfNIdrhj4+I/
 7JXmNlTLRC622IyN9NmJu57Y9Z9PXp/yCLe8V1cTz8K3lnMSBD1ZR1vWB2FtjUnX
 0McaLzcRpmJCeezcKSDJYYVkMQVz2OvNvNyPVK0qRPkt6+knt+9kWNxYAKfsSkln
 L7knUYi4gtM0w0+kQLReohVSJOACQMzl35jXPSArsrWwbZyKZ1pQwgvM3pGMmPv4
 xYNebGjYZRgTul0c5PZsLh9F3TueeTfRvhtwtuyyXPNcvIlgAeV40NuUAXYI6wKF
 FEKtoaBTZUBEOSKK5Z/fYlN+C+e8ItlGurrqvucmjlCqIxotggEf+DYUNQ==
 =/WeY
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/bonzini-gitlab/tags/for-upstream' into staging

Bugfixes.

# gpg: Signature made Thu 22 Jul 2021 14:11:27 BST
# gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg:                issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83

* remotes/bonzini-gitlab/tags/for-upstream:
  configure: Let --without-default-features disable vhost-kernel and vhost-vdpa
  configure: Fix the default setting of the "xen" feature
  configure: Allow vnc to get disabled with --without-default-features
  configure: Fix --without-default-features propagation to meson
  meson: fix dependencies for modinfo
  configure: Drop obsolete check for the alloc_size attribute
  target/i386: Added consistency checks for EFER
  target/i386: Added consistency checks for CR4
  target/i386: Added V_INTR_PRIO check to virtual interrupts
  qemu-config: restore "machine" in qmp_query_command_line_options()
  usb: fix usb-host dependency check
  chardev-spice: add missing module_obj directive
  vl: Parse legacy default_machine_opts
  qemu-config: fix memory leak on ferror()
  qemu-config: never call the callback after an error, fix leak

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-07-22 18:32:02 +01:00
Lara Lazier
d499f196fe target/i386: Added consistency checks for EFER
EFER.SVME has to be set, and EFER reserved bits must
be zero.
In addition, the combinations
 * EFER.LMA or EFER.LME is non-zero and the processor does not support LM
 * non-zero EFER.LME and CR0.PG and zero CR4.PAE
 * non-zero EFER.LME and CR0.PG and zero CR0.PE
 * non-zero EFER.LME, CR0.PG, CR4.PAE, CS.L and CS.D
are all invalid.
(AMD64 Architecture Programmer's Manual, V2, 15.5)
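
A hedged sketch covering most of the list above; EFER_RESERVED_MASK and cpu_has_long_mode() are assumed names, and the CS.L/CS.D case needs the loaded code-segment descriptor so it is left out:

    static bool efer_is_consistent(CPUX86State *env, uint64_t efer)
    {
        bool lme = efer & MSR_EFER_LME;
        bool pg  = env->cr[0] & CR0_PG_MASK;
        bool pae = env->cr[4] & CR4_PAE_MASK;
        bool pe  = env->cr[0] & CR0_PE_MASK;

        if (!(efer & MSR_EFER_SVME) || (efer & EFER_RESERVED_MASK)) {
            return false;                 /* SVME clear or reserved bit set */
        }
        if ((efer & (MSR_EFER_LME | MSR_EFER_LMA)) && !cpu_has_long_mode(env)) {
            return false;                 /* long mode not supported */
        }
        if (lme && pg && (!pae || !pe)) {
            return false;                 /* LME+PG without PAE or PE */
        }
        return true;
    }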

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210721152651.14683-3-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-22 14:44:47 +02:00
Lara Lazier
213ff024a2 target/i386: Added consistency checks for CR4
All MBZ bits in CR4 must be zero. (APM2 15.5)
Added reserved bitmask and added checks in both
helper_vmrun and helper_write_crN.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210721152651.14683-2-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-22 14:44:47 +02:00
Lara Lazier
b128b25a5a target/i386: Added V_INTR_PRIO check to virtual interrupts
The APM2 states that the processor takes a virtual INTR interrupt
if V_IRQ and V_INTR_PRIO indicate that there is a virtual interrupt pending
whose priority is greater than the value in V_TPR.
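
A standalone sketch of that rule; the bit positions used here (V_TPR in bits 3:0, V_IRQ at bit 8, V_INTR_PRIO in bits 19:16) are assumptions for illustration rather than values copied from the VMCB headers:

    #include <stdbool.h>
    #include <stdint.h>

    static bool virtual_intr_pending(uint64_t int_ctl)
    {
        bool     v_irq  = int_ctl & (1ULL << 8);
        unsigned v_prio = (int_ctl >> 16) & 0xf;
        unsigned v_tpr  = int_ctl & 0xf;

        /* Take the virtual INTR only when one is flagged and its
         * priority beats the virtual TPR. */
        return v_irq && v_prio > v_tpr;
    }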

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210721152651.14683-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-22 14:44:47 +02:00
Peter Maydell
25fc9b79cd target/hexagon: Drop include of qemu.h
The qemu.h file is a CONFIG_USER_ONLY header; it doesn't appear on
the include path for softmmu builds.  Currently we include it
unconditionally in target/hexagon/op_helper.c.  We used to need it
for the put_user_*() and get_user_*() functions, but now that we have
removed the uses of those from op_helper.c, the only reason it's
still there is that we're implicitly relying on it pulling in some
other headers.

Explicitly include the headers we need for other functions, and drop
the include of qemu.h.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20210717103017.20491-1-peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
2021-07-21 15:54:02 -05:00
Taylor Simpson
4699a92779 Hexagon (target/hexagon) remove put_user_*/get_user_*
Replace put_user_* with cpu_st*_data_ra
Replace get_user_* with cpu_ld*_data_ra
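
A hedged sketch of the substitution in an op_helper-style function; the helper itself is illustrative, while cpu_stl_data_ra()/cpu_ldl_data_ra() are the existing cpu_ldst.h interfaces the change switches to:

    /* ra is the host return address captured with GETPC() in the
     * top-level helper, as the cpu_*_data_ra() interfaces expect. */
    static void mem_rw_sketch(CPUHexagonState *env, uint32_t vaddr,
                              uint32_t data, uintptr_t ra)
    {
        /* was: put_user_u32(data, vaddr); */
        cpu_stl_data_ra(env, vaddr, data, ra);

        /* was: get_user_u32(data, vaddr); */
        data = cpu_ldl_data_ra(env, vaddr, ra);
        (void)data;
    }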

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <1626384156-6248-2-git-send-email-tsimpson@quicinc.com>
2021-07-21 15:53:09 -05:00
Richard Henderson
b5cf742841 accel/tcg: Remove TranslatorOps.breakpoint_check
The hook is now unused, with breakpoints checked outside translation.

Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-21 07:47:05 -10:00
Richard Henderson
e64cb6c231 target/avr: Implement gdb_adjust_breakpoint
Ensure at registration that all breakpoints are in
code space, not data space.
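
A hedged sketch of what such a hook can look like: gdb addresses AVR data memory at a large virtual offset, so folding addresses back below that offset keeps every breakpoint in code space. OFFSET_DATA is QEMU's name for that offset; treat the body as illustrative rather than the exact implementation:

    static vaddr avr_cpu_gdb_adjust_breakpoint(CPUState *cs, vaddr addr)
    {
        /* Strip the data-space offset so the breakpoint lands in flash. */
        return addr % OFFSET_DATA;
    }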

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-21 07:47:05 -10:00
Richard Henderson
7b9810ea42 target/i386: Implement debug_check_breakpoint
Return false for RF set, as we do in i386_tr_breakpoint_check.
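
A sketch of the hook under the assumption that it keeps i386_tr_breakpoint_check's behaviour: EFLAGS.RF suppresses breakpoints for one instruction, so the hook reports the breakpoint as not hit while RF is set:

    static bool x86_debug_check_breakpoint(CPUState *cs)
    {
        X86CPU *cpu = X86_CPU(cs);
        CPUX86State *env = &cpu->env;

        /* RF set: the breakpoint must not fire until RF is cleared again. */
        return !(env->eflags & RF_MASK);
    }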

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-21 07:47:05 -10:00