KVM code might have to call functions on the PCIDevice that is
passed to kvm_arch_fixup_msi_route(). This fails in the case
where --without-default-devices is used and no board is
configured. While this is not really a useful configuration,
and therefore setting up stubs for CONFIG_PCI is overkill,
failing the build is impolite. Just include the PCI
subsystem if kvm_arch_fixup_msi_route() requires it, as
is the case for ARM and x86.
Reported-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When emulated with QEMU, interrupts will never come in the following
loop. However, if the NOP instruction is uncommented, interrupts will
fire as normal.
    loop:
        cli
        call do_sti
        jmp loop

    do_sti:
        sti
        # nop
        ret
This behavior is different from that of a real processor. For example,
if KVM is enabled, interrupts will always fire regardless of whether the
NOP instruction is commented or not. Also, the Intel Software Developer
Manual states that after the STI instruction is executed, the interrupt
inhibit should end as soon as the next instruction (e.g., the RET
instruction if the NOP instruction is commented) is executed.
This problem is caused because the previous code may choose not to end
the TB even if the HF_INHIBIT_IRQ_MASK has just been reset (e.g., in the
case where the STI instruction is immediately followed by the RET
instruction), so that IRQs may not have a chance to trigger. This commit
fixes the problem by always terminating the current TB to give IRQs a
chance to trigger when HF_INHIBIT_IRQ_MASK is reset.
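As a rough sketch of the idea (not the literal patch; old_flags and
new_flags are assumed names, while HF_INHIBIT_IRQ_MASK and
DISAS_TOO_MANY are the existing QEMU identifiers):

    /* Sketch: when an instruction clears the interrupt shadow, end the
     * translation block so the execution loop can check for pending
     * interrupts before translating/running the next instruction. */
    if ((old_flags & HF_INHIBIT_IRQ_MASK) &&
        !(new_flags & HF_INHIBIT_IRQ_MASK)) {
        s->base.is_jmp = DISAS_TOO_MANY;   /* terminate the current TB */
    }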
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ruihan Li <lrh2000@pku.edu.cn>
Message-ID: <20240415064518.4951-4-lrh2000@pku.edu.cn>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge tag 'pull-request-2024-04-30' of https://gitlab.com/thuth/qemu into staging
* Clean-ups for "errp" handling in s390x cpu_model code
* Fix a possible abort in the "edu" device
* Add missing qga stubs for stand-alone qga builds and re-enable qga-ssh-test
* Fix memory corruption caused by the stm32l4x5 uart device
* Update the s390x custom runner to Ubuntu 22.04
* Fix READ NATIVE MAX ADDRESS IDE commands to avoid a possible crash
* Shorten the runtime of Cirrus-CI jobs
# -----BEGIN PGP SIGNATURE-----
#
# iQJFBAABCAAvFiEEJ7iIR+7gJQEY8+q5LtnXdP5wLbUFAmYwmaMRHHRodXRoQHJl
# ZGhhdC5jb20ACgkQLtnXdP5wLbUCERAAss5PJMG8rI4i4X/3nW49JYTlPOpgm/YX
# /UWF+eHUlqaqDdE0s+Pdw4Ozo3hXQt/E/CkcyflUTzVpnZtpv9vkhNWyjOoPV31v
# GQyQEzGvxZXl2S595XefyAyaMTP5maBhUTlyZWJo385cQraa60Ot5d4Mibr2CobY
# gIBRxEGB/frJYpbHJPxd/FxJV120gtuWAdZwGGYYYjwMzf2IKu2veODB8CnUErlX
# WNUsIzjtAslfh8Ek2ZmPzD7uktCUeigkukqIrLC1oEU3wzbJHkISv1kXCKPW/Nf6
# ISjVa5TqGwkiiF8fw9aYKvWrnPJS7JkhXw7Gz+b39d846kUdNyDfgLcYJeNS3cZ2
# R1xgR9B6hX8ZmikMbGC+0/Sv15u2Yr+bFxJBTJzq6zdOAb9EJNQY1hW2w/Lbrg3X
# LjY+ltcVweoSILT6AE6vGDPCHfBzO+6FcptFvw7ePvRGOlwAPZ3tEB9G2LEbCYgg
# BjWNP4aRuSfbUebO4x4Todz65WN8aY1EIBXORU/wgUlF2+zajWiOI5JRDKjWz2qQ
# gAMeCbLplli5bYrChWtouRIXtb061cQloULddu/SRFcaJOlV3SCzx4JfN15pU90s
# jRMIhMESAEj4NSfclhxsOiYp3ywZTvlQsVA6MgPlu2i3HJakQnt5zbg59TesRn2d
# r5PfAk83UnA=
# =0OB7
# -----END PGP SIGNATURE-----
# gpg: Signature made Tue 30 Apr 2024 12:11:31 AM PDT
# gpg: using RSA key 27B88847EEE0250118F3EAB92ED9D774FE702DB5
# gpg: issuer "thuth@redhat.com"
# gpg: Good signature from "Thomas Huth <th.huth@gmx.de>" [full]
# gpg: aka "Thomas Huth <thuth@redhat.com>" [full]
# gpg: aka "Thomas Huth <th.huth@posteo.de>" [unknown]
# gpg: aka "Thomas Huth <huth@tuxfamily.org>" [full]
* tag 'pull-request-2024-04-30' of https://gitlab.com/thuth/qemu:
.gitlab-ci.d/cirrus: Remove the netbsd and openbsd jobs
.gitlab-ci.d/cirrus.yml: Shorten the runtime of the macOS and FreeBSD jobs
tests/qtest/ide-test: Verify READ NATIVE MAX ADDRESS is not limited
hw/ide/core.c (cmd_read_native_max): Avoid limited device parameters
gitlab: remove stale s390x-all-linux-static conf hacks
gitlab: migrate the s390x custom machine to 22.04
build-environment: make some packages optional
hw/char/stm32l4x5_usart: Fix memory corruption by adding correct class_size
qga: Re-enable the qga-ssh-test when running without fuzzing
stubs: Add missing qga stubs
hw: misc: edu: use qemu_log_mask instead of hw_error
hw: misc: edu: rename local vars in edu_check_range
hw: misc: edu: fix 2 off-by-one errors
target/s390x/cpu_models_sysemu: Drop local @err in apply_cpu_model()
target/s390x/cpu_models: Make kvm_s390_apply_cpu_model() return boolean
target/s390x/cpu_models: Drop local @err in get_max_cpu_model()
target/s390x/cpu_models: Make kvm_s390_get_host_cpu_model() return boolean
target/s390x/cpu_model: Drop local @err in s390_realize_cpu_model()
target/s390x/cpu_model: Make check_compatibility() return boolean
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
In previous versions of the Arm architecture, the frequency of the
generic timers as reported in CNTFRQ_EL0 could be any IMPDEF value,
and for QEMU we picked 62.5MHz, giving a timer tick period of 16ns.
In Armv8.6, the architecture standardized this frequency to 1GHz.
Because there is no ID register feature field that indicates whether
a CPU is v8.6 or that it ought to have this counter frequency, we
implement this by changing our default CNTFRQ value for all CPUs,
with exceptions for backwards compatibility:
* CPU types which we already implement will retain the old
default value. None of these are v8.6 CPUs, so this is
architecturally OK.
* CPUs used in versioned machine types with a version of 9.0
or earlier will retain the old default value.
The upshot is that the only CPU type that changes is 'max'; but any
new type we add in future (whether v8.6 or not) will also get the new
1GHz default.
It remains the case that the machine model can override the default
value via the 'cntfrq' QOM property (regardless of the CPU type).
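For illustration only (legacy_cntfrq_default is an assumed helper name;
the real code keys this off the CPU type and the versioned-machine
compatibility handling described above), the default selection amounts
to:

    /* Sketch: pick the generic timer frequency if the board did not
     * set the 'cntfrq' property: 1 GHz for new CPU types, the legacy
     * 62.5 MHz for existing types and 9.0-or-older machine versions. */
    if (cpu->gt_cntfrq_hz == 0) {
        cpu->gt_cntfrq_hz = legacy_cntfrq_default ? 62500000 : 1000000000;
    }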
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240426122913.3427983-5-peter.maydell@linaro.org
The generic timer frequency is settable by board code via a QOM
property "cntfrq", but otherwise defaults to 62.5MHz. The way this
is done includes some complication resulting from how this was
originally a fixed value with no QOM property. Clean it up:
* always set cpu->gt_cntfrq_hz to some sensible value, whether
the CPU has the generic timer or not, and whether it's system
or user-only emulation
* this means we can always use gt_cntfrq_hz, and never need
the old GTIMER_SCALE define
* set the default value in exactly one place, in the realize fn
The aim here is to pave the way for handling the ARMv8.6 requirement
that the generic timer frequency is always 1GHz. We're going to do
that by having old CPU types keep their legacy-in-QEMU behaviour and
having the default for any new CPU types be a 1GHz rather than 62.5MHz
cntfrq, so we want the point where the default is decided to be in
one place, and in code, not in a DEFINE_PROP_UINT64() initializer.
This commit should have no behavioural changes.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240426122913.3427983-2-peter.maydell@linaro.org
FEAT_Spec_FPACC is a feature describing speculative behaviour in the
event of a PAC authentication failure when FEAT_FPACCOMBINE is
implemented. FEAT_Spec_FPACC means that the speculative use of
pointers processed by a PAC Authentication is not materially
different in terms of the impact on cached microarchitectural state
(caches, TLBs, etc) between passing and failing of the PAC
Authentication.
QEMU doesn't do speculative execution, so we can advertise
this feature.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240418152004.2106516-6-peter.maydell@linaro.org
Newer versions of the Arm ARM (e.g. rev K.a) now define fields for
ID_AA64MMFR3_EL1. Implement this register, so that we can set the
fields if we need to. There's no behaviour change here since we
don't currently set the register value to non-zero.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240418152004.2106516-5-peter.maydell@linaro.org
FEAT_ETS2 is a tighter set of guarantees about memory ordering
involving translation table walks than the old FEAT_ETS; FEAT_ETS has
been retired from the Arm ARM and the old ID_AA64MMFR1.ETS == 1
now gives no greater guarantees than ETS == 0.
FEAT_ETS2 requires:
* the virtual address of a load or store that appears in program
order after a DSB cannot be translated until after the DSB
completes (section B2.10.9)
* TLB maintenance operations that only affect translations without
execute permission are guaranteed complete after a DSB
(R_BLDZX)
* if a memory access RW1 is ordered-before memory access RW2,
then RW1 is also ordered-before any translation table walk
generated by RW2 that generates a Translation, Address size
or Access flag fault (R_NNFPF, I_CLGHP)
As with FEAT_ETS, QEMU is already compliant, because we do not
reorder translation table walk memory accesses relative to other
memory accesses, and we always guarantee to have finished TLB
maintenance as soon as the TLB op is done.
Update the documentation to list FEAT_ETS2 instead of the
no-longer-existent FEAT_ETS, and update the 'max' CPU ID registers.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240418152004.2106516-4-peter.maydell@linaro.org
FEAT_CSV2_3 adds a mechanism to identify if hardware cannot disclose
information about whether branch targets and branch history trained
in one hardware described context can control speculative execution
in a different hardware context.
There is no branch prediction in TCG, so we don't need to do anything
to be compliant with this. Update the '-cpu max' ID registers to
advertise the feature.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240418152004.2106516-3-peter.maydell@linaro.org
For cpus using PMSA, when the MPU is disabled, the default memory
type is Normal, Non-cacheable. This means that it should not
have alignment restrictions enforced.
Cc: qemu-stable@nongnu.org
Fixes: 59754f85ed ("target/arm: Do memory type alignment check when translation disabled")
Reported-by: Clément Chigot <chigot@adacore.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Clément Chigot <chigot@adacore.com>
Message-id: 20240422170722.117409-1-richard.henderson@linaro.org
[PMM: trivial comment, commit message tweaks]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
As it had never been used since the first commit a1477da3dd ("hvf: Add
Apple Silicon support").
Signed-off-by: Zenghui Yu <zenghui.yu@linux.dev>
Message-id: 20240422092715.71973-1-zenghui.yu@linux.dev
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use @errp to fetch error information directly and drop the local
variable @err.
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20240425031232.1586401-8-zhao1.liu@intel.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
As error.h suggests, the best practice for a callee is to return
something to indicate success / failure.
So make kvm_s390_apply_cpu_model() return boolean and check the
returned boolean in apply_cpu_model() instead of accessing @err.
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20240425031232.1586401-7-zhao1.liu@intel.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Use @errp to fetch error information directly and drop the local
variable @err.
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20240425031232.1586401-6-zhao1.liu@intel.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
As error.h suggests, the best practice for a callee is to return
something to indicate success / failure.
So make kvm_s390_get_host_cpu_model() return boolean and check the
returned boolean in get_max_cpu_model() instead of accessing @err.
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20240425031232.1586401-5-zhao1.liu@intel.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Use @errp to fetch error information directly and drop the local
variable @err.
Suggested-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20240425031232.1586401-3-zhao1.liu@intel.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
As error.h suggests, the best practice for a callee is to return
something to indicate success / failure.
With returned boolean, there's no need to check @err.
Suggested-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20240425031232.1586401-2-zhao1.liu@intel.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Commit d424db2354 excluded some strerrorname_np() instances because they
break musl libc builds. Another instance happened to slip by via commit
d4ff3da8f4.
Remove it before it causes trouble again.
Fixes: d4ff3da8f4 (target/riscv/kvm: initialize 'vlenb' via get-reg-list)
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Fixes: 1590154ee4 ("target/loongarch: Fix qemu-system-loongarch64 assert failed with the option '-d int'")
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Printing a "PowerPC" in front of each CPU name is not helpful at all:
It is confusing for the users since they don't know whether they
have to specify these letters for the "-cpu" parameter, too, and
it also takes some precious space in the dense output of the CPU
entries. Let's simply remove this now and use two spaces at the
beginning of the lines for the indentation of the entries instead,
and add a "Available CPUs" in the very first line, like most other
target architectures are doing it for their CPU help output already.
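For illustration (hypothetical excerpt, actual model names and columns
may differ), the help output changes roughly from:

  PowerPC power9_v2.2
  PowerPC power10_v2.0

to:

  Available CPUs:
    power9_v2.2
    power10_v2.0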
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Printing an "s390x" in front of each CPU name is not helpful at all:
It is confusing for the users since they don't know whether they
have to specify these letters for the "-cpu" parameter, too, and
it also takes some precious space in the dense output of the CPU
entries. Let's simply remove this now!
While we're at it, use two spaces at the beginning of the lines for
the indentation of the entries, and add a "Available CPUs" in the
very first line, like most other target architectures are doing it
for their "-cpu help" output already.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Printing an "x86" in front of each CPU name is not helpful at all:
It is confusing for the users since they don't know whether they
have to specify these letters for the "-cpu" parameter, too, and
it also takes some precious space in the dense output of the CPU
entries. Let's simply remove this now and use two spaces at the
beginning of the lines for the indentation of the entries instead,
like most other target architectures are doing it for their CPU help
output already.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Add init_cmdline and set boot_info->a0, a1
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Message-Id: <20240426091551.2397867-5-gaosong@loongson.cn>
The CPUBreakpoint and CPUWatchpoint structures are declared
in "hw/core/cpu.h", which contains declarations related to
CPUState and CPUClass. Some source files only require the
BP/WP definitions and don't need to pull in all CPU* API.
In order to simplify, create a new "exec/breakpoint.h" header.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20240418192525.97451-3-philmd@linaro.org>
HVF has a specific use of the CPUState::vcpu_dirty field
(CPUState::vcpu_dirty is not used by common code).
To make this field accel-specific, add and use a new
@dirty variable in the AccelCPUState structure.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240424174506.326-4-philmd@linaro.org>
NVMM has a specific use of the CPUState::vcpu_dirty field
(CPUState::vcpu_dirty is not used by common code).
To make this field accel-specific, add and use a new
@dirty variable in the AccelCPUState structure.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240424174506.326-3-philmd@linaro.org>
WHPX has a specific use of the CPUState::vcpu_dirty field
(CPUState::vcpu_dirty is not used by common code).
To make this field accel-specific, add and use a new
@dirty variable in the AccelCPUState structure.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240424174506.326-2-philmd@linaro.org>
The XRSTOR instruction ends up calling tlb_flush(), which is declared
in "exec/exec-all.h".
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20231211212003.21686-13-philmd@linaro.org>
We have abi_ulong == uint32_t for the 32-bit ABI.
Use the generic type to avoid depending on the
"exec/user/abitypes.h" header.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240418192525.97451-14-philmd@linaro.org>
'abi_ptr' is a user-specific type. The system emulation
equivalent is 'target_ulong'. Use it in ppc_ldl_code()
to emphasize that this is not a user emulation function.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Acked-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20231211212003.21686-18-philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
accel/tcg/ files require the following definitions:
- TARGET_LONG_BITS
- TARGET_PAGE_BITS
- TARGET_PHYS_ADDR_SPACE_BITS
- TCG_GUEST_DEFAULT_MO
The first 3 are defined in "cpu-param.h". The last one
in "cpu.h", with a bunch of definitions irrelevant for
TCG. By moving the TCG_GUEST_DEFAULT_MO definition to
"cpu-param.h", we can simplify various accel/tcg includes.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20231211212003.21686-4-philmd@linaro.org>
We only need the "exec/tswap.h" and "cpu-param.h" headers.
Only include "cpu.h" in the target gdbstub.c source files.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240418192525.97451-20-philmd@linaro.org>
These files call cpu_ldl_code(), which is declared
in "exec/cpu_ldst.h".
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20231211212003.21686-5-philmd@linaro.org>
User-only objects might benefit from the "exec/target_page.h"
API, which allows building some objects once for all targets.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20231211212003.21686-3-philmd@linaro.org>
'NEED_CPU_H' guards target-specific code; it is defined by meson
along with the 'CONFIG_TARGET' definition. Rename NEED_CPU_H
to COMPILING_PER_TARGET to clarify its meaning.
Mechanical change running:
$ sed -i s/NEED_CPU_H/COMPILING_PER_TARGET/g $(git grep -l NEED_CPU_H)
then manually add a /* COMPILING_PER_TARGET */ comment
after the '#endif' when the block is large.
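For example, a typical guarded block looks like this after the rename
(illustrative snippet, not taken from a specific file):

    #ifdef COMPILING_PER_TARGET
    /* target-specific declarations, only compiled for per-target
     * objects (formerly guarded by NEED_CPU_H) */
    #endif /* COMPILING_PER_TARGET */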
Inspired-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240322161439.6448-4-philmd@linaro.org>
Merge tag 'pull-target-arm-20240425' of https://git.linaro.org/people/pmaydell/qemu-arm into staging
target-arm queue:
* Implement FEAT_NMI and NMI support in the GICv3
* hw/dma: avoid apparent overflow in soc_dma_set_request
* linux-user/flatload.c: Remove unused bFLT shared-library and ZFLAT code
* Add ResetType argument to Resettable hold and exit phase methods
* Add RESET_TYPE_SNAPSHOT_LOAD ResetType
* Implement STM32L4x5 USART
# -----BEGIN PGP SIGNATURE-----
#
# iQJNBAABCAA3FiEE4aXFk81BneKOgxXPPCUl7RQ2DN4FAmYqMhMZHHBldGVyLm1h
# eWRlbGxAbGluYXJvLm9yZwAKCRA8JSXtFDYM3uVlD/47U3zYP33y4+wJcRScC0QI
# jYd82jS7GhD5YP5QPrIEMaSbDwtYGi4Rez1taaHvZ2fWLg2gE973iixmTaM2mXCd
# xPEqMsRXkFrQnC89K5/v9uR04AvHxoM8J2mD2OKnUT0RVBs38WxCUMLETBsD18/q
# obs1RzDRhEs5BnwwPMm5HI1iQeVvDRe/39O3w3rZfA8DuqerrNOQWuJd43asHYjO
# Gc1QzCGhALlXDoqk11IzjhJ7es8WbJ5XGvrSNe9QLGNJwNsu9oi1Ez+5WK2Eht9r
# eRvGNFjH4kQY1YCShZjhWpdzU9KT0+80KLirMJFcI3vUztrYZ027/rMyKLHVOybw
# YAqgEUELwoGVzacpaJg73f77uknKoXrfTH25DfoLX0yFCB35JHOPcjU4Uq1z1pfV
# I80ZcJBDJ95mXPfyKLrO+0IyVBztLybufedK2aiH16waEGDpgsJv66FB2QRuQBYW
# O0i6/4DEUZmfSpOmr8ct+julz7wCWSjbvo6JFWxzzxvD0M5T3AFKXZI244g1SMdh
# LS8V7WVCVzVJ5mK8Ujp2fVaIIxiBzlXVZrQftWv5rhyDOiIIeP8pdekmPlI6p5HK
# 3/2efzSYNL2UCDZToIq24El/3md/7vHR6DBfBT1/pagxWUstqqLgkJO42jQtTG0E
# JY1cZ/EQY7cqXGrww8lhWA==
# =WEsU
# -----END PGP SIGNATURE-----
# gpg: Signature made Thu 25 Apr 2024 03:36:03 AM PDT
# gpg: using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg: issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [full]
# gpg: aka "Peter Maydell <pmaydell@gmail.com>" [full]
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [full]
# gpg: aka "Peter Maydell <peter@archaic.org.uk>" [unknown]
* tag 'pull-target-arm-20240425' of https://git.linaro.org/people/pmaydell/qemu-arm: (37 commits)
tests/qtest: Add tests for the STM32L4x5 USART
hw/arm: Add the USART to the stm32l4x5 SoC
hw/char/stm32l4x5_usart: Add options for serial parameters setting
hw/char/stm32l4x5_usart: Enable serial read and write
hw/char: Implement STM32L4x5 USART skeleton
reset: Add RESET_TYPE_SNAPSHOT_LOAD
docs/devel/reset: Update to new API for hold and exit phase methods
hw, target: Add ResetType argument to hold and exit phase methods
scripts/coccinelle: New script to add ResetType to hold and exit phases
allwinner-i2c, adm1272: Use device_cold_reset() for software-triggered reset
hw/misc: Don't special case RESET_TYPE_COLD in npcm7xx_clk, gcr
linux-user/flatload.c: Remove unused bFLT shared-library and ZFLAT code
hw/dma: avoid apparent overflow in soc_dma_set_request
hw/arm/virt: Enable NMI support in the GIC if the CPU has FEAT_NMI
target/arm: Add FEAT_NMI to max
hw/intc/arm_gicv3: Report the VINMI interrupt
hw/intc/arm_gicv3: Report the NMI interrupt in gicv3_cpuif_update()
hw/intc/arm_gicv3: Implement NMI interrupt priority
hw/intc/arm_gicv3: Handle icv_nmiar1_read() for icc_nmiar1_read()
hw/intc/arm_gicv3: Add NMI handling CPU interface registers
...
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Since the calls are elided when KVM is not available,
we can remove the stubs (which are never compiled).
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20240419090631.48055-1-philmd@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
We pass a ResetType argument to the Resettable class enter
phase method, but we don't pass it to hold and exit, even though
the callsites have it readily available. This means that if
a device cared about the ResetType it would need to record it
in the enter phase method to use later on. Pass the type to
all three of the phase methods to avoid having to do that.
Commit created with
  for dir in hw target include; do \
      spatch --macro-file scripts/cocci-macro-file.h \
             --sp-file scripts/coccinelle/reset-type.cocci \
             --keep-comments --smpl-spacing --in-place \
             --include-headers --dir $dir; done
and no manual edits.
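The resulting phase-method prototypes look roughly like this (sketch of
the Resettable phase callbacks after the change; see
include/hw/resettable.h for the authoritative definitions):

    /* Sketch: all three Resettable phases now receive the ResetType. */
    void (*enter)(Object *obj, ResetType type);
    void (*hold)(Object *obj, ResetType type);
    void (*exit)(Object *obj, ResetType type);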
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Luc Michel <luc.michel@amd.com>
Message-id: 20240412160809.1260625-5-peter.maydell@linaro.org
Enable FEAT_NMI on the 'max' CPU.
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240407081733.3231820-24-ruanjinjie@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
According to Arm GIC section 4.6.3 Interrupt superpriority, the interrupt
with superpriority is always IRQ, never FIQ, so the NMI exception trap entry
behaves like IRQ. VINMI (vIRQ with superpriority) can be raised from the
GIC or come from the hcrx_el2.HCRX_VINMI bit, while VFNMI (vFIQ with
superpriority) comes from the hcrx_el2.HCRX_VFNMI bit.
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240407081733.3231820-13-ruanjinjie@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Set or clear PSTATE.ALLINT on taking an exception to ELx according to the
SCTLR_ELx.SPINTMASK bit.
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240407081733.3231820-10-ruanjinjie@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add the IS and FS bits in ISR_EL1 and handle the read. With CPU_INTERRUPT_NMI or
CPU_INTERRUPT_VINMI, both CPSR_I and ISR_IS must be set. With
CPU_INTERRUPT_VFNMI, both CPSR_F and ISR_FS must be set.
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240407081733.3231820-9-ruanjinjie@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
According to Arm GIC section 4.6.3 Interrupt superpriority, the interrupt
with superpriority is always IRQ, never FIQ, so handle NMI the same as IRQ in
arm_phys_excp_target_el().
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240407081733.3231820-8-ruanjinjie@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This only implements the external delivery method via the GICv3.
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240407081733.3231820-7-ruanjinjie@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add ALLINT MSR (immediate) to decodetree, in which the CRm is 0b000x. The
EL0 check is necessary for ALLINT, and the EL1 check is necessary when
imm == 1. So implement it inline for EL2/3, or EL1 with imm==0. Avoid the
unconditional write to pc and use raise_exception_ra to unwind.
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240407081733.3231820-5-ruanjinjie@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add support for FEAT_NMI. NMI (FEAT_NMI) is a mandatory feature in
Armv8.8-A and Armv9.3-A.
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240407081733.3231820-4-ruanjinjie@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When PSTATE.ALLINT is set, an IRQ or FIQ interrupt that is targeted to
ELx, with or without superpriority, is masked. As Richard suggested, place
ALLINT bit in PSTATE in env->pstate.
In the pseudocode, AArch64.ExceptionReturn() calls SetPSTATEFromPSR(), which
treats PSTATE.ALLINT as one of the bits which are reinstated from SPSR to
PSTATE regardless of whether this is an illegal exception return or not. So
handle PSTATE.ALLINT the same way as PSTATE.DAIF in the illegal_return exit
path of the exception_return helper. With the change, exception entry and
return are automatically handled.
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240407081733.3231820-3-ruanjinjie@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
FEAT_NMI defines another three new bits in HCRX_EL2: TALLINT, HCRX_VINMI and
HCRX_VFNMI. When the feature is enabled, allow these bits to be written in
HCRX_EL2.
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240407081733.3231820-2-ruanjinjie@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move the APIC-related code that is split between cpu-sysemu.c and
monitor.c into cpu-apic.c.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Message-Id: <20240321154838.95771-4-philmd@linaro.org>
According to the m68k semihosting spec:
"The instruction used to trigger a semihosting request depends on the
m68k processor variant. On ColdFire, "halt" is used; on other processors
(which don't implement "halt"), "bkpt #0" may be used."
Add support for non-ColdFire processors by matching BKPT #0 instructions.
Signed-off-by: Keith Packard <keithp@keithp.com>
[rth: Use semihosting_test()]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Replace EXCP_HALT_INSN by EXCP_SEMIHOSTING. Perform the pre-
and post-insn tests during translate, leaving only the actual
semihosting operation for the exception.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Instead of using d0 (the semihost function number), use d1 (the
provided exit status).
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20230802161914.395443-2-keithp@keithp.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The Nios II target is deprecated since v8.2 in commit 9997771bc1
("target/nios2: Deprecate the Nios II architecture").
Remove:
- Buildsys / CI infra
- User emulation
- System emulation (10m50-ghrd & nios2-generic-nommu machines)
- Tests
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Marek Vasut <marex@denx.de>
Message-Id: <20240327144806.11319-3-philmd@linaro.org>
The various Intel CPU manuals claim that SGDT and SIDT can write either 24-bits
or 32-bits depending upon the operand size, but this is incorrect. Not only do
the Intel CPU manuals give contradictory information between processor
revisions, but this information doesn't even match real-life behaviour.
In fact, tests on real hardware show that the CPU always writes 32-bits for SGDT
and SIDT, and this behaviour is required for at least OS/2 Warp and WFW 3.11 with
Win32s to function correctly. Remove the masking applied due to the operand size
for SGDT and SIDT so that the TCG behaviour matches the behaviour on real
hardware.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2198
--
MCA: Whilst I don't have a copy of OS/2 Warp handy, I've confirmed that this
patch fixes the issue in WFW 3.11 with Win32s. For more technical information I
highly recommend the excellent write-up at
https://www.os2museum.com/wp/sgdtsidt-fiction-and-reality/.
Message-ID: <20240419195147.434894-1-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Currently, the difference between warn_report_once() and
error_report_once() is that the former has the "warning:" prefix, while
the latter does not have a similar level prefix.
At the same time, considering that there is no error handling logic here
and the purpose of error_report_once() is only to prompt the user with a
message about an abnormal situation, there is no need for an error-level
message here; we can just use a warning instead.
Therefore, downgrade the message in error_report_once() to a warning and
merge it into the previous warn_report_once().
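As a usage illustration (hypothetical message text), the once-only
warning then needs no static flag at the call site:

    /* Prints the message a single time, prefixed with "warning:" and
     * the program name. */
    warn_report_once("unsupported configuration detected, continuing anyway");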
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Message-ID: <20240327103951.3853425-4-zhao1.liu@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The difference between error_printf() and error_report() is that the
latter may contain more information, such as the name of the program
("qemu-system-x86_64").
Thus its variant error_report_once() and warn_report()'s variant
warn_report_once() can be used here to print the information only once
without a static local variable "ht_warned".
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Message-ID: <20240327103951.3853425-3-zhao1.liu@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use warn_report_once() to get rid of the static local variable "warned".
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Message-ID: <20240327103951.3853425-2-zhao1.liu@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Newer 9.1 machine types will default to using the KVM_SEV_INIT2 API for
creating SEV/SEV-ES guests going forward. However, this API results in
guest measurement changes which are generally not expected for users of
these older machine types and can cause disruption if they switch to a
newer QEMU/kernel version. Avoid this by continuing to use the older
KVM_SEV_INIT/KVM_SEV_ES_INIT APIs for older machine types.
Signed-off-by: Michael Roth <michael.roth@amd.com>
Message-ID: <20240409230743.962513-4-michael.roth@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
QEMU will currently automatically make use of the KVM_SEV_INIT2 API for
initializing SEV and SEV-ES guests versus the older
KVM_SEV_INIT/KVM_SEV_ES_INIT interfaces.
However, the older interfaces will silently avoid syncing FPU/XSAVE
state to the VMSA prior to encryption, thus relying on behavior and
measurements that assume the related fields to be all zero.
With KVM_SEV_INIT2, this state is now synced into the VMSA, resulting in
measurement changes and, theoretically, behavioral changes, though the
latter are unlikely to be seen in practice.
To allow a smooth transition to the newer interface, while still
providing a mechanism to maintain backward compatibility with VMs
created using the older interfaces, provide a new command-line
parameter:
-object sev-guest,legacy-vm-type=true,...
and have it default to false.
Signed-off-by: Michael Roth <michael.roth@amd.com>
Message-ID: <20240409230743.962513-2-michael.roth@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Implement support for the KVM_X86_SEV_VM and KVM_X86_SEV_ES_VM virtual
machine types, and the KVM_SEV_INIT2 function of KVM_MEMORY_ENCRYPT_OP.
These replace the KVM_SEV_INIT and KVM_SEV_ES_INIT functions, and have
several advantages:
- sharing the initialization sequence with SEV-SNP and TDX
- allowing arguments including the set of desired VMSA features
- protection against invalid use of KVM_GET/SET_* ioctls for guests
with encrypted state
If the KVM_X86_SEV_VM and KVM_X86_SEV_ES_VM types are not supported,
fall back to KVM_SEV_INIT and KVM_SEV_ES_INIT (which use the
default x86 VM type).
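A minimal sketch of the fallback, assuming the VM-type bitmask is
probed via KVM_CAP_VM_TYPES (simplified, not the literal QEMU code):

    /* Sketch: prefer the SEV-specific VM types plus KVM_SEV_INIT2,
     * otherwise fall back to the legacy init on the default VM type. */
    if (kvm_check_extension(kvm_state, KVM_CAP_VM_TYPES) &
        BIT(KVM_X86_SEV_ES_VM)) {
        /* create the VM as KVM_X86_SEV_ES_VM and use KVM_SEV_INIT2 */
    } else {
        /* create a default VM and use KVM_SEV_INIT / KVM_SEV_ES_INIT */
    }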
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM is introducing a new API to create confidential guests, which
will be used by TDX and SEV-SNP but is also available for SEV and
SEV-ES. The API uses the VM type argument to KVM_CREATE_VM to
identify which confidential computing technology to use.
Since there are no other expected uses of VM types, delegate
mc->kvm_type() for x86 boards to the confidential-guest-support
object pointed to by ms->cgs.
For example, if a sev-guest object is specified to confidential-guest-support,
like,
  qemu -machine ...,confidential-guest-support=sev0 \
       -object sev-guest,id=sev0,...
it will check if a VM type KVM_X86_SEV_VM or KVM_X86_SEV_ES_VM
is supported, and if so use them together with the KVM_SEV_INIT2
function of the KVM_MEMORY_ENCRYPT_OP ioctl. If not, it will fall back to
KVM_SEV_INIT and KVM_SEV_ES_INIT.
This is a preparatory work towards TDX and SEV-SNP support, but it
will also enable support for VMSA features such as DebugSwap, which
are only available via KVM_SEV_INIT2.
Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Introduce a common superclass for x86 confidential guest implementations.
It will extend ConfidentialGuestSupportClass with a method that provides
the VM type to be passed to KVM_CREATE_VM.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Board reset requires writing a fresh CPU state. As far as KVM is
concerned, the only thing that blocks reset is that CPU state is
encrypted; therefore, kvm_cpus_are_resettable() can simply check
if that is the case.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
So far, KVM has allowed KVM_GET/SET_* ioctls to execute even if the
guest state is encrypted, in which case they do nothing. For the new
API using VM types, instead, the ioctls will fail which is a safer and
more robust approach.
The new API will be the only one available for SEV-SNP and TDX, but it
is also usable for SEV and SEV-ES. In preparation for that, require
architecture-specific KVM code to communicate the point at which guest
state is protected (which must be after kvm_cpu_synchronize_post_init(),
though that might change in the future in order to support migration).
From that point, skip reading registers so that cpu->vcpu_dirty is
never true: if it ever becomes true, kvm_arch_put_registers() will
fail miserably.
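The idea can be sketched as follows (guest_state_protected is the
assumed flag name; the actual synchronization path is more involved):

    /* Sketch: once guest state is protected, stop reading registers
     * from KVM, so cpu->vcpu_dirty never becomes true afterwards. */
    if (!kvm_state->guest_state_protected) {
        kvm_arch_get_registers(cpu);
        cpu->vcpu_dirty = true;
    }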
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use unified confidential_guest_kvm_init() for consistency with
other architectures.
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Message-Id: <20240229060038.606591-1-xiaoyao.li@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use confidential_guest_kvm_init() instead of calling SEV
specific sev_kvm_init(). This allows the introduction of multiple
confidential-guest-support subclasses for different x86 vendors.
As a bonus, stubs are not needed anymore since there is no
direct call from target/i386/kvm/kvm.c to SEV code.
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Message-Id: <20240229060038.606591-1-xiaoyao.li@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Register File Data Sampling (RFDS) is a CPU side-channel vulnerability
that may expose stale register values. CPUs that set the RFDS_NO bit in
MSR IA32_ARCH_CAPABILITIES indicate that they are not vulnerable to RFDS.
Similarly, RFDS_CLEAR indicates that CPU is affected by RFDS, and has
the microcode to help mitigate RFDS.
Make RFDS_CLEAR and RFDS_NO bits available to guests.
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Message-ID: <9a38877857392b5c2deae7e7db1b170d15510314.1710341348.git.pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
According to table 1-2 in Intel Architecture Instruction Set Extensions and
Future Features (rev 051) [1], SierraForest has the following new features
which have already been virtualized:
- CMPCCXADD CPUID.(EAX=7,ECX=1):EAX[bit 7]
- AVX-IFMA CPUID.(EAX=7,ECX=1):EAX[bit 23]
- AVX-VNNI-INT8 CPUID.(EAX=7,ECX=1):EDX[bit 4]
- AVX-NE-CONVERT CPUID.(EAX=7,ECX=1):EDX[bit 5]
Add above features to new CPU model SierraForest. Comparing with GraniteRapids
CPU model, SierraForest bare-metal removes the following features:
- HLE CPUID.(EAX=7,ECX=0):EBX[bit 4]
- RTM CPUID.(EAX=7,ECX=0):EBX[bit 11]
- AVX512F CPUID.(EAX=7,ECX=0):EBX[bit 16]
- AVX512DQ CPUID.(EAX=7,ECX=0):EBX[bit 17]
- AVX512_IFMA CPUID.(EAX=7,ECX=0):EBX[bit 21]
- AVX512CD CPUID.(EAX=7,ECX=0):EBX[bit 28]
- AVX512BW CPUID.(EAX=7,ECX=0):EBX[bit 30]
- AVX512VL CPUID.(EAX=7,ECX=0):EBX[bit 31]
- AVX512_VBMI CPUID.(EAX=7,ECX=0):ECX[bit 1]
- AVX512_VBMI2 CPUID.(EAX=7,ECX=0):ECX[bit 6]
- AVX512_VNNI CPUID.(EAX=7,ECX=0):ECX[bit 11]
- AVX512_BITALG CPUID.(EAX=7,ECX=0):ECX[bit 12]
- AVX512_VPOPCNTDQ CPUID.(EAX=7,ECX=0):ECX[bit 14]
- LA57 CPUID.(EAX=7,ECX=0):ECX[bit 16]
- TSXLDTRK CPUID.(EAX=7,ECX=0):EDX[bit 16]
- AMX-BF16 CPUID.(EAX=7,ECX=0):EDX[bit 22]
- AVX512_FP16 CPUID.(EAX=7,ECX=0):EDX[bit 23]
- AMX-TILE CPUID.(EAX=7,ECX=0):EDX[bit 24]
- AMX-INT8 CPUID.(EAX=7,ECX=0):EDX[bit 25]
- AVX512_BF16 CPUID.(EAX=7,ECX=1):EAX[bit 5]
- fast zero-length MOVSB CPUID.(EAX=7,ECX=1):EAX[bit 10]
- fast short CMPSB, SCASB CPUID.(EAX=7,ECX=1):EAX[bit 12]
- AMX-FP16 CPUID.(EAX=7,ECX=1):EAX[bit 21]
- PREFETCHI CPUID.(EAX=7,ECX=1):EDX[bit 14]
- XFD CPUID.(EAX=0xD,ECX=1):EAX[bit 4]
- EPT_PAGE_WALK_LENGTH_5 VMX_EPT_VPID_CAP(0x48c)[bit 7]
Add all features of GraniteRapids CPU model except above features to
SierraForest CPU model.
SierraForest doesn’t support TSX and RTM but supports TAA_NO. When RTM is
not enabled in host, KVM will not report TAA_NO. So, just don't include
TAA_NO in SierraForest CPU model.
[1] https://cdrdv2.intel.com/v1/dl/getContent/671368
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Tao Su <tao1.su@linux.intel.com>
Message-ID: <20240320021044.508263-1-tao1.su@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When starting an L2 guest with both L1/L2 using Icelake-Server-v3 or above,
QEMU reports the warning below:
"warning: host doesn't support requested feature: MSR(10AH).taa-no [bit 8]"
The reason is that QEMU's Icelake-Server-v3 has the TSX feature disabled but
enables the taa-no bit. It is meaningless to claim TSX is secure while TSX
isn't supported, so L1 KVM doesn't expose taa-no to L2 if TSX is unsupported,
and starting L2 then triggers the warning.
Fix it by introducing a new version, Icelake-Server-v7, which has both the
TSX and taa-no features. The guest can then use TSX securely when it sees
taa-no. This matches the production Icelake, which supports TSX and isn't
susceptible to TSX Async Abort (TAA) vulnerabilities, a.k.a. taa-no.
Ideally, TSX should have been enabled together with taa-no since v3, but for
compatibility it is better to add v7 to enable it.
Fixes: d965dc3559 ("target/i386: Add ARCH_CAPABILITIES related bits into Icelake-Server CPU model")
Tested-by: Xiangfei Ma <xiangfeix.ma@intel.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Message-ID: <20240320093138.80267-2-zhenzhong.duan@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Move the architectural (for lack of a better term) CPUID leaf generation
to a separate helper so that the generation code can be reused by TDX,
which needs to generate a canonical VM-scoped configuration.
For now this is just a cleanup, so keep the function static.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Message-ID: <20240229063726.610065-23-xiaoyao.li@intel.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Query KVM for the supported guest physical address bits, in CPUID
function 80000008, eax[23:16]. Usually this is identical to the host
physical address bits. With NPT or EPT being used this might be
restricted to 48 (the maximum 4-level paging address space size) even
if the host CPU supports more physical address bits.
When set, pass this to the guest, using CPUID too. Guest firmware
can use this to figure out how big the usable guest physical address
space is, so that PCI BAR mappings are actually reachable.
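For illustration, the relevant bit field can be extracted like this
(sketch; kvm_arch_get_supported_cpuid() is the existing helper for
querying the CPUID bits KVM supports):

    /* Guest physical address bits live in CPUID 0x80000008 EAX[23:16]. */
    uint32_t eax = kvm_arch_get_supported_cpuid(kvm_state, 0x80000008, 0, R_EAX);
    uint32_t guest_phys_bits = (eax >> 16) & 0xff;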
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Message-ID: <20240318155336.156197-2-kraxel@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Allow setting guest-phys-bits (cpuid leaf 80000008, eax[23:16])
via -cpu $model,guest-phys-bits=$nr.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-ID: <20240318155336.156197-3-kraxel@redhat.com>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
To keep the multiple update check, replace insn_start
with insn_start_updated.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
When aborting translation of the current insn, restore the
previous value of insn_start.
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Tested-by: Jørgen Hansen <Jorgen.Hansen@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
To keep the multiple update check, replace insn_start
with insn_start_updated.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
To keep the multiple update check, replace insn_start
with insn_start_updated.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Add helpers for reading/writing the 68881 FPSR register so that
changes in floating point exception state can be seen by the
application.
Call these helpers in pre_load/post_load hooks to synchronize
exception state.
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230803035231.429697-1-keithp@keithp.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
CHECK_NOT_DELAY_SLOT is correctly applied to the branch-related
instructions, but not to the PC-relative mov* instructions.
I verified the existence of an illegal slot exception on a SH7091 when
any of these instructions are attempted inside a delay slot.
This also matches the behavior described in the SH-4 ISA manual.
Signed-off-by: Zack Buhman <zack@buhman.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240407150705.5965-1-zack@buhman.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Yoshinori Sato <ysato@users.sourceforge.jp>
The saturation arithmetic logic in helper_macw is not correct.
I tested and verified this behavior on a SH7091.
Reviewed-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Zack Buhman <zack@buhman.org>
Message-Id: <20240405233802.29128-3-zack@buhman.org>
[rth: Reformat helper_macw, add a test case.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
The saturation arithmetic logic in helper_macl is not correct.
I tested and verified this behavior on a SH7091.
Signed-off-by: Zack Buhman <zack@buhman.org>
Message-Id: <20240404162641.27528-2-zack@buhman.org>
[rth: Reformat helper_macl, add a test case.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Allow host access to the entire 64-bit accumulator.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The contents of IIAOQ depend on PSW_W.
Follow the text in "Interruption Instruction Address Queues",
pages 2-13 through 2-15.
Tested-by: Sven Schnelle <svens@stackframe.org>
Tested-by: Helge Deller <deller@gmx.de>
Reported-by: Sven Schnelle <svens@stackframe.org>
Fixes: b10700d826 ("target/hppa: Update IIAOQ, IIASQ for pa2.0")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
When we do an AT address translation operation, the page table walk
is supposed to be performed in the context of the EL we're doing the
walk for, so for instance an AT S1E2R walk is done for EL2. In the
pseudocode an EL is passed to AArch64.AT(), which calls
SecurityStateAtEL() to find the security state that we should be
doing the walk with.
In ats_write64() we get this wrong, instead using the current
security space always. This is fine for AT operations performed from
EL1 and EL2, because there the current security state and the
security state for the lower EL are the same. But for AT operations
performed from EL3, the current security state is always either
Secure or Root, whereas we want to use the security state defined by
SCR_EL3.{NS,NSE} for the walk. This affects not just guests using
FEAT_RME but also ones where EL3 is Secure state and the EL3 code
is trying to do an AT for a NonSecure EL2 or EL1.
Use arm_security_space_below_el3() to get the SecuritySpace to
pass to do_ats_write() for all AT operations except the
AT S1E3* operations.
Cc: qemu-stable@nongnu.org
Fixes: e1ee56ec23 ("target/arm: Pass security space rather than flag for AT instructions")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2250
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240405180232.3570066-1-peter.maydell@linaro.org
EL2 accesses to CNTPOFF_EL2 should only ever trap to EL3 if EL3 is
present, as described by the reference manual (for MRS):
  /* ... */
  elsif PSTATE.EL == EL2 then
      if Halted() && HaveEL(EL3) && /*...*/ then
          UNDEFINED;
      elsif HaveEL(EL3) && SCR_EL3.ECVEn == '0' then
          /* ... */
      else
          X[t, 64] = CNTPOFF_EL2;
However, the existing implementation of gt_cntpoff_access() always
returns CP_ACCESS_TRAP_EL3 for EL2 accesses with SCR_EL3.ECVEn unset. In
pseudo-code terminology, this corresponds to assuming that HaveEL(EL3)
is always true, which is wrong. As a result, QEMU panics in
access_check_cp_reg() when started without EL3 and running EL2 code
accessing the register (e.g. any recent KVM booting a guest).
Therefore, add the HaveEL(EL3) check to gt_cntpoff_access().
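A minimal sketch of the corrected access check (assuming the usual
helpers; the actual patch may structure the checks differently):

    /* Sketch: only trap EL2 accesses to EL3 when EL3 actually exists. */
    if (arm_current_el(env) == 2
        && arm_feature(env, ARM_FEATURE_EL3)
        && !(env->cp15.scr_el3 & SCR_ECVEN)) {
        return CP_ACCESS_TRAP_EL3;
    }
    return CP_ACCESS_OK;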
Fixes: 2808d3b38a ("target/arm: Implement FEAT_ECV CNTPOFF_EL2 handling")
Signed-off-by: Pierre-Clément Tosi <ptosi@google.com>
Message-id: m3al6amhdkmsiy2f62w72ufth6dzn45xg5cz6xljceyibphnf4@ezmmpwk4tnhl
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
See previous commit and commit 9de9fa5cf2 ("Avoid using inlined
functions with external linkage") for rationale.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240313184954.42513-3-philmd@linaro.org>
Unify with other init_excp_FOO() in the same file.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Acked-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20240313213339.82071-5-philmd@linaro.org>
The HSTR_EL2 register allows the hypervisor to trap AArch32 EL1 and
EL0 accesses to cp15 registers. We incorrectly implemented this so
they trap to EL1 when we detect the need for a HSTR trap at code
generation time. (The check in access_check_cp_reg() which we do at
runtime to catch traps from EL0 is correctly routing them to EL2.)
Use the correct target EL when generating the code to take the trap.
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2226
Fixes: 049edada5e ("target/arm: Make HSTR_EL2 traps take priority over UNDEF-at-EL1")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240325133116.2075362-1-peter.maydell@linaro.org
Merge tag 'pull-ppc-for-9.0-3-20240331' of https://gitlab.com/npiggin/qemu into staging
Various fixes for recent regressions and new code.
# gpg: Signature made Sun 31 Mar 2024 08:30:11 BST
# gpg: using RSA key 4E437DDA56616F4329B0A79567B30276A8621CAE
# gpg: Good signature from "Nicholas Piggin <npiggin@gmail.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 4E43 7DDA 5661 6F43 29B0 A795 67B3 0276 A862 1CAE
* tag 'pull-ppc-for-9.0-3-20240331' of https://gitlab.com/npiggin/qemu:
tests/avocado: ppc_hv_tests.py set alpine time before setup-alpine
tests/avocado: Fix ppc_hv_tests.py xorriso dependency guard
target/ppc: Do not clear MSR[ME] on MCE interrupts to supervisor
target/ppc: Fix GDB register indexing on secondary CPUs
target/ppc: Restore [H]DEXCR to 64-bits
target/ppc/mmu-radix64: Use correct string format in walk_tree()
hw/ppc/spapr: Include missing 'sysemu/tcg.h' header
spapr: nested: use bitwise NOT operator for flags check
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Hardware clears the MSR[ME] bit when delivering a machine check
interrupt, so that is what QEMU does.
The spapr environment runs in supervisor mode though, and receives
machine check interrupts after they are processed by the hypervisor,
and MSR[ME] must always be enabled in supervisor mode (otherwise it
could checkstop the system). So MSR[ME] must not be cleared when
delivering machine checks to the supervisor.
The fix to prevent supervisor mode from modifying MSR[ME] also
prevented it from re-enabling the incorrectly cleared MSR[ME] bit
when returning from handling the interrupt. Before that fix, the
problem was not very noticeable with well-behaved code. So the
Fixes tag is not strictly correct, but practically they go together.
Found by kvm-unit-tests machine check tests (not yet upstream).
Fixes: 678b6f1af7 ("target/ppc: Prevent supervisor from modifying MSR[ME]")
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
The GDB server protocol assigns an arbitrary numbering of the SPRs.
We track this correspondence on each SPR with gdb_id, using it to
resolve any SPR requests GDB makes.
Early on we generate an XML representation of the SPRs to give GDB,
including this numbering. However the XML is cached globally, and we
skip setting the SPR gdb_id values on subsequent threads if we detect
it is cached. This causes QEMU to fail to resolve SPR requests against
secondary CPUs because it cannot find the matching gdb_id value on that
thread's SPRs.
This is a minimal fix to first assign the gdb_id values, then return
early if the XML is cached. Otherwise we generate the XML using the
now already initialised gdb_id values.
Fixes: 1b53948ff8 ("target/ppc: Use GDBFeature for dynamic XML")
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Benjamin Gray <bgray@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
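A sketch of the described ordering (function and field names such as
ppc_gdb_gen_spr_feature and gdb_spr_xml are placeholders, not
necessarily the real identifiers):

    static void ppc_gdb_gen_spr_feature(CPUState *cs)
    {
        PowerPCCPU *cpu = POWERPC_CPU(cs);
        PowerPCCPUClass *pcc = POWERPC_CPU_GET_CLASS(cpu);
        int num = 0x200;                /* illustrative numbering base */

        /* Always assign gdb_id on this CPU's SPRs first... */
        for (int i = 0; i < ARRAY_SIZE(cpu->env.spr_cb); i++) {
            ppc_spr_t *spr = &cpu->env.spr_cb[i];
            if (spr->name) {
                spr->gdb_id = num++;
            }
        }

        /* ...and only then return early when the XML is already cached,
         * so secondary CPUs still get their gdb_id values assigned. */
        if (pcc->gdb_spr_xml) {
            return;
        }
        /* ... generate and cache the XML ... */
    }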
The DEXCR emulation was recently changed to a 32-bit register, possibly
because it does have a 32-bit read-only view. It is a full 64-bit
SPR though, so use the corresponding 64-bit write functions.
Fixes: fbda88f7ab ("target/ppc: Fix width of some 32-bit SPRs")
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Benjamin Gray <bgray@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
'mask', 'nlb' and 'base_addr' are all uint64_t types.
Use the corresponding PRIx64 format.
Fixes: d2066bc50d ("target/ppc: Check page dir/table base alignment")
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
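For example (generic C, not the exact QEMU log call), 64-bit values are
printed with the PRIx64 macro from <inttypes.h> rather than a cast:

    #include <inttypes.h>
    #include <stdio.h>

    /* illustrative only */
    static void report_bad_base(uint64_t base_addr, uint64_t nlb,
                                uint64_t mask)
    {
        printf("invalid base addr 0x%" PRIx64 ", nlb 0x%" PRIx64
               ", mask 0x%" PRIx64 "\n", base_addr, nlb, mask);
    }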
Along this path we have already skipped the insn to be
nullified, so the subsequent insn should be executed.
Cc: qemu-stable@nongnu.org
Reported-by: Sven Schnelle <svens@stackframe.org>
Tested-by: Sven Schnelle <svens@stackframe.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The 32-bit PA-7300LC (PCX-L2) CPU and the 64-bit PA8700 (PCX-W2) CPU
use different diag instructions to save or restore the CPU registers
to/from the shadow registers.
Implement those per-CPU architecture diag instructions to fix those
parts of the HP ODE testcases (L2DIAG and WDIAG, section 1) which test
the shadow registers.
Signed-off-by: Helge Deller <deller@gmx.de>
[rth: Use decodetree to distinguish cases]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Helge Deller <deller@gmx.de>
Tested-by: Helge Deller <deller@gmx.de>
This operation is trivial and does not require a helper.
Reviewed-by: Helge Deller <deller@gmx.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The overflow indicator should include the effect of the shift step.
We had previously left ??? comments about the issue.
Tested-by: Helge Deller <deller@gmx.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Prepare for proper indication of shladd unsigned overflow.
The UV indicator will be zero/not-zero instead of a single bit.
Tested-by: Helge Deller <deller@gmx.de>
Reviewed-by: Helge Deller <deller@gmx.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The cond_need_ext predicate was created while we still had a
32-bit compilation mode. It now makes more sense to treat D
as an absolute indicator of a 64-bit operation.
Tested-by: Helge Deller <deller@gmx.de>
Reviewed-by: Helge Deller <deller@gmx.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Split do_unit_cond to do_unit_zero_cond to only handle conditions
versus zero. These are the only ones that are legal for UXOR.
Simplify trans_uxor accordingly.
Rename do_unit to do_unit_addsub, since xor has been split.
Properly compute carry-out bits for add and subtract, mirroring
the code in do_add and do_sub.
Tested-by: Helge Deller <deller@gmx.de>
Reviewed-by: Helge Deller <deller@gmx.de>
Fixes: b2167459ae ("target-hppa: Implement basic arithmetic")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
UADDCM with r1 as zero is by far its most common usage, as the
easiest way to invert a register. The compiler does occasionally
use the addition step as well, and we can simplify that to avoid
a temp and write directly into the destination.
Tested-by: Helge Deller <deller@gmx.de>
Reviewed-by: Helge Deller <deller@gmx.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The carry bits for each nibble N are located in bit (N+1)*4,
so the shift by 3 was off by one. Furthermore, the most
significant carry bit is located in bit 64, which lives in
a different storage word.
Use a double-word shift-right to reassemble into a single word
and place them all at bit 0 of their respective nibbles.
Tested-by: Helge Deller <deller@gmx.de>
Reviewed-by: Helge Deller <deller@gmx.de>
Fixes: b2167459ae ("target-hppa: Implement basic arithmetic")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
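A sketch of the reassembly step (variable names assumed):
tcg_gen_extract2_i64() shifts the two-word value (cb_msb:cb) right by 4,
so each nibble's carry, stored at bit (N+1)*4, lands at bit N*4, with
the top carry coming in from the separate carry-out word:

    /* illustrative: cb = (cb_msb:cb) >> 4 */
    tcg_gen_extract2_i64(cb, cb, cb_msb, 4);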
Move it to cpu.h, so it can also be used in hppa_form_gva_psw().
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Reviewed-by: Helge Deller <deller@gmx.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240324080945.991100-2-svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Call translator_io_start before write to EIRR.
Move evaluation of EIRR vs EIEM to hppa_cpu_exec_interrupt.
Exit TB after write to EIEM, but otherwise use a straight store.
Reviewed-by: Helge Deller <deller@gmx.de>
Tested-by: Helge Deller <deller@gmx.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The call to gen_helper_read_interval_timer is
identical on both sides of the IF.
Reviewed-by: Helge Deller <deller@gmx.de>
Tested-by: Helge Deller <deller@gmx.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Wide mode provides two more conditions, add them.
Fixes: 59963d8fdf ("target/hppa: Pass d to do_unit_cond")
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240321184228.611897-1-svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Do not clobber the high bits of the address by using a 32-bit deposit.
Reviewed-by: Helge Deller <deller@gmx.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The return address comes from IA*Q_Next, and IASQ_Next
is always equal to IASQ_Back, not IASQ_Front.
Tested-by: Helge Deller <deller@gmx.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
'address' got converted from target_ulong to vaddr in commit
68d6eee73c ("target/tricore: Convert to CPUClass::tlb_fill").
Use the corresponding format string to avoid casting.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20240319051413.6956-1-philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
CXL emulation of interleave requires read and write hooks due to the
requirement for subpage granularity. The Linux kernel stack now enables
using this memory as conventional memory in a separate NUMA node. If a
process is deliberately forced to run from that node
$ numactl --membind=1 ls
the page table walk on i386 fails.
Useful part of backtrace:
(cpu=cpu@entry=0x555556fd9000, fmt=fmt@entry=0x555555fe3378 "cpu_io_recompile: could not find TB for pc=%p")
at ../../cpu-target.c:359
(retaddr=0, addr=19595792376, attrs=..., xlat=<optimized out>, cpu=0x555556fd9000, out_offset=<synthetic pointer>)
at ../../accel/tcg/cputlb.c:1339
(cpu=0x555556fd9000, full=0x7fffee0d96e0, ret_be=ret_be@entry=0, addr=19595792376, size=size@entry=8, mmu_idx=4, type=MMU_DATA_LOAD, ra=0) at ../../accel/tcg/cputlb.c:2030
(cpu=cpu@entry=0x555556fd9000, p=p@entry=0x7ffff56fddc0, mmu_idx=<optimized out>, type=type@entry=MMU_DATA_LOAD, memop=<optimized out>, ra=ra@entry=0) at ../../accel/tcg/cputlb.c:2356
(cpu=cpu@entry=0x555556fd9000, addr=addr@entry=19595792376, oi=oi@entry=52, ra=ra@entry=0, access_type=access_type@entry=MMU_DATA_LOAD) at ../../accel/tcg/cputlb.c:2439
at ../../accel/tcg/ldst_common.c.inc:301
at ../../target/i386/tcg/sysemu/excp_helper.c:173
(err=0x7ffff56fdf80, out=0x7ffff56fdf70, mmu_idx=0, access_type=MMU_INST_FETCH, addr=18446744072116178925, env=0x555556fdb7c0)
at ../../target/i386/tcg/sysemu/excp_helper.c:578
(cs=0x555556fd9000, addr=18446744072116178925, size=<optimized out>, access_type=MMU_INST_FETCH, mmu_idx=0, probe=<optimized out>, retaddr=0) at ../../target/i386/tcg/sysemu/excp_helper.c:604
Avoid this by plumbing the address all the way down from
x86_cpu_tlb_fill(), where it is available as retaddr, to the actual
accessors, which provide it to probe_access_full(); that function
already handles MMIO accesses.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2180
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2220
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Gregory Price <gregory.price@memverge.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Message-ID: <20240307155304.31241-2-Jonathan.Cameron@huawei.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Otherwise TCG would assume the register that holds t1 is constant
and reuse it whenever it needs that value.
Cc: qemu-stable@nongnu.org
Fixes: f1ea739bd5 ("target/s390x: Use tcg_constant_* in local contexts")
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
[iii: Adjust a newline and capitalization, add tags]
Signed-off-by: Ido Plat <ido.plat@ibm.com>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-ID: <20240318202722.20675-1-iii@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
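A generic illustration of the pattern being fixed (not the literal
s390x translator code): a TCGv created with tcg_constant_* must never
be stored to afterwards, so a mutable temporary is allocated when the
value will be overwritten:

    /* Previously: TCGv_i64 t1 = tcg_constant_i64(0);
     * TCG assumes a constant TCGv is never written, so it may reuse it
     * wherever the value 0 is needed. Since t1 is modified later, use
     * a mutable temporary instead: */
    TCGv_i64 t1 = tcg_temp_new_i64();
    tcg_gen_movi_i64(t1, 0);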
-----BEGIN PGP SIGNATURE-----
iLMEAAEKAB0WIQS4/x2g0v3LLaCcbCxAov/yOSY+3wUCZf1WZgAKCRBAov/yOSY+
35zZBADDPLM3130Q/2zsGhol1C538i4+hYRbrX+OsLnlaldyE3NqCPcgaKwVE3xS
T9aOln91rDyQedz4DVYYSx+Oa1JpRjGko957REmopL50SJOYi6n7YhHJksaUirjJ
tMDZdPClOegieOpCu8LgJAVhaxTpZvfLedJVPt7O6Fl/uP3pLg==
=XLqh
-----END PGP SIGNATURE-----
Merge tag 'pull-loongarch-20240322' of https://gitlab.com/gaosong/qemu into staging
pull-loongarch-20240322
# gpg: Signature made Fri 22 Mar 2024 09:59:02 GMT
# gpg: using RSA key B8FF1DA0D2FDCB2DA09C6C2C40A2FFF239263EDF
# gpg: Good signature from "Song Gao <m17746591750@163.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: B8FF 1DA0 D2FD CB2D A09C 6C2C 40A2 FFF2 3926 3EDF
* tag 'pull-loongarch-20240322' of https://gitlab.com/gaosong/qemu:
target/loongarch: Fix qemu-system-loongarch64 assert failed with the option '-d int'
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
qemu-system-loongarch64 asserts with the option '-d int':
helper_idle() raises the exception EXCP_HLT, but the exception name is undefined.
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20240321123606.1704900-1-gaosong@loongson.cn>
The timebase-frequency of the guest OS should be the same as the host
machine's. The timebase-frequency value in the DTS should be obtained
from the hypervisor when using KVM acceleration.
Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
Message-ID: <20240314061510.9800-1-yongxuan.wang@sifive.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The mmu_idx needs to be converted to a privilege mode for the PMP function.
Signed-off-by: Irina Ryapolova <irina.ryapolova@syntacore.com>
Fixes: b297129ae1 ("target/riscv: propagate PMP permission to TLB page")
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20240320172828.23965-1-irina.ryapolova@syntacore.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
According to the Zvfbfmin definition in the RISC-V BF16 extensions spec,
the Zvfbfmin extension only requires either the V extension or the
Zve32f extension.
Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240321170929.1162507-1-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Change the for loops in the ldst helpers to do a single increment of the
counter and assign it to env->vstart, to avoid re-reading vstart
every time.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240314175704.478276-11-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
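The resulting loop shape, as an illustration:

    uint32_t i;

    /* advance vstart together with the loop counter instead of
     * re-reading env->vstart on every iteration */
    for (i = env->vstart; i < env->vl; env->vstart = ++i) {
        /* ... load or store element i ... */
    }
    env->vstart = 0;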
The vstart_eq_zero flag is updated at the beginning of the translation
phase from the env->vstart variable. During the execution phase all
functions will set env->vstart = 0 after a successful execution, but the
vstart_eq_zero flag remains the same as at the start of the block. This
will wrongly cause SIGILLs in translations that require env->vstart = 0
but might still be reading vstart_eq_zero = false.
This patch adds a new finalize_rvv_inst() helper that is called at the
end of each vector instruction that will both update vstart_eq_zero and
do a mark_vs_dirty().
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1976
Signed-off-by: Ivan Klokov <ivan.klokov@syntacore.com>
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240314175704.478276-10-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
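A simplified sketch of what such a helper boils down to:

    /* called at the end of each vector instruction translation */
    static void finalize_rvv_inst(DisasContext *ctx)
    {
        mark_vs_dirty(ctx);          /* existing VS-dirty bookkeeping */
        ctx->vstart_eq_zero = true;  /* helpers clear vstart on success */
    }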
trans_vmv_v_i, trans_vfmv_v_f and the trans_##NAME macro from
GEN_VMV_WHOLE_TRANS() call mark_vs_dirty() in both branches of
their 'if' conditionals.
Call it just once at the end, as other functions do.
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20240314175704.478276-9-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
All helpers that rely on vstart >= vl are now doing early exits using
the VSTART_CHECK_EARLY_EXIT() macro. This macro will not only exit the
helper but also clear vstart.
We're still left with brconds that are skipping the helper, which is the
only place where we're clearing vstart. The pattern goes like this:
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
(... calls helper that clears vstart ...)
gen_set_label(over);
return true;
This means that every time we jump to 'over' we're not clearing vstart,
an oversight repeated across the board.
Instead of setting vstart = 0 manually after each 'over' jump, remove
those brconds that are skipping helpers. The exception will be
trans_vmv_s_x() and trans_vfmv_s_f(): they don't use a helper and are
already clearing vstart manually in the 'over' label.
While we're at it, remove the (vl == 0) brconds from trans_rvbf16.c.inc
too since they're unneeded.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240314175704.478276-8-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
We're going to make changes that will require each helper to be
responsible for 'vstart' management, i.e. we will drop the
'vstart < vl' assumption that helpers have today.
Helpers are usually able to deal with vstart >= vl, i.e. doing nothing
aside from setting vstart = 0 at the end, but the tail update functions
will update the tail regardless of vstart being valid or not. Unifying
the tail update process in a single function that would handle the
vstart >= vl case isn't trivial (see [1] for more info).
This patch takes a blunt approach: do an early exit in every single
vector helper if vstart >= vl, unless the helper is guarded with
vstart_eq_zero in the translation. Those helpers are ready to deal
with vl being zero, i.e. throwing exceptions based on it, like
vcpop_m() and first_m().
Helpers that weren't changed:
- vcpop_m(), vfirst_m(), vmsetm(), GEN_VEXT_VIOTA_M(): these are guarded
directly with vstart_eq_zero;
- GEN_VEXT_VCOMPRESS_VM(): guarded with vcompress_vm_check() that checks
vstart_eq_zero;
- GEN_VEXT_RED(): guarded with either reduction_check() or
reduction_widen_check(), both check vstart_eq_zero;
- GEN_VEXT_FRED(): guarded with either freduction_check() or
freduction_widen_check(), both check vstart_eq_zero.
Another exception is vext_ldst_whole(), which operates on the effective
vector length regardless of the current settings in vtype and vl.
[1] https://lore.kernel.org/qemu-riscv/1590234b-0291-432a-a0fa-c5a6876097bc@linux.alibaba.com/
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240314175704.478276-7-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
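A simplified sketch of what the early-exit macro amounts to (the real
macro may do more, e.g. tail handling):

    #define VSTART_CHECK_EARLY_EXIT(env)          \
        do {                                      \
            if ((env)->vstart >= (env)->vl) {     \
                (env)->vstart = 0;                \
                return;                           \
            }                                     \
        } while (0)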
Commit 8ff8ac6329 added a conditional to guard the vext_ldst_whole()
helper if vstart >= evl. But by skipping the helper we're also not
setting vstart = 0 at the end of the insns, which is incorrect.
We'll move the conditional to vext_ldst_whole(), following in line with
the removal of all brconds vstart >= vl that the next patch will do. The
idea is to make the helpers responsible for their own vstart management.
Fix the ldst_whole insns by:
- remove the brcond that skips the helper if vstart is >= evl;
- vext_ldst_whole() now does an early exit with the same check, where
evl = (vlenb * nf) >> log2_esz, but the early exit will also clear
vstart.
The 'width' param is now unneeded in ldst_whole_trans() and is also
removed. It was used for the evl calculation for the brcond and has no
other use now. The 'width' is reflected in vext_ldst_whole() via
log2_esz, which is encoded by GEN_VEXT_LD_WHOLE() as
"ctzl(sizeof(ETYPE))".
Suggested-by: Max Chou <max.chou@sifive.com>
Fixes: 8ff8ac6329 ("target/riscv: rvv: Add missing early exit condition for whole register load/store")
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Max Chou <max.chou@sifive.com>
Message-ID: <20240314175704.478276-6-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
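A sketch of the reworked helper entry, following the description above
(parameter list abridged; the vlenb accessor is an assumption):

    static void vext_ldst_whole(void *vd, target_ulong base,
                                CPURISCVState *env, uint32_t desc,
                                uint32_t nf, uint32_t log2_esz,
                                /* ... */ uintptr_t ra)
    {
        uint32_t vlenb = riscv_cpu_cfg(env)->vlenb;   /* assumed */
        uint32_t evl = (vlenb * nf) >> log2_esz;

        if (env->vstart >= evl) {
            env->vstart = 0;   /* the early exit still clears vstart */
            return;
        }
        /* ... whole-register copy loop ... */
        env->vstart = 0;
    }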
These insns have 2 paths: we'll either have vstart already cleared if
vstart_eq_zero or we'll do a brcond to check if vstart >= maxsz to call
the 'vmvr_v' helper. The helper will clear vstart if it executes until
the end, or if vstart >= vl.
For starters, the check itself is wrong: we're checking vstart >= maxsz,
when in fact we should use vstart in bytes ('startb', as 'vmvr_v' calls
it) for the comparison. But even after fixing the comparison we still
need to clear vstart at the end, which isn't happening either.
We want to make the helpers responsible for managing vstart, including
these corner cases, precisely to avoid these situations:
- remove the wrong vstart >= maxsz cond from the translation;
- add a 'startb >= maxsz' cond in 'vmvr_v', and clear vstart if that
happens.
This way we're now sure that vstart is cleared at the end of the
execution, regardless of the path taken.
Fixes: f714361ed7 ("target/riscv: rvv-1.0: implement vstart CSR")
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240314175704.478276-5-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
vmvr_v isn't handling the case where the host might be big endian and
the bytes to be copied aren't sequential.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Fixes: f714361ed7 ("target/riscv: rvv-1.0: implement vstart CSR")
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240314175704.478276-4-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
trans_vmv_x_s, trans_vmv_s_x, trans_vfmv_f_s and trans_vfmv_s_f aren't
setting vstart = 0 after execution. This is usually done by a helper in
vector_helper.c but these functions don't use helpers.
We'll set vstart after any potential 'over' brconds, which also
mandates a mark_vs_dirty().
Fixes: dedc53cbc9 ("target/riscv: rvv-1.0: integer scalar move instructions")
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240314175704.478276-3-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The helper isn't setting env->vstart = 0 after its execution, as is
expected of every vector instruction that completes successfully.
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-ID: <20240314175704.478276-2-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Commit 3b8022269c added the capability for named features/profile
extensions to be added to riscv,isa. To do that we had to assign a priv
version to each of them in isa_edata_arr[]. But this resulted in a
side-effect: vendor CPUs that aren't running priv_version_latest started
to experience warnings for these profile extensions [1]:
| $ qemu-system-riscv32 -M sifive_e
| qemu-system-riscv32: warning: disabling zic64b extension for hart 0x00000000 because privilege spec version does not match
| qemu-system-riscv32: warning: disabling ziccamoa extension for hart 0x00000000 because privilege spec version does not match
This is benign as far as the CPU behavior is concerned since disabling
both extensions is a no-op (aside from riscv,isa). But the warnings are
unpleasant to deal with, especially because we're sending user warnings
for extensions that users can't enable/disable.
Instead of enabling all named features all the time, separate them by
priv version. During finalize(), after we have decided which
priv_version the CPU is running, enable/disable all the named extensions
based on the priv spec chosen. This is enough for a bug fix, but as
future work we should look into naming these extensions in a way that
does not need an explicit ext_name => priv_ver mapping as we do here.
The named extensions being added in isa_edata_arr[] that will be
enabled/disabled based solely on priv version can be removed from
riscv_cpu_named_features[]. 'zic64b' is an extension that can be
disabled based on block sizes so it'll retain its own flag and entry.
[1] https://lists.gnu.org/archive/html/qemu-devel/2024-03/msg02592.html
Reported-by: Clément Chigot <chigot@adacore.com>
Fixes: 3b8022269c ("target/riscv: add riscv,isa to named features")
Suggested-by: Andrew Jones <ajones@ventanamicro.com>
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Tested-by: Clément Chigot <chigot@adacore.com>
Message-ID: <20240312203214.350980-1-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
monitor_puts() doesn't check the monitor pointer, but do_inject_x86_mce()
may be called with a NULL monitor pointer. Revert the use of monitor_puts()
in do_inject_x86_mce() to fix this; it also makes it obvious again that we
send the same message to the monitor and the log.
Fixes: bf0c50d4aa ("monitor: expose monitor_puts to rest of code")
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Tao Su <tao1.su@linux.intel.com>
Message-ID: <20240320083640.523287-1-tao1.su@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In gen_ll(), if a->imm is zero, make_address_x() returns src1,
but the load into the destination may clobber src1. Use a new
destination to fix this problem.
Fixes: c5af6628f4 ("target/loongarch: Extract make_address_i() helper")
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20240320013955.1561311-1-gaosong@loongson.cn>
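A rough sketch of the shape of the fix (LoongArch translator details
elided; 'mop' and the write-back step are placeholders): load into a
fresh temporary so the memory load can never clobber src1 when the
computed address aliases it:

    TCGv addr = make_address_i(ctx, src1, a->imm);  /* helper named above */
    TCGv dest = tcg_temp_new();                     /* new destination */
    tcg_gen_qemu_ld_tl(dest, addr, ctx->mem_idx, mop);
    /* ... record lladdr/llval, then write dest back to rd ... */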
When we use QEMU TCG emulation, the BIOS page size is 4KB.
When a level-2 super huge page (page size 1G) is used to create the page
table, the content of the corresponding address space turns out to be
abnormal, so the BIOS cannot start the operating system and the
graphical interface normally.
The lddir and ldpte instruction emulation has a problem handling super
huge pages above level 2: the page size is not calculated correctly,
so the TLB finds a table entry with the wrong page size.
Signed-off-by: Xianglai Li <lixianglai@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20240318070332.1273939-1-lixianglai@loongson.cn>
stdby,e,m was writing data from the wrong half of the register
into memory for cases 0-3.
Fixes: 25460fc5a7 ("target/hppa: Implement STDBY")
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240319161921.487080-7-svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
mfia should return only the iaoq bits without privilege
bits.
Fixes: 98a9cb792c ("target-hppa: Implement system and memory-management insns")
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Helge Deller <deller@gmx.de>
Message-Id: <20240319161921.487080-6-svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
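As a sketch of the idea (register variable names assumed): the low two
bits of the IAOQ hold the privilege level, so mask them off before
handing the value back:

    /* illustrative: return IAOQ_Front without the privilege bits */
    tcg_gen_andi_i64(dest, cpu_iaoq_f, ~3ULL);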
When the guest modifies the TB it is currently executing from,
it executes a fic instruction. Exit the TB on such an instruction,
otherwise we might execute stale code.
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20240319161921.487080-5-svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>