This patch makes it easier to manipulate the cache description
register, CCSIDR, which is also helpful for testing. Currently the
register values are hard-coded numbers, which is error-prone.
Therefore, add a wrapper that the CPU types available in tcg can use
to describe their caches. A single function `make_ccsidr` supports
two cases via a FORMAT parameter that can be LEGACY or CCIDX, which
determines which layout of the register is produced.
The 32-bit CCSIDR layout follows specification [1]; the 64-bit layout
follows specification [2].
[1] B4.1.19, ARM Architecture Reference Manual ARMv7-A and ARMv7-R
edition, https://developer.arm.com/documentation/ddi0406
[2] D23.2.29, ARM Architecture Reference Manual for A-profile Architecture,
https://developer.arm.com/documentation/ddi0487/latest/
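As a rough illustration of how such a wrapper could encode the two
layouts (field positions taken from [1] and [2]; the function below is
a standalone sketch, not QEMU's actual make_ccsidr signature):

    #include <stdint.h>

    typedef enum { LEGACY, CCIDX } CCSIDRFormat;

    /* Encode NumSets/Associativity/LineSize into a CCSIDR value.
     * linesize is in bytes and must be a power of two >= 16. */
    static uint64_t make_ccsidr_sketch(CCSIDRFormat fmt, unsigned assoc,
                                       unsigned linesize, unsigned num_sets)
    {
        /* The LineSize field holds log2(bytes per line) - 4. */
        unsigned lg = 0;
        while ((16u << lg) < linesize) {
            lg++;
        }

        if (fmt == LEGACY) {
            /* 32-bit layout [1]:
             * NumSets[27:13], Associativity[12:3], LineSize[2:0]. */
            return ((uint64_t)(num_sets - 1) << 13) |
                   ((uint64_t)(assoc - 1) << 3) | lg;
        }
        /* FEAT_CCIDX 64-bit layout [2]:
         * NumSets[55:32], Associativity[23:3], LineSize[2:0]. */
        return ((uint64_t)(num_sets - 1) << 32) |
               ((uint64_t)(assoc - 1) << 3) | lg;
    }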
Signed-off-by: Alireza Sanaee <alireza.sanaee@huawei.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240903144550.280-1-alireza.sanaee@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch's main focus is to use the previously added
hvf_get_physical_address_range to inform VM creation about the IPA
size we need for the VM, so we can extend the default 36-bit IPA size
and support VMs with 64+ GB of RAM. This is done by freezing the
memory map, computing the highest GPA and then (if the platform
supports an IPA size that large) telling the kernel to use an IPA
size at least that large for the VM. In pursuit of this, a couple of
things related to how we handle the physical address range we expose
to guests were altered; for context, here is what we were doing:
Today, to get the IPA size we read id_aa64mmfr0_el1's PARange field
from a newly created vcpu. Unfortunately, HVF just returns the host's
PARange directly for the initial value, not the IPA size that will
actually back the VM, so we believe we have much more address space
than we actually do.
Starting in macOS 13.0, APIs were introduced to query the maximum IPA
size the kernel supports and to set the IPA size for a given VM.
However, this still has a couple of issues before macOS 15. Up until
macOS 15 (and if the hardware supports it), the maximum IPA size was
39 bits, which is not a valid PARange value, so we can't clamp what
we advertise in the vcpu's id_aa64mmfr0_el1 down to our IPA size.
Starting in macOS 15, however, the maximum IPA size is 40 bits (again,
if the hardware supports it), which is a valid PARange value, so we
can set our IPA size to the maximum as well as clamp the PARange we
advertise to the guest. This allows VMs with 64+ GB of RAM and should
also fix the oddness of the PARange situation.
Signed-off-by: Danny Canter <danny_canter@apple.com>
Message-id: 20240828111552.93482-4-danny_canter@apple.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is preliminary work to split up the hv_vm_create logic per
platform so we can support creating VMs with > 64 GB of RAM on Apple
Silicon machines. This is done via ARM HVF's hv_vm_config_create()
(and other APIs that modify this config, coming in future patches).
This should have no behavioral difference at all, as
hv_vm_config_create() just assigns the same default values as if you
had passed NULL to hv_vm_create().
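For reference, a minimal sketch of the pattern this introduces
(assuming the macOS Hypervisor.framework ARM API, where hv_vm_create()
takes an hv_vm_config_t; error handling omitted):

    #include <Hypervisor/Hypervisor.h>

    static hv_return_t create_vm_with_config(void)
    {
        /* Start from the defaults: equivalent to hv_vm_create(NULL). */
        hv_vm_config_t config = hv_vm_config_create();

        /* Later patches can adjust the config (e.g. the IPA size)
         * here, before the VM is actually created. */
        return hv_vm_create(config);
    }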
Signed-off-by: Danny Canter <danny_canter@apple.com>
Message-id: 20240828111552.93482-3-danny_canter@apple.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Change the data type of the ioctl _request_ argument from 'int' to
'unsigned long' for the various accel/kvm functions which are
essentially wrappers around the ioctl() syscall.
The correct type for ioctl()'s 'request' argument is confused:
* POSIX defines the request argument as 'int'
* glibc uses 'unsigned long' in the prototype in sys/ioctl.h
* the glibc info documentation uses 'int'
* the Linux manpage uses 'unsigned long'
* the Linux implementation of the syscall uses 'unsigned int'
If we wrap ioctl() with another function which uses 'int' as the
type for the request argument, then requests with the 0x8000_0000
bit set will be sign-extended when the 'int' is cast to
'unsigned long' for the call to ioctl().
On x86_64 one such example is the KVM_IRQ_LINE_STATUS request:
requests with the _IOC_READ direction bit set will have the high
bit set.
Fortunately the Linux kernel truncates the upper 32 bits of the request
on 64-bit machines (because it uses 'unsigned int'; see also Linus
Torvalds' comments in
https://sourceware.org/bugzilla/show_bug.cgi?id=14362 )
so this doesn't cause active problems for us. However it is more
consistent to follow the glibc ioctl() prototype when we define
functions that are essentially wrappers around ioctl().
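As an illustration of the sign-extension problem (standalone C; the
request value here is just an example with the high/_IOC_READ bit set,
not a real KVM ioctl number):

    #include <stdio.h>

    /* A wrapper taking the request as 'int', like the old helpers did. */
    static void show_request(int request)
    {
        /* Converting the (negative) 'int' to 'unsigned long' sign-extends. */
        unsigned long passed_to_ioctl = (unsigned long)request;
        printf("0x%lx\n", passed_to_ioctl);
    }

    int main(void)
    {
        /* Any request with bit 31 (the _IOC_READ direction bit) set: */
        show_request((int)0x80086b01u); /* 0xffffffff80086b01 on 64-bit hosts */
        return 0;
    }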
This resolves a Coverity issue where it points out that in
kvm_get_xsave() we assign a value (KVM_GET_XSAVE or KVM_GET_XSAVE2)
to an 'int' variable which can't hold it without overflow.
Resolves: Coverity CID 1547759
Signed-off-by: Johannes Stoelp <johannes.stoelp@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20240815122747.3053871-1-peter.maydell@linaro.org
[PMM: Rebased patch, adjusted commit message, included note about
Coverity fix, updated the type of the local var in kvm_get_xsave,
updated the comment in the KVMState struct definition]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The Alpha and HPPA CPU class structs each include a 'parent_reset'
field which is never used; delete them.
(These targets don't seem to implement reset at all; if they did, they
should do it using the three-phase reset mechanism, which uses a
'ResettablePhases parent_phases' field instead of the old
'DeviceReset parent_reset' field.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240830145812.1967042-6-peter.maydell@linaro.org
Convert the s390 CPU to the Resettable interface. This is slightly
more involved than the other CPU types were (see commits
9130cade5fc22..d66e64dd006df) because S390 has its own set of
different kinds of reset with different behaviours that it needs to
trigger.
We handle this by adding these reset types to the Resettable
ResetType enum. Now instead of having an underlying implementation
of reset that is s390-specific and which might be called either
directly or via the DeviceClass::reset method, we can implement only
the Resettable hold phase method, and have the places that need to
trigger an s390-specific reset type do so by calling
resettable_reset().
The other option would have been to smuggle in the s390 reset
type via, for instance, a field in the CPU state that we set
in s390_do_cpu_initial_reset() etc. and then examine in the
reset method, but doing it this way seems cleaner.
The motivation for this change is that this is the last caller
of the legacy device_class_set_parent_reset() function, and
removing that will let us clean up some glue code that we added
for the transition to three-phase reset.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Nina Schoetterl-Glausch <nsg@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Thomas Huth <thuth@redhat.com>
Message-id: 20240830145812.1967042-4-peter.maydell@linaro.org
Merge tag 'pull-loongarch-20240912' of https://gitlab.com/gaosong/qemu into staging
pull-loongarch-20240912
* tag 'pull-loongarch-20240912' of https://gitlab.com/gaosong/qemu:
hw/loongarch: Add acpi SPCR table support
hw/loongarch: virt: pass random seed to fdt
hw/loongarch: virt: support up to 4 serial ports
target/loongarch: Support QMP dump-guest-memory
target/loongarch/kvm: Add vCPU reset function
hw/loongarch: Remove default enable with VIRTIO_VGA device
target/loongarch: Add compatible support about VM reboot
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add the support needed for creating prstatus elf notes. This allows
us to use QMP dump-guest-memory.
For now, LoongArch only supports the general ELF notes; LSX and
LASX are not supported, since this is mainly used to dump guest memory.
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Tested-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20240822065245.2286214-1-maobibo@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
KVM provides the interface KVM_REG_LOONGARCH_VCPU_RESET to reset a
vCPU; it can be used to clear the vCPU's internal state in the KVM
kernel. Add a vCPU reset function here for KVM mode.
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20240822022827.2273534-1-maobibo@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
With the edk2-stable202408 LoongArch UEFI BIOS, the CSR PGD register is
set only if its value is zero for the boot CPU, which causes a reboot
issue: since the Linux kernel changes the CSR PGD register, the UEFI
BIOS cannot use it.
Add a workaround to clear the TLB-related CSR registers in
loongarch_cpu_reset_hold(), so that the VM can reboot with the
edk2-stable202408 UEFI BIOS.
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20240827035807.3326293-1-maobibo@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Model fp_exception state, in which only fp stores are allowed
until such time as the FQ has been flushed.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Carl Hauser <chauser@pullman.com>
Invalid encoding of addr should raise TT_ILL_INSN, so
check before supervisor, which might raise TT_PRIV_INSN.
Clear QNE after execution.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2340
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Carl Hauser <chauser@pullman.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Carl Hauser <chauser@pullman.com>
Implement a single instruction floating point queue,
populated while delivering an fp exception.
Signed-off-by: Carl Hauser <chauser@pullman.com>
[rth: Split from a larger patch]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Carl Hauser <chauser@pullman.com>
Add support for, and migrate, a single-entry fp
instruction queue for sparc32.
Signed-off-by: Carl Hauser <chauser@pullman.com>
[rth: Split from a larger patch;
adjust representation with union;
add migration state]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Carl Hauser <chauser@pullman.com>
Merge tag 'pull-testing-gdbstub-oct-100924-1' of https://gitlab.com/stsquad/qemu into staging
testing and gdbstub updates:
- remove docker-armel-cross
- update i686 and mipsel images to bookworm
- use docker-all-test-cross for mips64le tests
- fix duplicated line in docs
- update gitlab-runner ansible script
- support MTE in gdbstub for system mode
* tag 'pull-testing-gdbstub-oct-100924-1' of https://gitlab.com/stsquad/qemu:
tests/tcg/aarch64: Extend MTE gdbstub tests to system mode
tests/tcg/aarch64: Improve linker script organization
tests/guest-debug: Support passing arguments to the GDB test script
gdbstub: Add support for MTE in system mode
gdbstub: Use specific MMU index when probing MTE addresses
scripts/ci: update the gitlab-runner playbook
docs/devel: fix duplicate line
tests/docker: use debian-all-test-cross for mips64el tests
tests/docker: update debian i686 and mipsel images to bookworm
tests/docker: remove debian-armel-cross
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This commit makes handle_q_memtag, handle_q_isaddresstagged, and
handle_Q_memtag stubs build for system mode, allowing all GDB
'memory-tag' subcommands to work with QEMU gdbstub on aarch64 system
mode.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/620
Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240906143316.657436-3-gustavo.romero@linaro.org>
[AJB: add #ifdef CONFIG_TCG guards]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20240910173900.4154726-8-alex.bennee@linaro.org>
Use cpu_mmu_index() to determine the specific translation regime (MMU
index) before probing addresses using allocation_tag_mem_probe().
Currently, the MMU index is hardcoded to 0 and only works for user mode.
By obtaining the MMU index specific to the current translation regime,
the stubs relying on allocation_tag_mem_probe can in future be used in
other regimes as well, such as EL1.
This commit also changes the ptr_size value passed to
allocation_tag_mem_probe() from 8 to 1. The ptr_size parameter actually
represents the number of bytes in the memory access (which can be as
small as 1 byte), rather than the number of bits used in the address
space pointed to by ptr.
Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240906143316.657436-2-gustavo.romero@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20240910173900.4154726-7-alex.bennee@linaro.org>
QAPI's 'prefix' feature can make the connection between enumeration
type and its constants less than obvious. It's best used with
restraint.
QCryptoHashAlgorithm has a 'prefix' that overrides the generated
enumeration constants' prefix to QCRYPTO_HASH_ALG.
We could simply drop 'prefix', but then the prefix becomes
QCRYPTO_HASH_ALGORITHM, which is rather long.
We could additionally rename the type to QCryptoHashAlg, but I think
the abbreviation "alg" is less than clear.
Rename the type to QCryptoHashAlgo instead. The prefix becomes
QCRYPTO_HASH_ALGO.
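For illustration only (the member names below are examples, not the
full list): with the type named QCryptoHashAlgo and no 'prefix'
override, the QAPI generator derives the constants mechanically from
the type name:

    typedef enum QCryptoHashAlgo {
        QCRYPTO_HASH_ALGO_MD5,
        QCRYPTO_HASH_ALGO_SHA256,
        QCRYPTO_HASH_ALGO__MAX,
    } QCryptoHashAlgo;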
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Daniel P. Berrangé <berrange@redhat.com>
Message-ID: <20240904111836.3273842-12-armbru@redhat.com>
[Conflicts with merge commit 7bbadc60b5 resolved]
QAPI's 'prefix' feature can make the connection between enumeration
type and its constants less than obvious. It's best used with
restraint.
CpuS390Entitlement has a 'prefix' to change the generated enumeration
constants' prefix from CPU_S390_ENTITLEMENT to S390_CPU_ENTITLEMENT.
Rename the type to S390CpuEntitlement, so that 'prefix' is not needed.
Likewise change CpuS390Polarization to S390CpuPolarization, and
CpuS390State to S390CpuState.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Acked-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20240904111836.3273842-10-armbru@redhat.com>
Merge tag 'migration-20240904-pull-request' of https://gitlab.com/farosas/qemu into staging
Migration pull request
- Steve's cleanup of unused variable
- Peter Maydell's fixes for several leaks in migration-test
- Fabiano's flexibilization of multifd data structures for device
state migration
- Arman Nabiev's fix for ppc e500 migration
- Thomas' fix for migration-test vs. --without-default-devices
* tag 'migration-20240904-pull-request' of https://gitlab.com/farosas/qemu: (34 commits)
tests/qtest/migration: Add a check for the availability of the "pc" machine
target/ppc: Fix migration of CPUs with TLB_EMB TLB type
migration/multifd: Add documentation for multifd methods
migration/multifd: Add a couple of asserts for p->iov
migration/multifd: Fix p->iov leak in multifd-uadk.c
migration/multifd: Stop changing the packet on recv side
migration/multifd: Make MultiFDMethods const
migration/multifd: Move nocomp code into multifd-nocomp.c
migration/multifd: Register nocomp ops dynamically
migration/multifd: Standardize on multifd ops names
migration/multifd: Allow multifd sync without flush
migration/multifd: Replace multifd_send_state->pages with client data
migration/multifd: Don't send ram data during SYNC
migration/multifd: Isolate ram pages packet data
migration/multifd: Remove total pages tracing
migration/multifd: Move pages accounting into multifd_send_zero_page_detect()
migration/multifd: Replace p->pages with an union pointer
migration/multifd: Make MultiFDPages_t:offset a flexible array member
migration/multifd: Introduce MultiFDSendData
migration/multifd: Pass in MultiFDPages_t to file_write_ramblock_iov
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In vfp.decode we have the names of the VFNMA and VFNMS instructions
the wrong way around. The architecture says that bit 6 is the 'op'
bit, which is 1 for VFNMA and 0 for VFNMS, but we label these two
lines of decode the other way around. This doesn't cause any
user-visible problem because in the handling of these functions in
translate-vfp.c we give VFNMA the behaviour specified for VFNMS and
vice-versa, but it's confusing when reading the code.
Switch the names of the VFP VFNMA and VFNMS instructions in
the decode file and flip the corresponding handling to match, so that
the guest-visible behaviour is unchanged.
NB: the instructions VFMA and VFMS *are* decoded with op=0 for
VFMA and op=1 for VFMS; the confusion probably arose because
we assumed VFNMA and VFNMS to be the same way around.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2536
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240830152156.2046590-1-peter.maydell@linaro.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Now that we've implemented the required behaviour for FEAT_EBF16, we
can enable it for the "max" CPU type, list it in our documentation,
and delete a TODO comment about it being missing.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Implement the FPCR.EBF=1 semantics for bfdotadd() operations:
* is_ebf() sets up fpst and fpst_odd
* bfdotadd_ebf() implements the fused paired-multiply-and-add
operation that we need
The paired-multiply-and-add is similar to f16_dotadd() and
we use the same trick here as in that function, but the inputs
here are bfloat16 rather than float16.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
We use bfdotadd() in four callsites for various helper functions. Currently
this all assumes that we have the FPCR.EBF=0 semantics. For FPCR.EBF=1
we will need to:
* call a different routine to bfdotadd() because we need to do a
fused multiply-add rather than separate multiply and add steps
* use a different float_status that honours the FPCR rounding mode
and denormal-flushing fields
* pass in an extra float_status that has been set up to perform
round-to-odd rounding
To prepare for this, refactor all the callsites so that instead of
    for (...) {
        x = bfdotadd(...);
    }
they are:
    float_status fpst, fpst_odd;
    if (is_ebf(env, &fpst, &fpst_odd)) {
        for (...) {
            x = bfdotadd_ebf(..., &fpst, &fpst_odd);
        }
    } else {
        for (...) {
            x = bfdotadd(..., &fpst);
        }
    }
For the moment the is_ebf() function always returns false, sets up
fpst for EBF=0 semantics and never sets up fpst_odd; bfdotadd_ebf()
will assert if called. We'll fill in the handling for EBF=1 in the
next commit.
This change should be a zero-behaviour-change refactor.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Pass the env pointer through to the gvec_bfmmla helper,
so we can use it to add support for FEAT_EBF16.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Pass the env pointer through to the gvec_bfdot_idx helper,
so we can use it to add support for FEAT_EBF16.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Pass the env pointer through to the gvec_bfdot helper,
so we can use it to add support for FEAT_EBF16.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
To implement the FEAT_EBF16 semantics, we are going to need
the CPUARMState env pointer in every helper function which calls
bfdotadd().
Pass the env pointer through from generated code to the sme_bfmopa
helper. (We'll add the code that uses it when we've adjusted
all the helpers to have access to the env pointer.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
FEAT_EBF16 adds one new bit to the FPCR floating point control
register. Allow this bit to be read and written when the ID
registers indicate the presence of the feature.
Note that because this new bit is not in FPSCR_FPCR_MASK the bit is
not visible in the AArch32 FPSCR, and FPSCR writes do not affect it.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
The linux-user hppa target crashes randomly for me since commit
081a0ed188 ("target/hppa: Do not mask in copy_iaoq_entry").
That commit dropped the masking of the IAOQ addresses while copying them
from other registers and instead keeps them with all 64 bits up until
the full gva is formed with the help of hppa_form_gva_psw().
So, when running in linux-user mode on an emulated 64-bit CPU, we need
to mask to a 32-bit address space at the very end in hppa_form_gva_psw()
if the PSW-W flag isn't set (which is the case for linux-user on hppa).
Fixes: 081a0ed188 ("target/hppa: Do not mask in copy_iaoq_entry")
Cc: qemu-stable@nongnu.org # v9.1+
Signed-off-by: Helge Deller <deller@gmx.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
While adding hppa64 support, the psw_v variable got extended from 32 to 64
bits. So, when packaging the PSW-V bit from the psw_v variable for interrupt
processing, check bit 31 instead of the 63rd (sign) bit.
This fixes a hard-to-find Linux kernel boot issue where the loss of the PSW-V
bit due to an ITLB interruption in the middle of a series of ds/addc
instructions (from the divU millicode library) generated the wrong division
result and thus triggered a Linux kernel crash.
Link: https://lore.kernel.org/lkml/718b8afe-222f-4b3a-96d3-93af0e4ceff1@roeck-us.net/
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Helge Deller <deller@gmx.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Fixes: 931adff314 ("target/hppa: Update cpu_hppa_get/put_psw for hppa64")
Cc: qemu-stable@nongnu.org # v8.2+
In vmstate_tlbemb a cut-and-paste error meant we gave
this vmstate subsection the same "cpu/tlb6xx" name as
the vmstate_tlb6xx subsection. This breaks migration load
for any CPU using the TLB_EMB CPU type, because when we
see the "tlb6xx" name in the incoming data we try to
interpret it as a vmstate_tlb6xx subsection, which it
isn't the right format for:
  $ qemu-system-ppc -drive if=none,format=qcow2,file=/home/petmay01/test-images/virt/dummy.qcow2 \
      -monitor stdio -M bamboo
  QEMU 9.0.92 monitor - type 'help' for more information
  (qemu) savevm foo
  (qemu) loadvm foo
  Missing section footer for cpu
  Error: Error -22 while loading VM state
Correct the incorrect vmstate section name. Since migration
for these CPU types was completely broken before, we don't
need to care that this is a migration compatibility break.
This affects the PPC 405, 440, 460 and e200 CPU families.
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2522
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Arman Nabiev <nabiev.arman13@gmail.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
The two limit_max variables represent size - 1, just like the
encoding in the GDT, thus the 'old' access was off by one.
Access only the minimal size of the new TSS: the complete TSS contains
the iopb, which may be a larger block than the access API expects, and
which is irrelevant here because the iopb is not accessed during the
task switch itself.
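To illustrate the size/limit relationship (standalone example; 0x67 is
the limit of a minimal 32-bit TSS):

    #include <assert.h>

    int main(void)
    {
        unsigned int tss_size  = 0x68;         /* bytes in a minimal 32-bit TSS */
        unsigned int limit_max = tss_size - 1; /* the limit field encodes size - 1 */

        /* Using limit_max as if it were a byte count misses the last byte. */
        assert(limit_max + 1 == tss_size);
        return 0;
    }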
Fixes: 8b13106508 ("target/i386/tcg: use X86Access for TSS access")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2511
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240819074052.207783-1-richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
BLSI has inverted semantics for the C flag as compared to the other
two BMI1 instructions, BLSMSK and BLSR. Introduce CC_OP_BLSI* to
handle this.
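For reference, a standalone sketch of the flag behaviour in question
(following the BMI1 reference semantics; an illustration, not QEMU code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* BLSI: isolate lowest set bit; CF is set if the source is non-zero. */
    static uint32_t blsi(uint32_t src, bool *cf)
    {
        *cf = (src != 0);
        return src & -src;
    }

    /* BLSR: reset lowest set bit; CF is set if the source is zero. */
    static uint32_t blsr(uint32_t src, bool *cf)
    {
        *cf = (src == 0);
        return src & (src - 1);
    }

    /* BLSMSK: mask up to lowest set bit; CF is set if the source is zero. */
    static uint32_t blsmsk(uint32_t src, bool *cf)
    {
        *cf = (src == 0);
        return src ^ (src - 1);
    }

    int main(void)
    {
        bool cf;
        uint32_t r;

        r = blsi(0x18, &cf);
        printf("blsi   0x%x cf=%d\n", r, cf);   /* 0x8,  cf=1 */
        r = blsr(0x18, &cf);
        printf("blsr   0x%x cf=%d\n", r, cf);   /* 0x10, cf=0 */
        r = blsmsk(0x18, &cf);
        printf("blsmsk 0x%x cf=%d\n", r, cf);   /* 0xf,  cf=0 */
        return 0;
    }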
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2175
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20240801075845.573075-3-richard.henderson@linaro.org>
Split out the TCG_COND_TSTEQ logic from gen_prepare_eflags_z,
and use it for CC_OP_BMILG* as well. Prepare for requiring
both zero and non-zero senses.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240801075845.573075-2-richard.henderson@linaro.org>
Merge tag 'hw-misc-20240820' of https://github.com/philmd/qemu into staging
Various fixes
- Null pointer dereference in IPI IOCSR (Jiaxun)
- Correct '-smbios type=4' in man page (Heinrich)
- Use correct MMU index in MIPS get_pte (Phil)
- Reset MPQEMU remote message using device_cold_reset (Peter)
- Update linux-user MIPS CPU list (Phil)
- Do not let exec_command read console if no pattern to wait for (Nick)
- Remove shadowed declaration warning (Pierrick)
- Restrict STQF opcode to SPARC V9 (Richard)
- Add missing Kconfig dependency for POWERNV ISA serial port (Bernhard)
- Do not allow vmport device without i8042 PS/2 controller (Kamil)
- Fix QCryptoTLSCredsPSK leak (Peter)
* tag 'hw-misc-20240820' of https://github.com/philmd/qemu:
crypto/tlscredspsk: Free username on finalize
hw/i386/pc: Ensure vmport prerequisites are fulfilled
hw/i386/pc: Unify vmport=auto handling
hw/ppc/Kconfig: Add missing SERIAL_ISA dependency to POWERNV machine
target/sparc: Restrict STQF to sparcv9
contrib/plugins/execlog: Fix shadowed declaration warning
tests/avocado: Mark ppc_hv_tests.py as non-flaky after fixed console interaction
tests/avocado: exec_command should not consume console output
linux-user/mips: Select Loongson CPU for Loongson binaries
linux-user/mips: Select MIPS64R2-generic for Rel2 binaries
linux-user/mips: Select Octeon68XX CPU for Octeon binaries
linux-user/mips: Do not try to use removed R5900 CPU
hw/remote/message.c: Don't directly invoke DeviceClass:reset
hw/dma/xilinx_axidma: Use semicolon at end of statement, not comma
target/mips: Load PTE as DATA
target/mips: Use correct MMU index in get_pte()
target/mips: Pass page table entry size as MemOp to get_pte()
qemu-options.hx: correct formatting -smbios type=4
hw/mips/loongson3_virt: Fix condition of IPI IOCSR connection
hw/mips/loongson3_virt: Store core_iocsr into LoongsonMachineState
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Prior to sparcv9, the same encoding was STDFQ.
Cc: qemu-stable@nongnu.org
Fixes: 06c060d9e5 ("target/sparc: Move simple fp load/store to decodetree")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20240816072311.353234-2-richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
A PTE is not CODE, so load it with a normal DATA access.
Fixes: 074cfcb4da ("Implement hardware page table walker for MIPS32")
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240814090452.2591-4-philmd@linaro.org>
When refactoring page_table_walk_refill() in commit 4e999bf419
we missed the indirect call to cpu_mmu_index() in get_pte():
page_table_walk_refill()
-> get_pte()
-> cpu_ld[lq]_code()
-> cpu_mmu_index()
Since we no longer mask the modes in hflags, cpu_mmu_index()
can return UM or SM, while we only expect KM or ERL.
Fix by propagating ptw_mmu_idx to get_pte(), and use the
cpu_ld/st_code_mmu() API with the correct MemOpIdx.
Reported-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Reported-by: Waldemar Brodkorb <wbx@uclibc-ng.org>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2470
Fixes: 4e999bf419 ("target/mips: Pass ptw_mmu_idx down from mips_cpu_tlb_fill")
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240814090452.2591-3-philmd@linaro.org>
In order to simplify the next commit, pass the PTE size as MemOp.
Rename:
native_shift -> native_op
directory_shift -> directory_mop
leaf_shift -> leaf_mop
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20240814090452.2591-2-philmd@linaro.org>
When we are using TCG plugin memory callbacks, probe_access_internal
will return TLB_MMIO to force the slow path for the memory access. This
results in probe_access returning NULL, but the x86 access_ptr function
happily accepts an empty haddr, resulting in segfault hilarity.
Check for an empty haddr to prevent the segfault and to enable plugins
to track all the memory operations for the x86 save/restore helpers. As
we also want to run the slow path when instrumenting *-user, we should
also drop the short-cutting test_ptr macro.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2489
Fixes: 6d03226b42 (plugins: force slow path when plugins instrument memory ops)
Reviewed-by: Alexandre Iooss <erdnaxe@crans.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20240813202329.1237572-8-alex.bennee@linaro.org>
Found on debian stable.
../target/s390x/tcg/translate.c: In function ‘get_mem_index’:
../target/s390x/tcg/translate.c:398:1: error: control reaches end of non-void function [-Werror=return-type]
398 | }
Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-ID: <20240814224132.897098-4-pierrick.bouvier@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
The snapshots of the stat utime and stime for each thread, taken before
and after the pause, must be stored in separate locations.
Signed-off-by: Anthony Harivel <aharivel@redhat.com>
Link: https://lore.kernel.org/r/20240807124320.1741124-2-aharivel@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
* fix --static compilation of hexagon
* fix incorrect application of REX to MMX operands
* fix crash on module load
* update Italian translation
* tag 'for-upstream' of https://gitlab.com/bonzini/qemu:
po: update Italian translation
module: Prevent crash by resetting local_err in module_load_qom_all()
target/i386: Assert MMX and XMM registers in range
target/i386: Use unit not type in decode_modrm
target/i386: Do not apply REX to MMX operands
target/hexagon: don't look for static glib
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Our current usage of MMU indexes when EL3 is AArch32 is confused.
Architecturally, when EL3 is AArch32, all Secure code runs under the
Secure PL1&0 translation regime:
* code at EL3, which might be Mon, or SVC, or any of the
other privileged modes (PL1)
* code at EL0 (Secure PL0)
This is different from when EL3 is AArch64, in which case EL3 is its
own translation regime, and EL1 and EL0 (whether AArch32 or AArch64)
have their own regime.
We claimed to be mapping Secure PL1 to our ARMMMUIdx_EL3, but didn't
do anything special about Secure PL0, which meant it used the same
ARMMMUIdx_EL10_0 that NonSecure PL0 does. This resulted in a bug
where arm_sctlr() incorrectly picked the NonSecure SCTLR as the
controlling register when in Secure PL0, which meant we were
spuriously generating alignment faults because we were looking at the
wrong SCTLR control bits.
The use of ARMMMUIdx_EL3 for Secure PL1 also resulted in the bug that
we wouldn't honour the PAN bit for Secure PL1, because there's no
equivalent _PAN mmu index for it.
We could fix this in one of two ways:
* The most straightforward is to add new MMU indexes EL30_0,
EL30_3, EL30_3_PAN to correspond to "Secure PL1&0 at PL0",
"Secure PL1&0 at PL1", and "Secure PL1&0 at PL1 with PAN".
This matches how we use indexes for the AArch64 regimes, and
preserves properties like being able to determine the privilege
level from an MMU index without any other information. However
it would add two MMU indexes (we can share one with ARMMMUIdx_EL3),
and we are already using 14 of the 16 that the core TLB code permits.
* The more complicated approach is the one we take here. We use
the same MMU indexes (E10_0, E10_1, E10_1_PAN) for Secure PL1&0
as we do for NonSecure PL1&0. This saves on MMU indexes, but
means we need to check in some places whether we're in the
Secure PL1&0 regime or not before we interpret an MMU index.
The changes in this commit were created by auditing all the places
where we use specific ARMMMUIdx_ values, and checking whether they
needed to be changed to handle the new index value usage.
Note for potential stable backports: taking also the previous
(comment-change-only) commit might make the backport easier.
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2326
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Bernhard Beschow <shentey@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240809160430.1144805-3-peter.maydell@linaro.org
We have a long comment describing the Arm architectural translation
regimes and how we map them to QEMU MMU indexes. This comment has
got a bit out of date:
* FEAT_SEL2 allows Secure EL2 and corresponding new regimes
* FEAT_RME introduces Realm state and its translation regimes
* We now model the Cortex-R52 so that is no longer a hypothetical
* We separated Secure Stage 2 and NonSecure Stage 2 MMU indexes
* We have an MMU index per physical address space
Add the missing pieces so that the list of architectural translation
regimes matches the Arm ARM, and the list and count of QEMU MMU
indexes in the comment matches the enum.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Bernhard Beschow <shentey@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240809160430.1144805-2-peter.maydell@linaro.org
AdvSIMD instructions are supposed to zero bits beyond 128.
Affects SSHLL, USHLL, SSHLL2, USHLL2.
Cc: qemu-stable@nongnu.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240717060903.205098-15-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rather than enumerating the types that can produce
MMX operands, examine the unit. No functional change.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Link: https://lore.kernel.org/r/20240812025844.58956-3-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When cross compiling QEMU configured with --static, I've been getting
configure errors like the following:
Build-time dependency glib-2.0 found: NO
../target/hexagon/meson.build:303:15: ERROR: Dependency lookup for glib-2.0 with method 'pkgconfig' failed: Could not generate libs for glib-2.0:
Package libpcre2-8 was not found in the pkg-config search path.
Perhaps you should add the directory containing `libpcre2-8.pc'
to the PKG_CONFIG_PATH environment variable
Package 'libpcre2-8', required by 'glib-2.0', not found
This happens because --static sets the prefer_static Meson option, but
my build machine doesn't have a static libpcre2. I don't think it
makes sense to insist that native dependencies are static, just
because I want the non-native QEMU binaries to be static.
Signed-off-by: Alyssa Ross <hi@alyssa.is>
Link: https://lore.kernel.org/r/20240805104921.4035256-1-hi@alyssa.is
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
With pcrel, we cannot check the guarded page bit at translation
time, as different mappings of the same physical page may or may
not have the GP bit set.
Instead, add a couple of helpers to check the page at runtime,
after all other filters that might obviate the need for the check.
The set_btype_for_br call must be moved after the gen_a64_set_pc
call to ensure the current pc can still be computed.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240802003028.795476-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Define a hexagon_cpu_properties list to match the idiom used
by other targets.
Signed-off-by: Brian Cain <bcain@quicinc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Taylor Simpson <ltaylorsimpson@gmail.com>
For now, v66 behavior is the same as other CPUs.
Signed-off-by: Brian Cain <bcain@quicinc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Taylor Simpson <ltaylorsimpson@gmail.com>
The self assignment is clearly useless, and @1.last_column does not have
to be set for an expression with only a single token, so remove it.
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Brian Cain <bcain@quicinc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230713120853.27023-1-anjo@rev.ng>
Signed-off-by: Brian Cain <bcain@quicinc.com>
The implementation for these instructions handles -0 as an invalid
floating-point value, whereas the Hexagon hardware considers it the
same as +0 (which is valid). Let's fix that and add a regression test.
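As a standalone illustration of the IEEE-754 behaviour being matched
here (plain C, not the Hexagon helper itself):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        float neg_zero = -0.0f;

        /* IEEE 754: -0.0 is a valid value and compares equal to +0.0. */
        printf("-0.0f == 0.0f      -> %d\n", neg_zero == 0.0f);      /* 1 */
        printf("fminf(-0.0f, 0.0f) -> %g\n", fminf(neg_zero, 0.0f)); /* 0 (or -0) */
        return 0;
    }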
Signed-off-by: Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
Reviewed-by: Brian Cain <bcain@quicinc.com>
Reviewed-by: Taylor Simpson <ltaylorsimpson@gmail.com>
Signed-off-by: Brian Cain <bcain@quicinc.com>
Coverity complained about a possible out-of-bounds access to
counter_virt/counter_virt_prev because these two arrays are indexed by
the privilege mode. However, these two arrays are accessed only when
virt is enabled, so the privilege mode can't be M mode.
Add the asserts anyway to detect any wrong usage of these arrays
in the future.
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Fixes: Coverity CID 1558459
Fixes: Coverity CID 1558462
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240724-fixes-v1-1-4a64596b0d64@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
According to the RISC-V specification:
"FLD and FSD are only guaranteed to execute atomically if the effective
address is naturally aligned and XLEN≥64."
We currently implement fld as MO_ATOM_IFALIGN when XLEN < 64, which does
not violate the rules, but it can hide problems. So relax it to
MO_ATOM_NONE.
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240802072417.659-4-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Zama16b applies to loads and stores of no more than MXLEN bits defined
in the F, D, and Q extensions.
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240802072417.659-3-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Compressed encodings also apply to Zama16b.
https://github.com/riscv/riscv-isa-manual/pull/1557
Suggested-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240802072417.659-2-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Commit e9e640148c changed val from a uint64_t to a pointer to uint64_t
in hvf_sysreg_read, but didn't update its usage in the
hvf_sysreg_read_cp call.
Fixes: e9e640148c ("hvf: arm: Raise an exception for sysreg by default")
Reported-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20240802-hvf-v1-1-e2c0292037e5@daynix.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The vcek-disabled property of the sev-snp-guest object is misspelled
vcek-required (which I suppose would use the opposite polarity) in
the call to object_class_property_add_bool(). Fix it.
Reported-by: Zixi Chen <zixchen@redhat.com>
Reviewed-by: Pankaj Gupta <pankaj.gupta@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In commit ad18376b90 we added an assert that the level value was
in-bounds for the array we're about to index into. However, the
assert condition is wrong -- env->config->interrupt_vector is an
array of uint32_t, so we should bounds check the index against
ARRAY_SIZE(...), not against sizeof().
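To illustrate the difference (a standalone example; ARRAY_SIZE is the
usual element-count macro, defined locally here):

    #include <assert.h>
    #include <stdint.h>

    #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))

    int main(void)
    {
        uint32_t interrupt_vector[8];

        /* sizeof() is the size in bytes, not the number of elements... */
        assert(sizeof(interrupt_vector) == 8 * sizeof(uint32_t));
        /* ...so ARRAY_SIZE() is the correct bound for an index into it. */
        assert(ARRAY_SIZE(interrupt_vector) == 8);
        return 0;
    }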
Resolves: Coverity CID 1507131
Fixes: ad18376b90 ("target/xtensa: Assert that interrupt level is within bounds")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240731172246.3682311-1-peter.maydell@linaro.org
The FMOPA (widening) SME instruction takes pairs of half-precision
floating point values, widens them to single-precision, does a
two-way dot product and accumulates the results into a
single-precision destination. We don't quite correctly handle the
FPCR bits FZ and FZ16 which control flushing of denormal inputs and
outputs. This is because at the moment we pass a single float_status
value to the helper function, which then uses that configuration for
all the fp operations it does. However, because the inputs to this
operation are float16 and the outputs are float32 we need to use the
fp_status_f16 for the float16 input widening but the normal fp_status
for everything else. Otherwise we will apply the flushing control
FPCR.FZ16 to the 32-bit output rather than the FPCR.FZ control, and
incorrectly flush a denormal output to zero when we should not (or
vice-versa).
(In commit 207d30b5fd we tried to fix the FZ handling but
didn't get it right, switching from "use FPCR.FZ for everything" to
"use FPCR.FZ16 for everything".)
Pass the CPU env to the sme_fmopa_h helper instead of an fp_status
pointer, and have the helper pass an extra fp_status into the
f16_dotadd() function so that we can use the right status for the
right parts of this operation.
Cc: qemu-stable@nongnu.org
Fixes: 207d30b5fd ("target/arm: Use FPST_F16 for SME FMOPA (widening)")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2373
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
- Fix leaking the file handle memory in case of error
- Remove the unused "pid = -1" assignment
- Add a clearer error_report
Should fix Coverity CID 1558557.
Signed-off-by: Anthony Harivel <aharivel@redhat.com>
Link: https://lore.kernel.org/r/20240726102632.1324432-3-aharivel@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
As the SDM states, the CPUID 0x12 leaves depend on CPUID_7_0_EBX_SGX
(the SGX feature word).
Since FEAT_SGX_12_0_EAX, FEAT_SGX_12_0_EBX and FEAT_SGX_12_1_EAX define
multiple feature words, add dependencies for those registers so that a
warning is reported to the user if SGX is absent.
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20240730045544.2516284-4-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
At present, cpu_x86_cpuid() silently masks off SGX_LC if SGX is absent.
This is not ideal because the user is not told about the dependency
between the two.
So explicitly define the dependency between the SGX_LC and SGX feature
words, so that the user gets a warning when SGX_LC is enabled but
SGX is absent.
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20240730045544.2516284-3-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The CPUID.0x7.0.ebx and CPUID.0x7.0.ecx leaves are already expressed as
feature word lists, and host capability support is checked in
x86_cpu_filter_features().
Therefore, the explicit checks on the SGX feature "words" are
redundant, and the follow-up adjustments to those feature "words" do
not actually take effect.
Remove the unnecessary SGX feature word checks.
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20240730045544.2516284-2-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The feature word 'r' is a u64 while "unavail" is a u32, so the operation
'r &= ~unavail' clears the high 32 bits of 'r'. This causes many vmx
cases in kvm-unit-tests to fail. Changing 'unavail' from u32 to u64
fixes the issue.
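A standalone illustration of the truncation (the values are arbitrary):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t r = UINT64_MAX;   /* feature word with high bits set */
        uint32_t unavail = 0;      /* nothing should be masked off    */

        /* ~unavail is only 32 bits wide (0xffffffff), so the AND clears
         * bits 63..32 of 'r' even though nothing was meant to be removed. */
        r &= ~unavail;
        assert(r == 0xffffffffULL);

        /* With a 64-bit 'unavail' the high bits survive as intended. */
        uint64_t r64 = UINT64_MAX;
        uint64_t unavail64 = 0;
        r64 &= ~unavail64;
        assert(r64 == UINT64_MAX);
        return 0;
    }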
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2442
Fixes: 0b2757412c ("target/i386: drop AMD machine check bits from Intel CPUID")
Signed-off-by: Xiong Zhang <xiong.y.zhang@linux.intel.com>
Link: https://lore.kernel.org/r/20240730082927.250180-1-xiong.y.zhang@linux.intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge tag 'pull-target-arm-20240730' of https://git.linaro.org/people/pmaydell/qemu-arm into staging
target-arm queue:
* hw/char/bcm2835_aux: Fix assert when receive FIFO fills up
* hw/arm/smmuv3: Assert input to oas2bits() is valid
* target/arm/kvm: Set PMU for host only when available
* target/arm/kvm: Do not silently remove PMU
* hvf: arm: Properly disable PMU
* hvf: arm: Do not advance PC when raising an exception
* hw/misc/bcm2835_property: several minor bugfixes
* target/arm: Don't assert for 128-bit tile accesses when SVL is 128
* target/arm: Fix UMOPA/UMOPS of 16-bit values
* target/arm: Ignore SMCR_EL2.LEN and SVCR_EL2.LEN if EL2 is not enabled
* system/physmem: Where we assume we have a RAM MR, assert it
* sh4, i386, m68k, xtensa, tricore, arm: fix minor Coverity issues
* tag 'pull-target-arm-20240730' of https://git.linaro.org/people/pmaydell/qemu-arm: (21 commits)
system/physmem: Where we assume we have a RAM MR, assert it
target/sh4: Avoid shift into sign bit in update_itlb_use()
target/i386: Remove dead assignment to ss in do_interrupt64()
target/m68k: avoid shift into sign bit in dump_address_map()
target/xtensa: Make use of 'segment' in pptlb helper less confusing
target/tricore: Use unsigned types for bitops in helper_eq_b()
target/arm: Ignore SMCR_EL2.LEN and SVCR_EL2.LEN if EL2 is not enabled
target/arm: Avoid shifts by -1 in tszimm_shr() and tszimm_shl()
target/arm: Fix UMOPA/UMOPS of 16-bit values
target/arm: Don't assert for 128-bit tile accesses when SVL is 128
hw/misc/bcm2835_property: Reduce scope of variables in mbox push function
hw/misc/bcm2835_property: Restrict scope of start_num, number, otp_row
hw/misc/bcm2835_property: Avoid overflow in OTP access properties
hw/misc/bcm2835_property: Fix handling of FRAMEBUFFER_SET_PALETTE
hvf: arm: Do not advance PC when raising an exception
hvf: arm: Properly disable PMU
hvf: arm: Raise an exception for sysreg by default
target/arm/kvm: Do not silently remove PMU
target/arm/kvm: Set PMU for host only when available
hw/arm/smmuv3: Assert input to oas2bits() is valid
...
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
CpuModelInfo is used both as a command argument and in command
returns.
Its @deprecated-props array does not make any sense in arguments,
and is silently ignored there. We actually want it only as a return
value of query-cpu-model-expansion.
Move it from CpuModelInfo to CpuModelExpansionType, and document
its dependence on the expansion type property.
This was identified late during review [1] and we have to fix it up
while it's not part of an official QEMU release yet.
[1] https://lore.kernel.org/qemu-devel/20240719181741.35146-1-walling@linux.ibm.com/
Message-ID: <20240726203646.20279-1-walling@linux.ibm.com>
Fixes: eed0e8ffa3 ("target/s390x: filter deprecated properties based on model expansion type")
Signed-off-by: Collin Walling <walling@linux.ibm.com>
[ david: - add "Fixes", adjust description, reference v3 instead
- make property s390x-only and non-optional
- fixup "populate" vs. "populated" ]
Signed-off-by: David Hildenbrand <david@redhat.com>
In update_itlb_use() the variables or_mask and and_mask are uint8_t,
which means that in expressions like "and_mask << 24" the usual C
arithmetic conversions will result in the shift being done as a
signed int type, and so we will shift into the sign bit. For QEMU
this isn't undefined behaviour because we use -fwrapv; but we can
avoid it anyway by using uint32_t types for or_mask and and_mask.
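A minimal sketch of the promotion behaviour (illustrative only, not the
target/sh4 code):

    #include <stdint.h>

    uint32_t build_mask_u8(uint8_t and_mask)
    {
        /* and_mask is promoted to (signed) int, so with the top bit set the
         * shift lands in the sign bit; -fwrapv makes this defined for QEMU,
         * but it is still worth avoiding */
        return and_mask << 24;
    }

    uint32_t build_mask_u32(uint32_t and_mask)
    {
        /* with a 32-bit unsigned operand the shift never touches a sign bit */
        return and_mask << 24;
    }

Both functions return the same bits; the difference is only in the type the
shift is evaluated in, which is what Coverity flags.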
Resolves: Coverity CID 1547628
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Message-id: 20240723172431.1757296-1-peter.maydell@linaro.org
Coverity points out that in do_interrupt64() in the "to inner
privilege" codepath we set "ss = 0", but because we also set
"new_stack = 1" there, later in the function we will always override
that value of ss with "ss = 0 | dpl".
Remove the unnecessary initialization of ss, which allows us to
reduce the scope of the variable to only where it is used. Borrow a
comment from helper_lcall_protected() that explains what "0 | dpl"
means here.
Resolves: Coverity CID 1527395
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240723162525.1585743-1-peter.maydell@linaro.org
Coverity complains (CID 1547592) that in dump_address_map() we take a
value stored in a signed integer variable 'i' and shift it by enough
to shift into the sign bit when we construct the value 'logical'.
This isn't a bug for QEMU because we use -fwrapv semantics, but
we can make Coverity happy by using an unsigned type for the loop
variables i, j, k in this function.
While we're changing the declaration of the variables, put them
in the for() loops so their scope is the minimum required (a style
now permitted by our coding style guide).
Resolves: Coverity CID 1547592
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240723154207.1483665-1-peter.maydell@linaro.org
Coverity gets confused about the use of the 'segment' variable in the
pptlb helper function: it thinks that we can take a code path where
we first initialize it:
unsigned segment = XTENSA_MPU_PROBE_B; // 0x40000000
and then use that value as a shift count:
} else if (nhits == 1 && (env->sregs[MPUENB] & (1u << segment))) {
In fact this isn't possible, because xtensa_mpu_lookup() is passed
'&segment', and it uses that as an output value, which it will always
set if it returns nonzero. But the way the code is currently written
is confusing to a human reader as well as to Coverity.
Instead of initializing 'segment' at the top of the function with a
value that's only used in the "nhits == 0" code path, use the
constant value directly in that code path, and don't initialize
segment. This matches the way we use xtensa_mpu_lookup() in its
other callsites in get_physical_addr_mpu().
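The calling pattern being adopted can be sketched like this (hypothetical
names, not the xtensa code):

    /* the out-parameter is written only when the lookup reports a hit */
    static int lookup(unsigned key, unsigned *segment)
    {
        if (key & 1) {
            *segment = (key >> 1) & 31;
            return 1;               /* one hit: *segment is valid */
        }
        return 0;                   /* no hit: *segment is untouched */
    }

    unsigned probe(unsigned key)
    {
        unsigned segment;           /* deliberately left uninitialized */
        int nhits = lookup(key, &segment);

        if (nhits == 0) {
            return 0x40000000u;     /* use the constant directly on this path */
        }
        return 1u << segment;       /* only reached when 'segment' was set */
    }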
Resolves: Coverity CID 1547589
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Message-id: 20240723151454.1396826-1-peter.maydell@linaro.org
Coverity points out that in helper_eq_b() we have an int32_t 'msk'
and we end up shifting into its sign bit. This is OK for QEMU because
we use -fwrapv to give this well defined semantics, but when you look
at what this function is doing it's doing bit operations, so we
should be using an unsigned variable anyway. This also matches the
return type of the function.
Make 'ret' and 'msk' uint32_t.
Resolves: Coverity CID 1547758
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240723151042.1396610-1-peter.maydell@linaro.org
When determining the current vector length, the SMCR_EL2.LEN and
SVCR_EL2.LEN settings should only be considered if EL2 is enabled
(compare the pseudocode CurrentSVL and CurrentNSVL which call
EL2Enabled()).
We were checking against ARM_FEATURE_EL2 rather than calling
arm_is_el2_enabled(), which meant that we would look at
SMCR_EL2/SVCR_EL2 when in Secure EL1 or Secure EL0 even if Secure EL2
was not enabled.
Use the correct check in sve_vqm1_for_el_sm().
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240722172957.1041231-5-peter.maydell@linaro.org
The function tszimm_esz() returns a shift amount, or possibly -1 in
certain cases that correspond to unallocated encodings in the
instruction set. We catch these later in the trans_ functions
(generally with an "a->esz < 0" check), but before we get there the
decodetree-generated code will also call tszimm_shr() or tszimm_shl(),
which will use the tszimm_esz() return value as a shift count without
checking that it is not negative, which is undefined behaviour.
Avoid the UB by checking the return value in tszimm_shr() and
tszimm_shl().
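A minimal sketch of the kind of guard this adds (the names are placeholders,
not the real decode helpers):

    #include <stdint.h>

    /* returns the shift amount, or -1 for an unallocated encoding */
    static int decode_shift_imm(int imm)
    {
        int sh = imm & 0x3f;
        return sh ? sh : -1;
    }

    static uint64_t helper_shr(uint64_t value, int imm)
    {
        int sh = decode_shift_imm(imm);
        if (sh < 0) {
            /* unallocated encoding: the caller will reject it later anyway,
             * just avoid the undefined negative shift count here */
            return 0;
        }
        return value >> sh;
    }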
Cc: qemu-stable@nongnu.org
Resolves: Coverity CID 1547617, 1547694
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240722172957.1041231-4-peter.maydell@linaro.org
The UMOPA/UMOPS instructions are supposed to multiply unsigned 8 or
16 bit elements and accumulate the products into a 64-bit element.
In the Arm ARM pseudocode, this is done with the usual
infinite-precision signed arithmetic. However our implementation
doesn't quite get it right, because in the DEF_IMOP_64() macro we do:
sum += (NTYPE)(n >> 0) * (MTYPE)(m >> 0);
where NTYPE and MTYPE are uint16_t or int16_t. In the uint16_t case,
the C usual arithmetic conversions mean the values are converted to
"int" type and the multiply is done as a 32-bit multiply. This means
that if the inputs are, for example, 0xffff and 0xffff then the
result is 0xFFFE0001 as an int, which is then promoted to uint64_t
for the accumulation into sum; this promotion incorrectly sign
extends the multiply.
Avoid the incorrect sign extension by casting to int64_t before
the multiply, so we do the multiply as 64-bit signed arithmetic,
which is a type large enough that the multiply can never
overflow into the sign bit.
(The equivalent 8-bit operations in DEF_IMOP_32() are fine, because
the 8-bit multiplies can never overflow into the sign bit of a
32-bit integer.)
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2372
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240722172957.1041231-3-peter.maydell@linaro.org
For an instruction which accesses a 128-bit element tile when
the SVL is also 128 (for example MOV z0.Q, p0/M, ZA0H.Q[w0,0]),
we will assert in get_tile_rowcol():
qemu-system-aarch64: ../../tcg/tcg-op.c:926: tcg_gen_deposit_z_i32: Assertion `len > 0' failed.
This happens because we calculate
len = ctz32(streaming_vec_reg_size(s)) - esz;
but if the SVL and the element size are the same len is 0, and
the deposit operation asserts.
In this case the ZA storage contains exactly one 128 bit
element ZA tile, and the horizontal or vertical slice is just
that tile. This means that regardless of the index value in
the Ws register, we always access that tile. (In pseudocode terms,
we calculate (index + offset) MOD 1, which is 0.)
Special case the len == 0 case to avoid hitting the assertion
in tcg_gen_deposit_z_i32().
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240722172957.1041231-2-peter.maydell@linaro.org
This is identical to commit 30a1690f24 ("hvf: arm: Do not advance
PC when raising an exception") but for writes instead of reads.
Fixes: a2260983c6 ("hvf: arm: Add support for GICv3")
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Setting the pmu property used to have no effect for hvf, so fix that.
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Any sysreg access results in an exception unless defined otherwise so
we should raise an exception by default.
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
kvm_arch_init_vcpu() used to remove the PMU when it was not available,
even if the CPU model needs one. This is semantically incorrect, and
lets execution continue on a misbehaving host that advertises a CPU
model while lacking its PMU. Keep the PMU when the CPU model needs
one, and let kvm_arm_vcpu_init() fail if the KVM implementation does
not match our expectation.
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
target/arm/kvm.c checked PMU availability but unconditionally set the
PMU feature flag for the host CPU model, which is confusing. Set the
feature flag only when available.
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Using int32_t meant that the address was sign-extended to uint64_t
when passed to translator_ld*, triggering an assert.
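A minimal illustration of the sign extension (types only; not the translator
code itself):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t  addr_signed   = (int32_t)0x80001000u;   /* top bit set */
        uint32_t addr_unsigned = 0x80001000u;

        uint64_t pc_bad  = addr_signed;    /* sign-extends to ffffffff80001000 */
        uint64_t pc_good = addr_unsigned;  /* zero-extends to 0000000080001000 */

        printf("%016" PRIx64 "\n%016" PRIx64 "\n", pc_bad, pc_good);
        return 0;
    }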
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2453
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Thomas Huth <thuth@redhat.com>
Drop includes that are not needed by the header itself and include
them only from the C files that really need them.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Acked-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Move the parts that are not needed outside of mmu-radix64.c from the
header to the C file, leaving in the header only the parts that need
to be exported. Also drop an unneeded include of this header.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Acked-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
The ppc_hash64_hpt_base() and ppc_hash64_hpt_mask() functions are
used mostly by mmu-hash64.c; the one remaining call to
ppc_hash64_hpt_mask() in hw/ppc/spapr_vhyp_mmu.c is in a helper
function that can be moved to mmu-hash64.c, which allows both
functions to be removed from the header.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
This function is a simple shared helper; move it next to the other
similar static inline functions in the header.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
This function is used only once and does not add more clarity than
doing it inline.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Completely get rid of mmu_ctx_t after converting the remaining
functions to pass raddr and prot without the context struct.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Pass raddr and prot in function parameters instead.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
There is already a hash32_bat_prot() function that does most of this
and the rest can be inlined. Export hash32_bat_prot() and rename it to
ppc_hash32_bat_prot() to match the other functions and use it in
get_bat_6xx_tlb().
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Replace some BAT related constants with defines from mmu-hash32.h
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Rename parameter of get_bat_6xx_tlb() from virtual to eaddr to match
other functions.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Pass raddr and prot in function parameters instead.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Pass it as a function parameter and remove it from mmu_ctx_t.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
ppc6xx_tlb_check() relies on the caller to initialise the raddr field
in ctx. Move this init from the only caller into the function.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
This is used only once and can be inlined.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Add a function to get the key bit from the SR and use it instead of
the open-coded version.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Instead of passing ptem around in the context, use it once in the same
function so it can be removed from mmu_ctx_t.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
This function is only called once and we can make the caller simpler
by inlining it.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
In mmu6xx_get_physical_address() the switch handles all cases so the
default is never reached and can be dropped. Also group together cases
which just return -4.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
In mmu6xx_get_physical_address() the target_page_bits local is
declared only to use TARGET_PAGE_BITS once. Drop the unneeded variable.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
In mmu6xx_get_physical_address() ds is used as a bool, so declare it
as such. Also use a named constant instead of a hex value.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Pass it as a parameter instead. Also use named constants instead of
hex values when extracting bits from SR.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
This function is used only once, its return value is ignored, and one
of its parameters is a return value from a previous call. It is better
to inline it into the caller and remove it.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Return the hash value via a parameter and remove it from mmu_ctx_t.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
The eaddr field of mmu_ctx_t is set once but never used, so it can be
removed.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Invert conditions to avoid deeply nested ifs and return early instead.
Remove some obvious comments that don't add clarity.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Instead of using a local ret variable return directly and remove the
local.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
In ppc6xx_tlb_pte_check() the pp variable is used only once to pass it
to a function parameter with the same name. Remove the local and
inline the value. Also use named constant for the hex value to make it
clearer.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
In ppc6xx_tlb_pte_check() the pteh variable is used only once to
compare to the h parameter of the function. Inline its value and use
pteh name for the function parameter which is more descriptive.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
The ptev variable in ppc6xx_tlb_pte_check() is used only once and just
obfuscates an otherwise clear value. Get rid of it.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
The ptem variable in ppc6xx_tlb_pte_check() is used only once; remove
it, since the value is already clear without a local name for it.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
The mmask local variable is a less descriptive local name for a
constant. Drop it and use the constant directly in the two places it
is needed.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reorganise ppc_hash32_pp_prot(), swapping the if legs so it does not
test the negative case first, and clean it up to make it shorter. Also
rename it to ppc_hash32_prot().
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Updated many VSX instructions to use tcg_gen_qemu_ld/st_i128 instead of
using tcg_gen_qemu_ld/st_i64 twice in succession.
Introduced functions {get,set}_vsr_full to facilitate the above and for future use.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Chinmay Rath <rathc@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Updated the {l, st}vx instructions to use tcg_gen_qemu_ld/st_i128,
instead of using 64-bit loads/stores in succession.
Introduced functions {get, set}_avr_full in vmx-impl.c.inc to
facilitate the above and potential future usage.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Chinmay Rath <rathc@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Those functions are used to load/store data to and from Altivec
registers in 64-bit chunks, and are only used in the vmx-impl.c.inc
file, hence the clean-up move.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Chinmay Rath <rathc@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Moving the following instructions to decodetree specification:
xvcmp{eq, gt, ge, ne}{s, d}p : XX3-form
The changes were verified by validating that the tcg-ops generated for those
instructions remain the same, which were captured using the '-d in_asm,op' flag.
Signed-off-by: Chinmay Rath <rathc@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Moving the following instructions to decodetree specification:
lxv{b16, d2, h8, w4, ds, ws}x : X-form
stxv{b16, d2, h8, w4}x : X-form
The changes were verified by validating that the tcg-ops generated for those
instructions remain the same, which were captured using the '-d in_asm,op' flag.
Signed-off-by: Chinmay Rath <rathc@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Moving the following instructions to decodetree specification :
{l, st}xvl(l) : X-form
The changes were verified by validating that the tcg-ops generated by those
instructions remain the same, which were captured using the '-d in_asm,op' flag.
Also added a new function do_ea_calc_ra to calculate the effective address :
EA <- (RA == 0) ? 0 : GPR[RA], which is now used by the above-said insns,
and shall be used later by (p){lx, stx}vp insns.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Chinmay Rath <rathc@linux.ibm.com>
[np: Fix 32-bit build]
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Moving the following instructions to decodetree specification :
lxs{d, iwa, ibz, ihz, iwz, sp}x : X-form
stxs{d, ib, ih, iw, sp}x : X-form
The changes were verified by validating that the tcg-ops generated by those
instructions remain the same, which were captured using the '-d in_asm,op' flag.
Signed-off-by: Chinmay Rath <rathc@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Moving the following instructions to decodetree specification :
xxl{and, andc, or, orc, nor, xor, nand, eqv} : XX3-form
The changes were verified by validating that the tcg ops generated by those
instructions remain the same, which were captured with the '-d in_asm,op' flag.
Signed-off-by: Chinmay Rath <rathc@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Moving the following instructions to decodetree specification:
x{s, v}{add, sub, mul, div}{s, d}p : XX3-form
xs{max, min}dp, xv{max, min}{s, d}p : XX3-form
The changes were verified by validating that the tcg ops generated by those
instructions remain the same, which were captured with the '-d in_asm,op' flag.
Signed-off-by: Chinmay Rath <rathc@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Moving the PPC2_ISA300 flag check out of the do_helper_XX3 method in
vmx-impl.c.inc so that the helper can be used with other instructions as well.
Signed-off-by: Chinmay Rath <rathc@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
No need for a full comparison; xor produces non-zero bits for QC just fine.
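The idea can be sketched for a single lane in plain C (illustrative only, not
the actual vector helper):

    #include <stdint.h>

    /* saturating unsigned byte add; instead of comparing the clamped and the
     * wrapped results for equality, xor them and OR the difference into the
     * sticky saturation bits: it is non-zero exactly when clamping kicked in */
    static uint8_t qadd_u8(uint8_t a, uint8_t b, uint32_t *sat_bits)
    {
        unsigned wrapped = (uint8_t)(a + b);
        unsigned clamped = (a + b > 0xff) ? 0xff : (unsigned)(a + b);
        *sat_bits |= wrapped ^ clamped;
        return clamped;
    }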
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Chinmay Rath <rath.chinmay@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Moving the following instructions to decodetree specification :
v{add,sub}{u,s}{b,h,w}s : VX-form
The changes were verified by validating that the tcg ops generated by those
instructions remain the same, which were captured with the '-d in_asm,op' flag.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Chinmay Rath <rathc@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Recent POWER CPUs can operate in "LPAR per core" or "LPAR per thread"
modes. In per-core mode, some SPRs and IPI doorbells are shared between
threads in a core. In per-thread mode, supervisor and user state is
not shared between threads.
OpenPOWER systems after POWER8 use LPAR per thread mode, and it is
required for KVM. Enterprise systems use LPAR per core mode, as they
partition the machine by core.
Implement an lpar-per-core machine option for powernv machines. It is
fixed to true for POWER8 machines, and defaults to off for P9 and P10.
With this change, powernv8 SMT now works well enough to run Linux on a
single socket. Multi-threaded KVM guests still have problems, as does
multi-socket Linux boot.
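A hypothetical invocation exercising the new option (the property spelling is
taken from the description above and may differ in the final code):

    qemu-system-ppc64 -machine powernv9,lpar-per-core=on -smp 4,cores=1,threads=4 -nographic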
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
POWER10 has a quirk in its ChipTOD addressing that requires the even
small-core to be selected even when programming the odd small-core.
This allows skiboot chiptod init to run in big-core mode.
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Power9 CPUs have a core thread state register accessible via SPRC/SPRD
indirect registers. This register includes a bit for big-core mode,
which skiboot requires.
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
The decision to branch out to a slower SMT path in instruction
emulation will become a bit more complicated with the "big-core"
topology that will be implemented in subsequent changes.
Hide these details from the wider CPU emulation code with a bool
has_smt_siblings flag that can be set by machine initialisation.
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Add helpers for TCG code to determine if there are SMT siblings
sharing per-core and per-lpar registers. This simplifies the
callers and makes SMT register topology simpler to modify with
later changes.
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
The way SMT thread siblings are matched is clunky, using hard-coded
logic that checks the PIR SPR.
Change that to use a new core_index variable in the CPUPPCState,
where all siblings have the same core_index. CPU realize routines have
flexibility in setting core/sibling topology.
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
SPRC/SPRD were recently added to all supported BookS CPUs, but
they are only tested on POWER9 and POWER10, so restrict them to
those CPUs.
The SPR indirect scratch registers are presently replicated per CPU
like the SMT SPRs, but PnvCore is a better place for them since they
are restricted to P9/P10.
Also add SPR indirect read access to core thread state for POWER9
since skiboot accesses that when booting to check for big-core
mode.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
The timebase state machine is per-core state and can be driven
by any thread in the core. It is currently implemented as a hack
where the state is in a CPU structure and only thread 0's state is
accessed by the chiptod, which limits programming the timebase
side of the state machine to thread 0 of a core.
Move the state out into PnvCore and share it among all threads.
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
POWER8 (ISA v2.07S) introduced the doorbell facility, where the msgsnd
instruction behaved mostly like msgsndp: it was addressed by TIR
and could only send interrupts between threads of the same core.
ISA v3.0 changed msgsnd to be addressed by PIR, so it can interrupt
any thread in the system.
Our msgsnd only implements the v3.0 semantics, which can make
multi-threaded POWER8 hang when booting Linux (due to IPIs
failing). This change adds the v2.07 semantics.
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
The patch enables HASHPKEYR migration by hooking it up to the
"KVM one reg" ID KVM_REG_PPC_HASHPKEYR.
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
The patch enables HASHKEYR migration by hooking it up to the
"KVM one reg" ID KVM_REG_PPC_HASHKEYR.
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>