The device control API was added in 2013; assume that it is present.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20241024113126.44343-1-pbonzini@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Pass the stage size to the step function callback; otherwise do_setm
would hang when the size is larger than the page size, because the stage
size would underflow. This fix brings do_setm more in line with
do_setp.
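As a rough illustration of the failure mode (a sketch only, not the exact
QEMU code; the variable and callback names are illustrative):

/* Sketch: per-stage loop in do_setm.  Passing the full remaining size
 * instead of the stage size lets the callback return a step larger than
 * what remains in the stage, and the unsigned subtraction then wraps. */
while (stagesetsize > 0) {
    step = stepfn(env, toaddr, stagesetsize, data, memidx, &mtedesc, ra);
    toaddr += step;
    setsize -= step;
    stagesetsize -= step;   /* underflows if step > stagesetsize */
}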
Cc: qemu-stable@nongnu.org
Fixes: 0e92818887 ("target/arm: Implement the SET* instructions")
Signed-off-by: Ido Plat <ido.plat1@ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241025024909.799989-1-ido.plat1@ibm.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In regime_is_user() we assert if we're passed an ARMMMUIdx_E10_*
mmuidx value. This used to make sense because we only used this
function in ptw.c and would never use it on this kind of stage 1+2
mmuidx, only for an individual stage 1 or stage 2 mmuidx.
However, when we implemented FEAT_E0PD we added a callsite in
aa64_va_parameters(), which means this can now be called for
stage 1+2 mmuidx values if the guest sets the TCR_ELx.{E0PD0,E0PD1}
bits to enable use of the feature. This will then result in
an assertion failure later, for instance on a TLBI operation:
#6 0x00007ffff6d0e70f in g_assertion_message_expr
(domain=0x0, file=0x55555676eeba "../../target/arm/internals.h", line=978, func=0x555556771d48 <__func__.5> "regime_is_user", expr=<optimised out>)
at ../../../glib/gtestutils.c:3279
#7 0x0000555555f286d2 in regime_is_user (env=0x555557f2fe00, mmu_idx=ARMMMUIdx_E10_0) at ../../target/arm/internals.h:978
#8 0x0000555555f3e31c in aa64_va_parameters (env=0x555557f2fe00, va=18446744073709551615, mmu_idx=ARMMMUIdx_E10_0, data=true, el1_is_aa32=false)
at ../../target/arm/helper.c:12048
#9 0x0000555555f3163b in tlbi_aa64_get_range (env=0x555557f2fe00, mmuidx=ARMMMUIdx_E10_0, value=106721347371041) at ../../target/arm/helper.c:5214
#10 0x0000555555f317e8 in do_rvae_write (env=0x555557f2fe00, value=106721347371041, idxmap=21, synced=true) at ../../target/arm/helper.c:5260
#11 0x0000555555f31925 in tlbi_aa64_rvae1is_write (env=0x555557f2fe00, ri=0x555557fbeae0, value=106721347371041) at ../../target/arm/helper.c:5302
#12 0x0000555556036f8f in helper_set_cp_reg64 (env=0x555557f2fe00, rip=0x555557fbeae0, value=106721347371041) at ../../target/arm/tcg/op_helper.c:965
Since we do know whether these mmuidx values are for usermode
or not, we can easily make regime_is_user() handle them:
ARMMMUIdx_E10_0 is user, and the other two are not.
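A minimal sketch of the idea (not necessarily the exact upstream change;
most existing cases are elided):

/* Sketch: handle the combined stage 1+2 EL1&0 indexes directly rather
 * than asserting.  ARMMMUIdx_E10_0 is the EL0 (user) regime; the
 * ARMMMUIdx_E10_1 and ARMMMUIdx_E10_1_PAN variants are privileged. */
static bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
{
    switch (mmu_idx) {
    case ARMMMUIdx_E10_0:
    case ARMMMUIdx_E20_0:
    case ARMMMUIdx_Stage1_E0:
        return true;
    default:
        return false;   /* including ARMMMUIdx_E10_1{,_PAN} */
    }
}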
Cc: qemu-stable@nongnu.org
Fixes: e4c93e44ab ("target/arm: Implement FEAT_E0PD")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20241017172331.822587-1-peter.maydell@linaro.org
Currently we store the FPSR cumulative exception bits in the
float_status fields, and use env->vfp.fpsr only for the NZCV bits.
(The QC bit is stored in env->vfp.qc[].)
This works for TCG, but if QEMU was built without CONFIG_TCG (i.e.
with KVM support only) then we use the stub versions of
vfp_get_fpsr_from_host() and vfp_set_fpsr_to_host() which do nothing,
throwing away the cumulative exception bit state. The effect is that
if the FPSR state is round-tripped from KVM to QEMU then we lose the
cumulative exception bits. In particular, this will happen if the VM
is migrated. There is no user-visible bug when using KVM with a QEMU
binary that was built with CONFIG_TCG.
Fix this by always storing the cumulative exception bits in
env->vfp.fpsr. If we are using TCG then we may also keep pending
cumulative exception information in the float_status fields, so we
continue to fold that in on reads.
This change will also be helpful for implementing FEAT_AFP later,
because that includes a feature where in some situations we want to
cause input denormals to be flushed to zero without affecting the
existing state of the FPSR.IDC bit, so we need a place to store IDC
which is distinct from the various float_status fields.
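A rough sketch of the resulting read path (names and details are
illustrative, not the exact patch):

/* Sketch: the architectural cumulative bits now live in env->vfp.fpsr;
 * with TCG, any exception flags still pending in the float_status
 * fields are folded in when the FPSR is read. */
uint32_t vfp_get_fpsr(CPUARMState *env)
{
    uint32_t fpsr = env->vfp.fpsr;     /* cumulative bits stored here now */
#ifdef CONFIG_TCG
    fpsr |= vfp_get_fpsr_from_host(env);   /* pending float_status flags */
#endif
    return fpsr;
}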
(Note for stable backports: the bug goes back to 4a15527c9f but
this code was refactored in commits ea8618382aba..a8ab8706d4cc461, so
fixing it in branches without those refactorings will mean either
backporting the refactor or else implementing a conceptually similar
fix for the old code.)
Cc: qemu-stable@nongnu.org
Fixes: 4a15527c9f ("target/arm/vfp_helper: Restrict the SoftFloat use to TCG")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241011162401.3672735-1-peter.maydell@linaro.org
Extend the 'mte' property for the virt machine to cover KVM as
well. For KVM, we don't allocate tag memory, but instead enable
the capability.
If MTE has been enabled, we need to disable migration, as we do not
yet have a way to migrate the tags as well. Therefore, MTE will stay
off with KVM unless requested explicitly.
[gankulkarni: This patch is a rework of commit b320e21c48,
which broke TCG since it made the TCG -cpu max
report the presence of MTE to the guest even if the board hadn't
enabled MTE by wiring up the tag RAM. This meant that if the guest
then tried to use MTE QEMU would segfault accessing the
non-existent tag RAM.]
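A sketch of the shape of the change (assuming the capability is named
KVM_CAP_ARM_MTE and glossing over the exact error/migration-blocker
plumbing, which may differ in the actual patch):

/* Sketch: when MTE is requested and we run on KVM, enable the
 * capability instead of wiring up tag RAM, and block migration
 * because tag state cannot be migrated yet. */
if (kvm_enabled()) {
    if (kvm_vm_enable_cap(kvm_state, KVM_CAP_ARM_MTE, 0) < 0) {
        error_setg(errp, "MTE requested, but not supported by KVM");
        return;
    }
    /* register a migration blocker here, e.g. via migrate_add_blocker() */
}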
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Gustavo Romero <gustavo.romero@linaro.org>
Signed-off-by: Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com>
Message-id: 20241008114302.4855-1-gankulkarni@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Because virtio-scsi type devices use a non-architected IPLB pbt code, they cannot
be set and stored normally. Instead, the IPLB must be rebuilt during re-ipl.
As s390x does not natively support multiple boot devices, the devno field is
used to store the position in the boot order for the device.
Handling the rebuild as part of DIAG308 removes the need to check the devices
for invalid IPLBs later in the IPL.
Signed-off-by: Jared Rossi <jrossi@linux.ibm.com>
Acked-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20241020012953.1380075-17-jrossi@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
This argument is no longer used.
Suggested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20241013184733.1423747-4-richard.henderson@linaro.org>
The probe_access_full_mmu function was designed for this purpose,
and does not report the memory operation event to plugins.
Cc: qemu-stable@nongnu.org
Fixes: 6d03226b42 ("plugins: force slow path when plugins instrument memory ops")
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20241013184733.1423747-3-richard.henderson@linaro.org>
When translating virtual to physical address with a guest CPU that
supports nested paging (NPT), we need to perform every page table walk
access indirectly through the NPT, which we correctly do.
However, we treat real mode (no page table walk) specially: in that case,
we currently just skip any walks and translate VA -> PA directly. With NPT
enabled, we then also need to perform the NPT walk to do GVA -> GPA -> HPA,
which we fail to do so far.
The net result of that is that TCG VMs with NPT enabled that execute
real mode code (like SeaBIOS) end up with GPA==HPA mappings which means
the guest accesses host code and data. This typically shows as failure
to boot guests.
This patch changes the page walk logic for NPT enabled guests so that we
always perform a GVA -> GPA translation and then skip any logic that
requires an actual PTE.
That way, all remaining logic to walk the NPT stays and we successfully
walk the NPT in real mode.
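In pseudocode, the reworked flow looks roughly like this (a sketch of the
description above, not the literal patch; the helper names are placeholders):

/* Sketch: even with guest paging disabled (real mode), an NPT guest
 * still needs the GPA -> HPA stage; only the guest PTE walk is skipped. */
if (use_stage2) {                        /* NPT enabled for this vCPU */
    if (!guest_paging_enabled) {
        gpa = vaddr;                     /* GVA == GPA, no guest PTE walk */
    } else {
        gpa = walk_guest_page_tables(vaddr);  /* each access goes via NPT */
    }
    hpa = walk_nested_page_tables(gpa);  /* always performed now */
} else {
    hpa = guest_paging_enabled ? walk_guest_page_tables(vaddr) : vaddr;
}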
Cc: qemu-stable@nongnu.org
Fixes: fe441054bb ("target-i386: Add NPT support")
Signed-off-by: Alexander Graf <graf@amazon.com>
Reported-by: Eduard Vlad <evlad@amazon.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240921085712.28902-1-graf@amazon.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The error message for a "stepping" value that is out of bounds is a
bit odd:
$ qemu-system-x86_64 -cpu qemu64,stepping=16
qemu-system-x86_64: can't apply global qemu64-x86_64-cpu.stepping=16: Property .stepping doesn't take value 16 (minimum: 0, maximum: 15)
The "can't apply global" part is an unfortunate artifact of -cpu's
implementation. Left for another day.
The remainder feels overly verbose. Change it to
qemu64-x86_64-cpu: can't apply global qemu64-x86_64-cpu.stepping=16: parameter 'stepping' can be at most 15
Likewise for "family", "model", and "tsc-frequency".
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-ID: <20241010150144.986655-6-armbru@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Properties "family", "model", and "stepping" are visited as signed
integers. They are backed by bits in CPUX86State member
@cpuid_version. The code to extract and insert these bits mixes
signed and unsigned. Not actually wrong, but avoiding such mixing is
good practice.
Visit them as unsigned integers instead.
This adds a few mildly ugly casts in arguments of error_setg(). The
next commit will get rid of them again.
Property "tsc-frequency" is also visited as signed integer. The value
ultimately flows into the kernel, where it is 31 bits unsigned. The
QEMU code freely mixes int, uint32_t, int64_t. I elect not to attempt
draining this swamp today.
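For illustration, the pattern is along these lines (a hedged sketch using
the generic QAPI visitor API; the actual property getters in the patch may
differ in detail):

/* Sketch: extract the bit-field as an unsigned value and visit it with
 * an unsigned visitor, avoiding signed/unsigned mixing. */
static void x86_cpuid_version_get_family(Object *obj, Visitor *v,
                                         const char *name, void *opaque,
                                         Error **errp)
{
    X86CPU *cpu = X86_CPU(obj);
    uint64_t value = (cpu->env.cpuid_version >> 8) & 0xf;

    if (value == 0xf) {
        value += (cpu->env.cpuid_version >> 20) & 0xff;
    }
    visit_type_uint64(v, name, &value, errp);
}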
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-ID: <20241010150144.986655-5-armbru@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
* target/i386: Fixes for IN and OUT with REX prefix
* target/i386: New CPUID features and logic fixes
* target/i386: Add support save/load HWCR MSR
* target/i386: Move more instructions to new decoder; separate decoding
and IR generation
* target/i386/tcg: Use DPL-level accesses for interrupts and call gates
* accel/kvm: perform capability checks on VM file descriptor when necessary
* accel/kvm: dynamically sized kvm memslots array
* target/i386: fixes for Hyper-V
* docs/system: Add recommendations to Hyper-V enlightenments doc
Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
* tcg/s390x: Fix for TSTEQ/TSTNE
* target/i386: Fixes for IN and OUT with REX prefix
* target/i386: New CPUID features and logic fixes
* target/i386: Add support save/load HWCR MSR
* target/i386: Move more instructions to new decoder; separate decoding
and IR generation
* target/i386/tcg: Use DPL-level accesses for interrupts and call gates
* accel/kvm: perform capability checks on VM file descriptor when necessary
* accel/kvm: dynamically sized kvm memslots array
* target/i386: fixes for Hyper-V
* docs/system: Add recommendations to Hyper-V enlightenments doc
# -----BEGIN PGP SIGNATURE-----
#
# iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmcRTIoUHHBib256aW5p
# QHJlZGhhdC5jb20ACgkQv/vSX3jHroMCewf8DnZbz7/0beql2YycrdPJZ3xnmfWW
# JenWKIThKHGWRTW2ODsac21n0TNXE0vsOYjw/Z/dNLO+72sLcqvmEB18+dpHAD2J
# ltb8OvuROc3nn64OEi08qIj7JYLmJ/osroI+6NnZrCOHo8nCirXoCHB7ZPqAE7/n
# yDnownWaduXmXt3+Vs1mpqlBklcClxaURDDEQ8CGsxjC3jW03cno6opJPZpJqk0t
# 6aX92vX+3lNhIlije3QESsDX0cP1CFnQmQlNNg/xzk+ZQO+vSRrPV+A/N9xf8m1b
# HiaCrlBWYef/sLgOHziOSrJV5/N8W0GDEVYDmpEswHE81BZxrOTZLxqzWw==
# =qwfc
# -----END PGP SIGNATURE-----
# gpg: Signature made Thu 17 Oct 2024 18:42:34 BST
# gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg: issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (26 commits)
target/i386: Use only 16 and 32-bit operands for IN/OUT
accel/kvm: check for KVM_CAP_MEMORY_ATTRIBUTES on vm
accel/kvm: check for KVM_CAP_MULTI_ADDRESS_SPACE on vm
accel/kvm: check for KVM_CAP_READONLY_MEM on VM
target/i386/tcg: Use DPL-level accesses for interrupts and call gates
KVM: Rename KVMState->nr_slots to nr_slots_max
KVM: Rename KVMMemoryListener.nr_used_slots to nr_slots_used
KVM: Define KVM_MEMSLOTS_NUM_MAX_DEFAULT
KVM: Dynamic sized kvm memslots array
target/i386: assert that cc_op* and pc_save are preserved
target/i386: list instructions still in translate.c
target/i386: do not check PREFIX_LOCK in old-style decoder
target/i386: convert CMPXCHG8B/CMPXCHG16B to new decoder
target/i386: decode address before going back to translate.c
target/i386: convert bit test instructions to new decoder
tcg/s390x: fix constraint for 32-bit TSTEQ/TSTNE
docs/system: Add recommendations to Hyper-V enlightenments doc
target/i386: Make sure SynIC state is really updated before KVM_RUN
target/i386: Exclude 'hv-syndbg' from 'hv-passthrough'
target/i386: Fix conditional CONFIG_SYNDBG enablement
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Stack accesses should be explicit and use the privilege level of the
target stack. This ensures that SMAP is not applied when the target
stack is in ring 3.
This fixes a bug wherein i386/tcg assumed that an interrupt return, or a
far call using the CALL or JMP instruction, was always going from kernel
or user mode to kernel mode when using a call gate. This assumption is
violated if the call gate has a DPL that is greater than 0.
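A minimal sketch of the idea (the helper names here are assumptions for
illustration, not the exact functions introduced by the patch):

/* Sketch: push to the target stack using an MMU index derived from the
 * privilege level of the target stack, not the current CPL, so SMAP is
 * not applied when the target stack is in ring 3. */
static void pushl_dpl(CPUX86State *env, uint32_t *sp, uint32_t val,
                      int dpl, uintptr_t ra)
{
    int mmu_index = x86_mmu_index_pl(env, dpl);   /* name is an assumption */

    *sp -= 4;
    cpu_stl_mmuidx_ra(env, *sp, val, mmu_index, ra);
}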
Analyzed-by: Robert R. Henry <rrh.henry@gmail.com>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/249
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Now all decoding has been done before any code generation.
There is no need anymore to save and restore cc_op* and
pc_save but, for the time being, assert that this is indeed
the case.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Group them so that it is easier to figure out which two-byte opcodes to
tackle together.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
It is already checked before getting there.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The gen_cmpxchg8b and gen_cmpxchg16b functions even have the correct
prototype already; the only thing that needs to be done is removing the
gen_lea_modrm() call.
This moves the last LOCK-enabled instructions to the new decoder. It is
now possible to assume that gen_multi0F is called only after checking
that PREFIX_LOCK was not specified.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
There are now relatively few unconverted opcodes in translate.c (there
are 13 of them including 8 for x87), and all of them have the same
format with a mod/rm byte and no immediate. A good next step is
to remove the early bail out to disas_insn_x87/disas_insn_old,
instead giving these legacy translator functions the same prototype
as the other gen_* functions.
To do this, the X86DecodeInsn can be passed down to the places that
used to fetch address bytes from the instruction stream. To make
sure that everything is done cleanly, the CPUX86State* argument is
removed.
As part of the unification, the gen_lea_modrm() name is now free,
so rename gen_load_ea() to gen_lea_modrm(). This is as good a name
and it makes the changes to translate.c easier to review.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Code generation was rewritten; it reuses the same trick to use the
CC_OP_SAR values for cc_op, but it tries to use CC_OP_ADCX or CC_OP_ADCOX
instead of CC_OP_EFLAGS. This is a tiny bit more efficient in the
common case where only CF is checked in the resulting flags.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge tag 'pull-loongarch-20241016' of https://gitlab.com/gaosong/qemu into staging
pull-loongarch-20241016
# -----BEGIN PGP SIGNATURE-----
#
# iLMEAAEKAB0WIQS4/x2g0v3LLaCcbCxAov/yOSY+3wUCZw91kQAKCRBAov/yOSY+
# 3+RyA/9vpqCesEBch5mzrazO4MT2IxeN2bstF8mY+EyfEwK7Ocg+esRBsigWw56k
# y6RDyCzHg200GL9TC8bJ/nMiMJjXrahhHRPVs8AADazMzX/Ys7E7ntvUUnqqANh6
# ZX8fzNJMKW6qeUVrCIwCC7E+KjfNu32dcxbXCF4mZsehIumpUQ==
# =uk+a
# -----END PGP SIGNATURE-----
# gpg: Signature made Wed 16 Oct 2024 09:13:05 BST
# gpg: using RSA key B8FF1DA0D2FDCB2DA09C6C2C40A2FFF239263EDF
# gpg: Good signature from "Song Gao <m17746591750@163.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: B8FF 1DA0 D2FD CB2D A09C 6C2C 40A2 FFF2 3926 3EDF
* tag 'pull-loongarch-20241016' of https://gitlab.com/gaosong/qemu:
hw/loongarch/fw_cfg: Build in common_ss[]
hw/loongarch/virt: Remove unnecessary 'cpu.h' inclusion
target/loongarch: Avoid bits shift exceeding width of bool type
hw/loongarch/virt: Add FDT table support with acpi ged pm register
acpi: ged: Add macro for acpi sleep control register
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The 'hyperv_synic' test from KVM unittests was observed to be flaky on certain
hardware (hangs sometimes). Debugging shows that the problem happens in
hyperv_sint_route_new() when the test tries to set up a new SynIC
route. The function bails out on:
if (!synic->sctl_enabled) {
goto cleanup;
}
but the test writes to HV_X64_MSR_SCONTROL just before it starts
establishing SINT routes. Further investigation shows that
synic_update() (called from async_synic_update()) happens after the SINT
setup attempt and not before. Apparently, the comment before
async_safe_run_on_cpu() in kvm_hv_handle_exit() does not correctly describe
the guarantees async_safe_run_on_cpu() gives. In particular, async work
added to a CPU is actually processed from qemu_wait_io_event(), which is not
always called before KVM_RUN; i.e. kvm_cpu_exec() checks whether an exit
request is pending for a CPU and, if not, keeps running the vCPU until it
meets an exit it can't handle internally. Hyper-V specific MSR writes do
not automatically trigger an exit.
Fix the issue by simply raising an exit request for the vCPU where SynIC
update was queued. This is not a performance critical path as SynIC state
does not get updated so often (and async_safe_run_on_cpu() is a big hammer
anyways).
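Conceptually, the fix is just a kick of the target vCPU after the async
work is queued, something like the sketch below (the actual patch may use a
different helper):

/* Sketch: make sure the vCPU leaves KVM_RUN so that qemu_wait_io_event()
 * runs the queued SynIC update before the guest sets up SINT routes. */
async_safe_run_on_cpu(CPU(cpu), async_synic_update, RUN_ON_CPU_NULL);
cpu_exit(CPU(cpu));   /* raise an exit request for this vCPU */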
Reported-by: Jan Richter <jarichte@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Link: https://lore.kernel.org/r/20240917160051.2637594-4-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Windows with the Hyper-V role enabled doesn't boot with 'hv-passthrough' when
no debugger is configured; this significantly limits the usefulness of the
feature as there's no support for subtracting Hyper-V features from CPU
flags at this moment (e.g. "-cpu host,hv-passthrough,-hv-syndbg" does not
work). While this is also theoretically fixable, 'hv-syndbg' is likely
very special and unneeded in the default set. Genuine Hyper-V doesn't seem
to enable it either.
Introduce a 'skip_passthrough' flag in 'kvm_hyperv_properties' and use it as a
one-off to skip 'hv-syndbg' when enabling features in 'hv-passthrough'
mode. Note, "-cpu host,hv-passthrough,hv-syndbg" can still be used if
needed.
As both 'hv-passthrough' and 'hv-syndbg' are debug features, the change
should not have any effect on production environments.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Link: https://lore.kernel.org/r/20240917160051.2637594-3-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Putting the HYPERV_FEAT_SYNDBG entry under "#ifdef CONFIG_SYNDBG" in
'kvm_hyperv_properties' array is wrong: as HYPERV_FEAT_SYNDBG is not
the highest feature number, the result is an empty (zeroed) entry in
the array (and not a skipped entry!). hyperv_feature_supported() is
designed to check that all CPUID bits are set but for a zeroed
feature in 'kvm_hyperv_properties' it returns 'true' so QEMU considers
HYPERV_FEAT_SYNDBG as always supported, regardless of whether the KVM host
actually supports it.
To fix the issue, leave HYPERV_FEAT_SYNDBG's definition in
'kvm_hyperv_properties' array, there's nothing wrong in having it defined
even when 'CONFIG_SYNDBG' is not set. Instead, put "hv-syndbg" CPU property
under '#ifdef CONFIG_SYNDBG' to alter the existing behavior when the flag
is silently skipped in !CONFIG_SYNDBG builds.
Leave an 'assert' sentinel in hyperv_feature_supported() making sure there
are no 'holes' or improperly defined features in 'kvm_hyperv_properties'.
Fixes: d8701185f4 ("hw: hyperv: Initial commit for Synthetic Debugging device")
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Link: https://lore.kernel.org/r/20240917160051.2637594-2-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM commit 191c8137a939 ("x86/kvm: Implement HWCR support")
introduced support for emulating HWCR MSR.
Add support for QEMU to save/load this MSR for migration purposes.
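The shape of the change is roughly as follows (a sketch; the MSR constant
and the env field name are assumptions for illustration):

/* Sketch: in kvm_put_msrs(), save the MSR like any other emulated MSR;
 * the matching read-back in kvm_get_msrs() and a vmstate entry make the
 * value survive migration. */
kvm_msr_entry_add(cpu, MSR_K7_HWCR, env->msr_hwcr);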
Signed-off-by: Gao Shiyuan <gaoshiyuan@baidu.com>
Signed-off-by: Wang Liang <wangliang44@baidu.com>
Link: https://lore.kernel.org/r/20241009095109.66843-1-gaoshiyuan@baidu.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The following 5 bits in CPUID.7.2.EDX are supported by KVM. Add support
for them in QEMU. Each of them indicates that certain bits of IA32_SPEC_CTRL
are supported. Those bits can control CPU speculation behavior which can
be used to defend against side-channel attacks.
bit 0: intel-psfd
If 1, indicates bit 7 of the IA32_SPEC_CTRL MSR is supported. Bit 7 of
this MSR disables Fast Store Forwarding Predictor without disabling
Speculative Store Bypass.
bit 1: ipred-ctrl
If 1, indicates bits 3 and 4 of the IA32_SPEC_CTRL MSR are supported.
Bit 3 of this MSR enables IPRED_DIS control for CPL3. Bit 4 of this
MSR enables IPRED_DIS control for CPL0/1/2.
bit 2: rrsba-ctrl
If 1, indicates bits 5 and 6 of the IA32_SPEC_CTRL MSR are supported.
Bit 5 of this MSR disables RRSBA behavior for CPL3. Bit 6 of this MSR
disables RRSBA behavior for CPL0/1/2.
bit 3: ddpd-u
If 1, indicates bit 8 of the IA32_SPEC_CTRL MSR is supported. Bit 8 of
this MSR disables the Data Dependent Prefetcher.
bit 4: bhi-ctrl
If 1, indicates bit 10 of the IA32_SPEC_CTRL MSR is supported. Bit 10
of this MSR enables BHI_DIS_S behavior.
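For reference, the named bits map to feature-word definitions along these
lines (the exact macro names used by the patch are assumptions):

/* Sketch: bit positions in CPUID.(EAX=7,ECX=2).EDX as described above. */
#define CPUID_7_2_EDX_PSFD          (1U << 0)  /* intel-psfd */
#define CPUID_7_2_EDX_IPRED_CTRL    (1U << 1)  /* ipred-ctrl */
#define CPUID_7_2_EDX_RRSBA_CTRL    (1U << 2)  /* rrsba-ctrl */
#define CPUID_7_2_EDX_DDPD_U        (1U << 3)  /* ddpd-u */
#define CPUID_7_2_EDX_BHI_CTRL      (1U << 4)  /* bhi-ctrl */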
Signed-off-by: Chao Gao <chao.gao@intel.com>
Link: https://lore.kernel.org/r/20240919051011.118309-1-chao.gao@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When the user sets tsc-frequency explicitly, the invtsc feature is actually
migratable because the tsc-frequency is supposed to be fixed during the
migration.
See commit d99569d9d8 ("kvm: Allow invtsc migration if tsc-khz
is set explicitly") for referrence.
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Link: https://lore.kernel.org/r/20240814075431.339209-10-xiaoyao.li@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
- CPUID.(EAX=07H,ECX=0H):EBX[bit 6]: x87 FPU Data Pointer updated only
on x87 exceptions if 1.
- CPUID.(EAX=07H,ECX=0H):EBX[bit 13]: Deprecates FPU CS and FPU DS
values if 1. i.e., X87 FCS and FDS are always zero.
Define names for them so that they can be exposed to guest with -cpu host.
Also define the bit field MACROs so that named cpu models can add it as
well in the future.
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Link: https://lore.kernel.org/r/20240814075431.339209-3-xiaoyao.li@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Currently, QEMU always constructs an all-zero CPUID entry for
CPUID[0xD 0x3f].
It's meaningless to construct such a leaf as the end of leaf 0xD. Rework
the logic of how subleaves of 0xD are constructed to get rid of such
all-zero value of subleaf 0x3f.
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Link: https://lore.kernel.org/r/20240814075431.339209-2-xiaoyao.li@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The variable env->cf[i] is defined as bool type, but it is treated as int
type in the shift operation. However, the maximum possible shift amount is
56, exceeding the width of the int type. There is an existing API,
read_fcc(), which converts to u64 type with bitwise shifts; it can be
used to dump the fp registers into the coredump note segment.
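A small sketch of the difference (illustrative only, not the literal patch):

/* Problem: env->cf[i] is a bool; "env->cf[i] << (i * 8)" shifts an int,
 * and i * 8 can reach 56, which exceeds the width of int. */
uint64_t fcc = 0;
for (int i = 0; i < 8; i++) {
    fcc |= (uint64_t)env->cf[i] << (i * 8);   /* widen before shifting */
}
/* or simply reuse the existing helper that already returns a u64: */
fcc = read_fcc(env);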
Resolves: Coverity CID 1561133
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240914064645.2099169-1-maobibo@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
mips_cpu_create_with_clock() creates a vCPU. Pass it the requested
vCPU endianness as an argument. Update the board call sites.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20241010215015.44326-17-philmd@linaro.org>
Add the "big-endian" property and set the CP0C0_BE bit in CP0_Config0.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20241010215015.44326-15-philmd@linaro.org>
Directly use tcg_constant_tl() for constant integers; this
saves a call to tcg_gen_movi_tl(), often saving a temp register.
Most of the places found using the following Coccinelle spatch script:
@@
identifier tmp;
constant val;
@@
* TCGv tmp = tcg_temp_new();
...
* tcg_gen_movi_tl(tmp, val);
@@
identifier tmp;
int val;
@@
* TCGv tmp = tcg_temp_new();
...
* tcg_gen_movi_i64(tmp, val);
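A typical before/after, shown as a hedged C sketch (the surrounding call is
a placeholder for whichever user of the temp is involved):

/* before: allocate a temp and move an immediate into it */
TCGv t0 = tcg_temp_new();
tcg_gen_movi_tl(t0, 0);
gen_store_gpr(t0, rd);               /* placeholder for the actual user */

/* after: use a constant TCG value directly, no temp or movi needed */
gen_store_gpr(tcg_constant_tl(0), rd);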
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20241004202621.4321-2-philmd@linaro.org>
Replace tcg_gen_movi_tl() + gen_op_addr_add() with a single
gen_op_addr_addi() call.
gen_op_addr_addi() calls tcg_gen_addi_tl(), which can optimize
the case where the immediate is zero.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20241010215015.44326-13-philmd@linaro.org>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20241010215015.44326-12-philmd@linaro.org>
Introduce mo_endian() which returns the endian MemOp
corresponding to the vCPU DisasContext.
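A minimal sketch of such a helper (how the DisasContext records the vCPU
endianness is an assumption here; the helper it queries is the one renamed
in a related patch in this series):

static inline MemOp mo_endian(DisasContext *dc)
{
    return disas_is_bigendian(dc) ? MO_BE : MO_LE;
}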
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20241010215015.44326-10-philmd@linaro.org>
MEMOP_IDX() is unused since commit 948f88661c ("target/mips:
Use cpu_*_data_ra for msa load/store"), remove it.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20241014232235.51988-1-philmd@linaro.org>
In commit 6d0cad1259 ("target/mips: Finish conversion to
tcg_gen_qemu_{ld,st}_*") we renamed the argument of the user
definition. Rename the system part for consistency. Since the
argument is ignored, prefix with 'ignored_'.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20241010215015.44326-9-philmd@linaro.org>
Extract the implicit MO_TE definition in order to replace
it with a runtime variable in the next commit.
Mechanical change using:
$ for n in UW UL UQ UO SW SL SQ; do \
sed -i -e "s/MO_TE$n/MO_TE | MO_$n/" \
$(git grep -l MO_TE$n target/mips); \
done
then manually remove the superfluous parentheses in nanoMIPS gen_save().
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20241010215015.44326-8-philmd@linaro.org>
Instead of swapping the reversed target endianness
using MO_BSWAP, directly return the correct endianness.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20241010215015.44326-7-philmd@linaro.org>
Functions are easier to rework than macros. Besides,
there is no gain here in inlining these.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20241010215015.44326-6-philmd@linaro.org>
Replace the compile-time MO_TE evaluation with the runtime mo_endian_env()
one, which expands the target endianness from the vCPU env.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20241010215015.44326-5-philmd@linaro.org>
Introduce mo_endian_env() which returns the endian
MemOp corresponding to the vCPU env.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20241010215015.44326-4-philmd@linaro.org>
Methods using the 'cpu_' prefix usually take an (Arch)CPUState
argument. Since this method takes a DisasContext argument,
rename it as disas_is_bigendian().
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20241010215015.44326-3-philmd@linaro.org>
In order to re-use cpu_is_bigendian(), declare it in "internal.h"
after renaming it as mips_env_is_bigendian().
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20241010215015.44326-2-philmd@linaro.org>
Directly use tcg_constant_tl() for constant integers;
this saves a call to tcg_gen_movi_tl() and a temp register.
Inspired-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20241004202621.4321-4-philmd@linaro.org>
Directly use tcg_constant_tl() for constant integers;
this saves a call to tcg_gen_movi_tl().
Inspired-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20241004202621.4321-3-philmd@linaro.org>
The TriCore architecture uses little endianness. Directly use
the little-endian LD/ST API.
Mechanical change using:
$ end=le; \
for acc in uw w l q tul; do \
sed -i -e "s/ld${acc}_p(/ld${acc}_${end}_p(/" \
-e "s/st${acc}_p(/st${acc}_${end}_p(/" \
$(git grep -wlE '(ld|st)t?u?[wlq]_p' target/tricore/); \
done
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20241004163042.85922-15-philmd@linaro.org>
The LoongArch architecture uses little endianness. Directly
use the little-endian LD/ST API.
Mechanical change using:
$ end=le; \
for acc in uw w l q tul; do \
sed -i -e "s/ld${acc}_p(/ld${acc}_${end}_p(/" \
-e "s/st${acc}_p(/st${acc}_${end}_p(/" \
$(git grep -wlE '(ld|st)t?u?[wlq]_p' target/loongarch/); \
done
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20241004163042.85922-13-philmd@linaro.org>
The AVR architecture uses little endianness. Directly use
the little-endian LD/ST API.
Mechanical change using:
$ end=le; \
for acc in uw w l q tul; do \
sed -i -e "s/ld${acc}_p(/ld${acc}_${end}_p(/" \
-e "s/st${acc}_p(/st${acc}_${end}_p(/" \
$(git grep -wlE '(ld|st)t?u?[wlq]_p' target/avr/); \
done
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20241004163042.85922-11-philmd@linaro.org>
The Hexagon architecture uses little endianness. Directly use
the little-endian LD/ST API.
Mechanical change using:
$ end=le; \
for acc in uw w l q tul; do \
sed -i -e "s/ld${acc}_p(/ld${acc}_${end}_p(/" \
-e "s/st${acc}_p(/st${acc}_${end}_p(/" \
$(git grep -wlE '(ld|st)t?u?[wlq]_p' target/hexagon/); \
done
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20241004163042.85922-8-philmd@linaro.org>
The Alpha architecture uses little endianness. Directly use
the little-endian LD/ST API.
Mechanical change using:
$ end=le; \
for acc in uw w l q tul; do \
sed -i -e "s/ld${acc}_p(/ld${acc}_${end}_p(/" \
-e "s/st${acc}_p(/st${acc}_${end}_p(/" \
$(git grep -wlE '(ld|st)t?u?[wlq]_p' target/alpha/); \
done
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20241004163042.85922-7-philmd@linaro.org>
The Alpha target is only built for 64-bit.
Using ldtul_p() is pointless; replace it with ldq_p().
Mechanical change doing:
$ sed -i -e 's/ldtul_p/ldq_p/' $(git grep -wl ldtul_p target/alpha/)
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20241004163042.85922-4-philmd@linaro.org>
The Hexagon target is only built for 32-bit.
Using ldtul_p() is pointless; replace it with ldl_p().
Mechanical change doing:
$ sed -i -e 's/ldtul_p/ldl_p/' \
$(git grep -wl ldtul_p target/hexagon/)
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20241004163042.85922-3-philmd@linaro.org>
Now that we have the MemOp for the access, we can order
the alignment fault caused by memory type before the
permission fault for the page.
For subsequent page hits, permission and stage 2 checks
are known to pass, and so the TLB_CHECK_ALIGNED fault
raised in generic code is not mis-ordered.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Fill in the tlb_fill_align hook. Handle alignment not due to
memory type, since that's no longer handled by generic code.
Pass memop to get_phys_addr.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Determine cache attributes, and thence Device vs Normal memory,
earlier in the function. We have an existing regime_is_stage2
if block into which this can be slotted.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Pass the value through from get_phys_addr_nogpc.
Reviewed-by: Helge Deller <deller@gmx.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Pass memop through get_phys_addr_twostage with its
recursion with get_phys_addr_nogpc.
Reviewed-by: Helge Deller <deller@gmx.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Zero is the safe do-nothing value for callers to use.
Pass the value through from get_phys_addr_gpc and
get_phys_addr_with_space_nogpc.
Reviewed-by: Helge Deller <deller@gmx.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Zero is the safe do-nothing value for callers to use.
Pass the value through from get_phys_addr.
Reviewed-by: Helge Deller <deller@gmx.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Zero is the safe do-nothing value for callers to use.
Reviewed-by: Helge Deller <deller@gmx.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Zero is the safe do-nothing value for callers to use.
Reviewed-by: Helge Deller <deller@gmx.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Convert hppa_cpu_tlb_fill to hppa_cpu_tlb_fill_align so that we
can recognize alignment exceptions in the correct priority order.
Resolves: https://bugzilla.kernel.org/show_bug.cgi?id=219339
Tested-by: Helge Deller <deller@gmx.de>
Reviewed-by: Helge Deller <deller@gmx.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
In Chapter 5, Interruptions, the group 3 exceptions list gives
"Unaligned data reference trap" higher priority than
"Data memory break trap".
Reviewed-by: Helge Deller <deller@gmx.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Drop the 'else' so that ret is overridden with the
highest priority fault.
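In pattern form (a sketch, not the literal hppa code; the condition and
exception names are placeholders):

/* before: the 'else' meant the second, higher-priority fault could
 * never replace a fault already recorded by the first check */
if (low_priority_fault_condition) {
    ret = EXCP_LOW;
}
if (high_priority_fault_condition) {   /* was "else if" before the fix */
    ret = EXCP_HIGH;                   /* now overrides the lower-priority one */
}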
Fixes: d8bc138125 ("target/hppa: Implement PSW_X")
Reviewed-by: Helge Deller <deller@gmx.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
In Chapter 5, Interruptions, the group 3 exceptions lists
"Data memory access rights trap" in priority order ahead of
"Data memory protection ID trap".
Swap these checks in hppa_get_physical_address.
Reviewed-by: Helge Deller <deller@gmx.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Just add the argument, unused at this point.
Zero is the safe do-nothing value for all callers.
Reviewed-by: Helge Deller <deller@gmx.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Rename to use "memop_" prefix, like other functions
that operate on MemOp.
Reviewed-by: Helge Deller <deller@gmx.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Copy XML files describing orig_ax from GDB and glue them with
CPUX86State.orig_ax.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-ID: <20240912093012.402366-5-iii@linux.ibm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
i386 gdbstub handles both i386 and x86_64. Factor out two functions
for reading and writing registers without knowing their bitness.
While at it, simplify the TARGET_LONG_BITS == 32 case.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-ID: <20240912093012.402366-4-iii@linux.ibm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
It is used in a couple of places only, both within the same target.
Those can use the cflags just as well, so remove the separate field.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <20241010083641.1785069-1-pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Returning a raw areg does not preserve the value if the areg
is subsequently modified. Fixes, e.g. "jsr (sp)", where the
return address is pushed before the branch.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2483
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240813000737.228470-1-richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
The S390X architecture uses big endianness. Directly use
the big-endian LD/ST API.
Mechanical change using:
$ end=be; \
for acc in uw w l q tul; do \
sed -i -e "s/ld${acc}_p(/ld${acc}_${end}_p(/" \
-e "s/st${acc}_p(/st${acc}_${end}_p(/" \
$(git grep -wlE '(ld|st)t?u?[wlq]_p' target/s390x/); \
done
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20241004163042.85922-24-philmd@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
The S390X target is only built for 64-bit.
Using ldtul_p() is pointless; replace it with ldq_p().
Mechanical change doing:
$ sed -i -e 's/ldtul_p/ldq_p/' $(git grep -wl ldtul_p target/s390x/)
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20241004163042.85922-5-philmd@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
The M68K architecture uses big endianness. Directly use
the big-endian LD/ST API.
Mechanical change using:
$ end=be; \
for acc in uw w l q tul; do \
sed -i -e "s/ld${acc}_p(/ld${acc}_${end}_p(/" \
-e "s/st${acc}_p(/st${acc}_${end}_p(/" \
$(git grep -wlE '(ld|st)t?u?[wlq]_p' target/m68k/); \
done
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Thomas Huth <huth@tuxfamily.org>
Message-ID: <20241004163042.85922-19-philmd@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
* kvm: support for nested FRED
* tests/unit: fix warning when compiling test-nested-aio-poll with LTO
* kvm: refactoring of VM creation
* target/i386: expose IBPB-BRTYPE and SBPB CPUID bits to the guest
* hw/char: clean up serial
* remove virtfs-proxy-helper
* target/i386/kvm: Report which action failed in kvm_arch_put/get_registers
* qom: improvements to object_resolve_path*()
Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
* pc: Add a description for the i8042 property
* kvm: support for nested FRED
* tests/unit: fix warning when compiling test-nested-aio-poll with LTO
* kvm: refactoring of VM creation
* target/i386: expose IBPB-BRTYPE and SBPB CPUID bits to the guest
* hw/char: clean up serial
* remove virtfs-proxy-helper
* target/i386/kvm: Report which action failed in kvm_arch_put/get_registers
* qom: improvements to object_resolve_path*()
# -----BEGIN PGP SIGNATURE-----
#
# iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmb++MsUHHBib256aW5p
# QHJlZGhhdC5jb20ACgkQv/vSX3jHroPVnwf/cdvfxvDm22tEdlh8vHlV17HtVdcC
# Hw334M/3PDvbTmGzPBg26lzo4nFS6SLrZ8ETCeqvuJrtKzqVk9bI8ssZW5KA4ijM
# nkxguRPHO8E6U33ZSucc+Hn56+bAx4I2X80dLKXJ87OsbMffIeJ6aHGSEI1+fKVh
# pK7q53+Y3lQWuRBGhDIyKNuzqU4g+irpQwXOhux63bV3ADadmsqzExP6Gmtl8OKM
# DylPu1oK7EPZumlSiJa7Gy1xBqL4Rc4wGPNYx2RVRjp+i7W2/Y1uehm3wSBw+SXC
# a6b7SvLoYfWYS14/qCF4cBL3sJH/0f/4g8ZAhDDxi2i5kBr0/5oioDyE/A==
# =/zo4
# -----END PGP SIGNATURE-----
# gpg: Signature made Thu 03 Oct 2024 21:04:27 BST
# gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg: issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (23 commits)
qom: update object_resolve_path*() documentation
qom: set *ambiguous on all paths
qom: rename object_resolve_path_type() "ambiguousp"
target/i386/kvm: Report which action failed in kvm_arch_put/get_registers
kvm: Allow kvm_arch_get/put_registers to accept Error**
accel/kvm: refactor dirty ring setup
minikconf: print error entirely on stderr
9p: remove 'proxy' filesystem backend driver
hw/char: Extract serial-mm
hw/char/serial.h: Extract serial-isa.h
hw: Remove unused inclusion of hw/char/serial.h
target/i386: Expose IBPB-BRTYPE and SBPB CPUID bits to the guest
kvm: refactor core virtual machine creation into its own function
kvm/i386: replace identity_base variable with a constant
kvm/i386: refactor kvm_arch_init and split it into smaller functions
kvm: replace fprintf with error_report()/printf() in kvm_init()
kvm/i386: fix return values of is_host_cpu_intel()
kvm/i386: make kvm_filter_msr() and related definitions private to kvm module
hw/i386/pc: Add a description for the i8042 property
tests/unit: remove block layer code from test-nested-aio-poll
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
# Conflicts:
# hw/arm/Kconfig
# hw/arm/pxa2xx.c
This is necessary to provide discernible error messages to the caller.
Signed-off-by: Julia Suvorova <jusual@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Link: https://lore.kernel.org/r/20240927104743.218468-2-jusual@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
According to AMD's Speculative Return Stack Overflow whitepaper (link
below), the hypervisor should synthesize the value of IBPB_BRTYPE and
SBPB CPUID bits to the guest.
Support for this is already present in the kernel with commit
e47d86083c66 ("KVM: x86: Add SBPB support") and commit 6f0f23ef76be
("KVM: x86: Add IBPB_BRTYPE support").
Add support in QEMU to expose the bits to the guest OS.
host:
# cat /sys/devices/system/cpu/vulnerabilities/spec_rstack_overflow
Mitigation: Safe RET
before (guest):
$ cpuid -l 0x80000021 -1 -r
0x80000021 0x00: eax=0x00000045 ebx=0x00000000 ecx=0x00000000 edx=0x00000000
^
$ cat /sys/devices/system/cpu/vulnerabilities/spec_rstack_overflow
Vulnerable: Safe RET, no microcode
after (guest):
$ cpuid -l 0x80000021 -1 -r
0x80000021 0x00: eax=0x18000045 ebx=0x00000000 ecx=0x00000000 edx=0x00000000
^
$ cat /sys/devices/system/cpu/vulnerabilities/spec_rstack_overflow
Mitigation: Safe RET
Reported-by: Fabian Vogt <fvogt@suse.de>
Link: https://www.amd.com/content/dam/amd/en/documents/corporate/cr/speculative-return-stack-overflow-whitepaper.pdf
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Link: https://lore.kernel.org/r/20240805202041.5936-1-farosas@suse.de
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The identity_base variable is first initialized to address 0xfffbc000 and then
kvm_vm_set_identity_map_addr() overrides this value to address 0xfeffc000.
The initial address to which the variable was initialized was never used. Clean
everything up, placing 0xfeffc000 in a preprocessor constant.
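The end result is roughly the following (the macro name is an assumption
for illustration):

/* Sketch: one well-known constant instead of a variable that was
 * initialized and then unconditionally overridden. */
#define KVM_IDENTITY_BASE 0xfeffc000

ret = kvm_vm_set_identity_map_addr(s, KVM_IDENTITY_BASE);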
Reported-by: Ani Sinha <anisinha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
kvm_arch_init() enables a lot of VM capabilities. Refactor them into separate,
smaller functions. Energy-MSR-related operations are also moved to their own
function. There should be no functional impact.
Signed-off-by: Ani Sinha <anisinha@redhat.com>
Link: https://lore.kernel.org/r/20240903124143.39345-2-anisinha@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* Add a property to set vl to ceil(AVL/2)
* Enable numamem testing for RISC-V
* Consider MISA bit choice in implied rule
* Fix the za64rs priv spec requirements
* Enable Bit Manip for OpenTitan Ibex CPU
* Fix the group bit setting of AIA with KVM
* Stop timer with infinite timecmp
* Add 'fcsr' register to QEMU log as a part of F extension
* Fix riscv64 build on musl libc
* Add preliminary textra trigger CSR functions
* RISC-V bsd-user support
* Respect firmware ELF entry point
* Add Svvptc extension support
* Fix masking of rv32 physical address
* Fix linking problem with semihosting disabled
* Fix IMSIC interrupt state updates
Merge tag 'pull-riscv-to-apply-20241002' of https://github.com/alistair23/qemu into staging
RISC-V PR for 9.2
* Add a property to set vl to ceil(AVL/2)
* Enable numamem testing for RISC-V
* Consider MISA bit choice in implied rule
* Fix the za64rs priv spec requirements
* Enable Bit Manip for OpenTitan Ibex CPU
* Fix the group bit setting of AIA with KVM
* Stop timer with infinite timecmp
* Add 'fcsr' register to QEMU log as a part of F extension
* Fix riscv64 build on musl libc
* Add preliminary textra trigger CSR functions
* RISC-V bsd-user support
* Respect firmware ELF entry point
* Add Svvptc extension support
* Fix masking of rv32 physical address
* Fix linking problem with semihosting disabled
* Fix IMSIC interrupt state updates
# -----BEGIN PGP SIGNATURE-----
#
# iQIzBAABCAAdFiEEaukCtqfKh31tZZKWr3yVEwxTgBMFAmb83lYACgkQr3yVEwxT
# gBNndBAAmh66yWt9TeTHlQ/rgBhx2nUMBbfICBWQyNGvPlslffwrNoLkh8jpkuiP
# PD0RQArAAGeM09cgCZCu14JzIBmmNiGgUxsUnqOZvUw18uIlLFlpt/tiT7iGw/Xb
# pfI7waF66/FPXBErY2yiw9/RGQLlkiGNBC9FNYrD/kCahf9MSIobv85tOgSQ2qjH
# nOJ+UBN0TQ1x0Z5lJMj9Pzl1WDvelRnCkYI5nXg1heKG73Hm7GmHt99QpTV2Okqn
# T3jFzEfMTQeHO4nC/X2pbaesE62K+mTg/FZpId2iV8lMCSm1zKof+xJ4boKM9RB2
# 0HjXAT+MveLuLUNtgfbV9C+VgU25M+wnfy5tH0l801Y/Gez8Q1fbK2uykuiyiUSy
# MNNk/KzmOYuffwItuyeL3mmWHXsN+izUIeMmMxfL9X9nssZXRsrDXc+MByS7w0fk
# QOeZmXHTxXwxFymr0t0DLK2eKEG6cqQty1KWp6iLx3uwnMTGo+576P41Q+boj64s
# VllWzmuR0Ta0xuSR4sDvEFCO7OCFEgVdn1j0FvhRFskPEDrbQgXRLq8i3awtU6z1
# NIh+A30XeK+EZLv0sEje6gav5lZHWMfAeCOKJstVzOl8+NQibuKTUrsqLgTrBK6K
# plw8qwvZYjSnYErzHfywlq9ArufIvOHYcx9Nb76tLNy9E+y01yo=
# =15Hm
# -----END PGP SIGNATURE-----
# gpg: Signature made Wed 02 Oct 2024 06:47:02 BST
# gpg: using RSA key 6AE902B6A7CA877D6D659296AF7C95130C538013
# gpg: Good signature from "Alistair Francis <alistair@alistair23.me>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 6AE9 02B6 A7CA 877D 6D65 9296 AF7C 9513 0C53 8013
* tag 'pull-riscv-to-apply-20241002' of https://github.com/alistair23/qemu: (35 commits)
bsd-user: Add RISC-V 64-bit Target Configuration and Debug XML Files
bsd-user: Implement set_mcontext and get_ucontext_sigreturn for RISCV
bsd-user: Implement 'get_mcontext' for RISC-V
bsd-user: Implement RISC-V signal trampoline setup functions
bsd-user: Define RISC-V signal handling structures and constants
bsd-user: Add generic RISC-V64 target definitions
bsd-user: Define RISC-V system call structures and constants
bsd-user: Define RISC-V VM parameters and helper functions
bsd-user: Add RISC-V thread setup and initialization support
bsd-user: Implement RISC-V sysarch system call emulation
bsd-user: Add RISC-V signal trampoline setup function
bsd-user: Define RISC-V register structures and register copying
bsd-user: Add RISC-V ELF definitions and hardware capability detection
bsd-user: Implement RISC-V TLS register setup
bsd-user: Implement RISC-V CPU register cloning and reset functions
bsd-user: Add RISC-V CPU execution loop and syscall handling
bsd-user: Implement RISC-V CPU initialization and main loop
hw/intc: riscv-imsic: Fix interrupt state updates.
target/riscv/cpu_helper: Fix linking problem with semihosting disabled
target/riscv32: Fix masking of physical address
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
is_host_cpu_intel() should return TRUE if the host cpu is Intel based; otherwise
it should return FALSE. Currently, it returns zero (FALSE) when the host CPU
is Intel and non-zero otherwise. Fix the function so that it agrees with
the intended semantics. Adjust the calling logic accordingly. RAPL needs Intel
host CPUs; if the host CPU is not Intel based, we should report an error.
Signed-off-by: Ani Sinha <anisinha@redhat.com>
Link: https://lore.kernel.org/r/20240903080004.33746-1-anisinha@redhat.com
[While touching the code remove too many spaces from the second part of the
error. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
kvm_filter_msr() is only used from the i386 kvm module. Make it static so that
it's easy for developers to understand that it's not used anywhere else.
Same for QEMURDMSRHandler, QEMUWRMSRHandler and KVMMSRHandlers definitions.
CC: philmd@linaro.org
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Ani Sinha <anisinha@redhat.com>
Link: https://lore.kernel.org/r/20240903140045.41167-1-anisinha@redhat.com
[Make struct unnamed. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Because the index value of the VMCS field encoding of FRED injected-event
data (one of the newly added VMCS fields for FRED transitions), 0x52, is
larger than any existing index value, raise the highest index value used
for any VMCS encoding to 0x52.
Because the index value of the VMCS field encoding of Secondary VM-exit
controls, 0x44, is larger than any existing index value, raise the highest
index value used for any VMCS encoding to 0x44.
Co-developed-by: Xin Li <xin3.li@intel.com>
Signed-off-by: Xin Li <xin3.li@intel.com>
Signed-off-by: Lei Wang <lei4.wang@intel.com>
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Link: https://lore.kernel.org/r/20240807081813.735158-4-xin@zytor.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add definitions of
1) VM-exit activate secondary controls bit
2) VM-entry load FRED bit
which are required to enable nested FRED.
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Link: https://lore.kernel.org/r/20240807081813.735158-3-xin@zytor.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
If QEMU has been configured with "--without-default-devices", the build
is currently failing with:
/usr/bin/ld: libqemu-riscv32-softmmu.a.p/target_riscv_cpu_helper.c.o:
in function `riscv_cpu_do_interrupt':
.../qemu/target/riscv/cpu_helper.c:1678:(.text+0x2214): undefined
reference to `do_common_semihosting'
We always want semihosting to be enabled if TCG is available, so change
the "imply" statements in the Kconfig file to "select", and make sure to
avoid calling into do_common_semihosting() if TCG is not available.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240906094858.718105-1-thuth@redhat.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
C doesn't extend the sign bit for unsigned types since there isn't a
sign bit to extend. This means a promotion of a u32 to a u64 results
in the upper 32 bits of the u64 being zero. If that result is then
used as a mask on another u64 the upper 32 bits will be cleared. rv32
physical addresses may be up to 34 bits wide, so we don't want to
clear the high bits while page aligning the address. The fix is to
use hwaddr for the mask, which, even on rv32, is 64 bits wide.
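An illustrative sketch of the promotion hazard (values made up):
    uint64_t pa  = 0x3ffff1234ULL;    /* 34-bit rv32 physical address */
    uint32_t m32 = ~(uint32_t)0xfff;  /* page mask held in a 32-bit type */
    hwaddr   m64 = ~(hwaddr)0xfff;    /* same mask held in 64-bit hwaddr */
    /* pa & m32 == 0xffff1000: zero-extension of m32 clears bits 32/33 */
    /* pa & m64 == 0x3ffff1000: the high bits are preserved */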
Fixes: af3fc195e3 ("target/riscv: Change the TLB page size depends on PMP entries.")
Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240909083241.43836-2-ajones@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The Svvptc extension describes a uarch that does not cache invalid TLB
entries: that's the case for qemu so there is nothing particular to
implement other than the introduction of this extension.
Since qemu already exposes Svvptc behaviour, let's enable it by default,
since it allows drastically reducing the number of sfence.vma instructions
emitted by S-mode.
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240828083651.203861-1-alexghiti@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
According to RISC-V Debug specification, the optional textra32 and
textra64 trigger CSRs can be used to configure additional matching
conditions for the triggers. For example, if the textra.MHSELECT field
is set to 4 (mcontext), this trigger will only match or fire if the low
bits of mcontext/hcontext equal textra.MHVALUE field.
This commit adds the aforementioned matching condition as common trigger
matching conditions. Currently, the only legal values of textra.MHSELECT
are 0 (ignore) and 4 (mcontext). When textra.MHSELECT is 0, the check
passes. When textra.MHSELECT is 4, we compare textra.MHVALUE with the
mcontext CSR. The remaining fields, such as textra.SBYTEMASK,
textra.SVALUE, and textra.SSELECT, are hardwired to zero for now. Thus,
we skip checking them here.
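A rough sketch of that condition (the shift/width macro names are
placeholders, not the actual definitions):
    switch (extract64(textra, MHSELECT_SHIFT, MHSELECT_LEN)) {
    case 0:                      /* ignore: no additional condition */
        return true;
    case 4:                      /* mcontext: compare the low bits */
        return extract64(textra, MHVALUE_SHIFT, MHVALUE_LEN) ==
               (env->mcontext & MHVALUE_MASK);
    default:                     /* other values are not currently legal */
        return false;
    }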
Signed-off-by: Alvin Chang <alvinga@andestech.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240826024657.262553-3-alvinga@andestech.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
This commit allows the program to write the textra trigger CSR for type 2, 3, and 6
triggers. In this preliminary patch, the textra.MHVALUE and the
textra.MHSELECT fields are allowed to be configured. Other fields, such
as textra.SBYTEMASK, textra.SVALUE, and textra.SSELECT, are hardwired to
zero for now.
For textra.MHSELECT field, the only legal values are 0 (ignore) and 4
(mcontext). Writing 1~3 into textra.MHSELECT will be changed to 0, and
writing 5~7 into textra.MHSELECT will be changed to 4. This behavior is
aligned with the RISC-V Spike simulator.
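A small sketch of that legalisation on write (macro names are
placeholders):
    uint64_t mhselect = extract64(new_val, MHSELECT_SHIFT, 3);
    mhselect &= 4;    /* 1..3 become 0 (ignore), 5..7 become 4 (mcontext) */
    textra = deposit64(textra, MHSELECT_SHIFT, 3, mhselect);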
Signed-off-by: Alvin Chang <alvinga@andestech.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240826024657.262553-2-alvinga@andestech.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
FCSR is part of the F extension. Print it to the log if the FPU option is enabled.
Signed-off-by: Maria Klauchek <m.klauchek@syntacore.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240902103433.18424-1-m.klauchek@syntacore.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
While the spec doesn't state it, setting timecmp to UINT64_MAX is
another way to stop a timer, as it's considered setting the next
timer event to occur at infinity. And, even if the time CSR does
eventually reach UINT64_MAX, the very next tick will bring it back to
zero, once again less than timecmp. For this reason
riscv_timer_write_timecmp() special cases UINT64_MAX. However, if a
previously set timecmp has not yet expired, then setting timecmp to
UINT64_MAX to disable / stop it would not work, as the special case
left the previous QEMU timer active, which would then still deliver
an interrupt at that previous timecmp time. Ensure the stopped timer
will not still deliver an interrupt by also deleting the QEMU timer
in the UINT64_MAX special case.
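A sketch of the shape of the fix (variable names are placeholders):
    if (timecmp == UINT64_MAX) {
        /* "Next event at infinity": besides not arming a new expiry,
         * also delete any QEMU timer armed by an earlier, smaller
         * timecmp so it cannot fire later. */
        timer_del(timer);
        return;
    }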
Fixes: ae0edf2188 ("target/riscv: No need to re-start QEMU timer when timecmp == UINT64_MAX")
Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240829084002.1805006-2-ajones@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Just as the hart bit setting of the AIA should be calculated as
ceil(log2(max_hart_id + 1)), the group bit setting should be
calculated as ceil(log2(max_group_id + 1)). The hart bits are
implemented by passing max_hart_id to find_last_bit() and adding
one to the result. Do the same for the group bit setting.
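As a sketch (assuming the maximum ids are at least 1), both settings
then follow the same pattern:
    /* ceil(log2(max_id + 1)) == index of the highest set bit, plus one */
    hart_bits  = find_last_bit(&max_hart_id,  BITS_PER_LONG) + 1;
    group_bits = find_last_bit(&max_group_id, BITS_PER_LONG) + 1;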
Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20240821075040.498945-2-ajones@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The OpenTitan Ibex CPU now supports the Zba, Zbb, Zbc
and Zbs bit-manipulation sub-extensions ratified in
v1.0.0 of the RISC-V Bit-Manipulation ISA Extension [1], so let's enable
them in QEMU as well.
1: https://github.com/lowRISC/opentitan/pull/9748
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20240823003231.3522113-1-alistair.francis@wdc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
za64rs requires priv 1.12 when enabled by priv 1.11.
This fixes the annoying warning:
warning: disabling za64rs extension for hart 0x00000000 because privilege spec version does not match
on priv 1.11 CPUs.
Fixes: 68c9e54bea ("target/riscv: do not enable all named features by default")
Signed-off-by: Vladimir Isaev <vladimir.isaev@syntacore.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20240823063431.17474-1-vladimir.isaev@syntacore.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Gitlab issue [1] reports a misleading error when trying to run a 'rv64'
cpu with 'zfinx' and without 'f':
$ ./build/qemu-system-riscv64 -nographic -M virt -cpu rv64,zfinx=true,f=false
qemu-system-riscv64: Zfinx cannot be supported together with F extension
The user explicitly disabled F and the error message mentions a conflict
with Zfinx and F.
The problem isn't the error reporting, but the logic used when applying
the implied ZFA rule, which enables RVF unconditionally without honoring
the user's choice (i.e. keeping F disabled).
Change cpu_enable_implied_rule() to check whether the user deliberately
disabled a MISA bit. In that case we should neither re-enable the bit
nor apply any implied rules related to it, as sketched below.
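A rough sketch of the check (misa_ext_is_user_set() is an assumed helper
name for querying what the user set on the command line):
    /* Skip a rule whose MISA bit the user explicitly turned off. */
    if (misa_ext_is_user_set(rule->ext) && !(env->misa_ext & rule->ext)) {
        return;   /* e.g. f=false: don't re-enable F, don't imply from it */
    }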
After this change the error message now shows:
$ ./build/qemu-system-riscv64 -nographic -M virt -cpu rv64,zfinx=true,f=false
qemu-system-riscv64: Zfa extension requires F extension
Disabling 'zfa':
$ ./build/qemu-system-riscv64 -nographic -M virt -cpu rv64,zfinx=true,f=false,zfa=false
qemu-system-riscv64: D extension requires F extension
And finally after disabling 'd':
$ ./build/qemu-system-riscv64 -nographic -M virt -cpu rv64,zfinx=true,f=false,zfa=false,d=false
(OpenSBI boots ...)
[1] https://gitlab.com/qemu-project/qemu/-/issues/2486
Cc: Frank Chang <frank.chang@sifive.com>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2486
Fixes: 047da861f9 ("target/riscv: Introduce extension implied rule helpers")
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240824173338.316666-1-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
RVV spec allows implementations to set vl with values within
[ceil(AVL/2),VLMAX] when VLMAX < AVL < 2*VLMAX. This commit adds a
property "rvv_vl_half_avl" to enable setting vl = ceil(AVL/2). This
behavior helps identify compiler issues and bugs.
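A sketch of the added behaviour (the cfg field follows the property name
above; the surrounding vl computation is simplified):
    if (cpu->cfg.rvv_vl_half_avl && vlmax < avl && avl < 2 * vlmax) {
        env->vl = (avl + 1) >> 1;    /* vl = ceil(AVL/2), still <= VLMAX */
    } else {
        env->vl = MIN(avl, vlmax);
    }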
Signed-off-by: Jason Chien <jason.chien@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-ID: <20240722175004.23666-1-jason.chien@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
target_ulong is typedef'ed as a 32-bit integer when building the
qemu-system-arm target, and this is smaller than the size of an
intermediate physical address when LPAE is being used.
Given that Linux may place leaf level user page tables in high memory
when built for LPAE, the kernel will crash with an external abort as
soon as it enters user space when running with more than ~3 GiB of
system RAM.
So replace target_ulong with vaddr in places where it may carry an
address value that is not representable in 32 bits.
Fixes: f3639a64f6 ("target/arm: Use softmmu tlbs for page table walking")
Cc: qemu-stable@nongnu.org
Reported-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Message-id: 20240927071051.1444768-1-ardb+git@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch is part of a series that moves towards a consistent use of
g_assert_not_reached() rather than an ad hoc mix of different
assertion mechanisms.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Message-ID: <20240919044641.386068-23-pierrick.bouvier@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
This patch is part of a series that moves towards a consistent use of
g_assert_not_reached() rather than an ad hoc mix of different
assertion mechanisms.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Message-ID: <20240919044641.386068-22-pierrick.bouvier@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
This patch is part of a series that moves towards a consistent use of
g_assert_not_reached() rather than an ad hoc mix of different
assertion mechanisms.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20240919044641.386068-15-pierrick.bouvier@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
This patch is part of a series that moves towards a consistent use of
g_assert_not_reached() rather than an ad hoc mix of different
assertion mechanisms.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Message-ID: <20240919044641.386068-7-pierrick.bouvier@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
The XT check for the lxvx/stxvx instructions is currently
inverted. This was introduced during the move to decodetree.
From the ISA:
Chapter 7. Vector-Scalar Extension Facility
Load VSX Vector Indexed X-form
lxvx XT,RA,RB
if TX=0 & MSR.VSX=0 then VSX_Unavailable()
if TX=1 & MSR.VEC=0 then Vector_Unavailable()
...
Let XT be the value 32×TX + T.
The code currently does the opposite:
    if (paired || a->rt >= 32) {
        REQUIRE_VSX(ctx);
    } else {
        REQUIRE_VECTOR(ctx);
    }
This was already fixed for lxv/stxv at commit "2cc0e449d1 (target/ppc:
Fix lxv/stxv MSR facility check)", but the indexed forms were missed.
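The corrected check would therefore mirror the earlier lxv/stxv fix,
along the lines of (sketch, not the literal patch):
    if (paired || a->rt < 32) {
        REQUIRE_VSX(ctx);       /* TX=0: VSRs 0..31 need MSR.VSX */
    } else {
        REQUIRE_VECTOR(ctx);    /* TX=1: VSRs 32..63 (the VRs) need MSR.VEC */
    }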
Cc: qemu-stable@nongnu.org
Fixes: 70426b5bb7 ("target/ppc: moved stxvx and lxvx from legacy to decodtree")
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Claudio Fontana <cfontana@suse.de>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Message-ID: <20240911141651.6914-1-farosas@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The 'GPL-2.0+' license identifier has been deprecated since license
list version 2.0rc2 [1] and replaced by the 'GPL-2.0-or-later' [2]
tag.
[1] https://spdx.org/licenses/GPL-2.0+.html
[2] https://spdx.org/licenses/GPL-2.0-or-later.html
Mechanical patch running:
$ sed -i -e s/GPL-2.0+/GPL-2.0-or-later/ \
$(git grep -lP 'SPDX-License-Identifier: \W+GPL-2.0\+[ $]' \
| egrep -v '^linux-headers|^include/standard-headers')
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
The 'LGPL-2.0+' license identifier has been deprecated since license
list version 2.0rc2 [1] and replaced by the 'LGPL-2.0-or-later' [2]
tag.
[1] https://spdx.org/licenses/LGPL-2.0+.html
[2] https://spdx.org/licenses/LGPL-2.0-or-later.html
Mechanical patch running:
$ sed -i -e s/LGPL-2.0+/LGPL-2.0-or-later/ \
$(git grep -l 'SPDX-License-Identifier: LGPL-2.0+$')
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Since commits 139c1837db ("meson: rename included C source files
to .c.inc") and 0979ed017f ("meson: rename .inc.h files to .h.inc"),
the QEMU standard procedure for included header files is to use *.h.inc.
Besides, since commit 6a0057aa22 ("docs/devel: make a statement
about includes") this is documented in the Coding Style:
If you do use template header files they should be named with
the ``.c.inc`` or ``.h.inc`` suffix to make it clear they are
being included for expansion.
Therefore rename "macros.inc" as "macros.h.inc".
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Brian Cain <bcain@quicinc.com>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
In many cases, <zlib.h> is only included for the crc32 function,
and in some of them there's a comment saying so, but phrased in
different ways. In one place (hw/net/rtl8139.c), another #include
was added between the comment and the <zlib.h> include.
Put all such comments on the same line as the #include, make them
consistent, and also add a few missing comments, including in
hw/nvram/mac_nvram.c which uses adler32 instead.
There are no code changes.
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
The Neoverse-V1 TRM is a bit confused about the layout of the
ID_AA64ISAR1_EL1 register, and so its table 3-6 has the wrong value
for this ID register. Trust instead section 3.2.74's list of which
fields are set.
This means that we stop incorrectly reporting FEAT_XS as present, and
now report the presence of FEAT_BF16.
Cc: qemu-stable@nongnu.org
Reported-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240917161337.3012188-1-peter.maydell@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-30-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-29-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-28-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
While these functions really do return a 32-bit value,
widening the return type means that we need to do less
marshalling between TCG types.
Remove NeonGenNarrowEnvFn typedef; add NeonGenOne64OpEnvFn.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240912024114.1097832-27-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-26-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This includes SHL and SLI.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-25-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This includes SSHR, USHR, SSRA, USRA, SRSHR, URSHR,
SRSRA, URSRA, SRI.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-23-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There isn't a lot of commonality among the different paths of
handle_shri_with_rndacc. Split them out to separate functions,
which will be usable during the decodetree conversion.
Simplify 64-bit rounding operations to not require double-word arithmetic.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-22-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We always pass the same value for round; compute it
within common code.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Combine the right shift with the extension via
the tcg extract operations.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This includes SHL and SLI.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This includes SSHR, USHR, SSRA, USRA, SRSHR, URSHR, SRSRA, URSRA, SRI.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Handle the two special cases within these new
functions instead of higher in the call stack.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use simple shift and add instead of ctpop, ctz, shift and mask.
Unlike SVE, there is no predicate to disable elements.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The extract2 tcg op performs the same operation
as the do_ext64 function.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Instead of cmp+and or cmp+andc, use cmpsel. This will
be better for hosts that use predicate registers for cmp.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Instead of cmp+and or cmp+andc, use cmpsel. This will
be better for hosts that use predicate registers for cmp.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Instead of copying a constant into a temporary with dupi,
use a vector constant directly.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Instead of copying a constant into a temporary with dupi,
use a vector constant directly.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The 'any' CPU is deprecated since commit f57d5f8004
("target/riscv: deprecate the 'any' CPU type"). Users
are better off using the default CPUs or the 'max' CPU.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20240724130717.95629-1-philmd@linaro.org>
The CRIS target is deprecated since v9.0 (commit c7bbef4023
"docs: mark CRIS support as deprecated").
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
Message-ID: <20240904143603.52934-14-philmd@linaro.org>
This patch allows for easier manipulation of the cache description
register, CCSIDR, which is helpful for testing as well. Currently,
the numbers are hard-coded and prone to errors.
Therefore, this patch adds a wrapper for the different types of CPUs
available in tcg to describe caches. A single function `make_ccsidr`
supports both cases via a FORMAT parameter that can be LEGACY or
CCIDX, which determines the layout of the register.
For CCSIDR register, 32 bit version follows specification [1].
Conversely, 64 bit version follows specification [2].
[1] B4.1.19, ARM Architecture Reference Manual ARMv7-A and ARMv7-R
edition, https://developer.arm.com/documentation/ddi0406
[2] D23.2.29, ARM Architecture Reference Manual for A-profile Architecture,
https://developer.arm.com/documentation/ddi0487/latest/
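A rough sketch of what such a wrapper can look like (the signature and
enum here are assumptions; the field layouts follow [1] and [2]):
    typedef enum { LEGACY, CCIDX } CCSIDRFormat;
    uint64_t make_ccsidr(CCSIDRFormat fmt, unsigned assoc,
                         unsigned linesize, unsigned cachesize)
    {
        unsigned lg_line = ctz32(linesize) - 4;   /* log2(words) - 2 */
        unsigned sets = cachesize / (assoc * linesize);
        if (fmt == LEGACY) {  /* [1]: NumSets[27:13] Assoc[12:3] Line[2:0] */
            return ((sets - 1) << 13) | ((assoc - 1) << 3) | lg_line;
        }
        /* CCIDX [2]: NumSets[55:32] Assoc[23:3] Line[2:0] */
        return ((uint64_t)(sets - 1) << 32) | ((assoc - 1) << 3) | lg_line;
    }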
Signed-off-by: Alireza Sanaee <alireza.sanaee@huawei.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240903144550.280-1-alireza.sanaee@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch's main focus is to use the previously added
hvf_get_physical_address_range to inform VM creation
about the IPA size we need for the VM, so we can extend
the default 36b IPA size and support VMs with 64+GB of
RAM. This is done by freezing the memory map, computing
the highest GPA and then (depending on if the platform
supports an IPA size that large) telling the kernel to
use a size >= that for the VM. In pursuit of this a couple of
things related to how we handle the physical address range
we expose to guests were altered; for an explanation of
what we were doing before:
Today, to get the IPA size we were reading id_aa64mmfr0_el1's
PARange field from a newly made vcpu. Unfortunately, HVF just
returns the host's PARange directly for the initial value and
not the IPA size that will actually back the VM, so we believe
we have much more address space than we actually do.
Starting in macOS 13.0 some APIs were introduced to be able to
query the maximum IPA size the kernel supports, and to set the IPA
size for a given VM. However, this still has a couple of issues
on < macOS 15. Up until macOS 15 (and if the hardware supported
it) the max IPA size was 39 bits which is not a valid PARange
value, so we can't clamp down what we advertise in the vcpu's
id_aa64mmfr0_el1 to our IPA size. Starting in macOS 15 however,
the maximum IPA size is 40 bits (if it's supported in the hardware
as well) which is also a valid PARange value so we can set our IPA
size to the maximum as well as clamp down the PARange we advertise
to the guest. This allows VMs with 64+ GB of RAM and should fix the
oddness of the PARange situation as well.
Signed-off-by: Danny Canter <danny_canter@apple.com>
Message-id: 20240828111552.93482-4-danny_canter@apple.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is preliminary work to split up hv_vm_create
logic per platform so we can support creating VMs
with > 64GB of RAM on Apple Silicon machines. This
is done via ARM HVF's hv_vm_config_create() (and
other APIs that modify this config that will be
coming in future patches). This should have no
behavioral difference at all as hv_vm_config_create()
just assigns the same default values as if you just
passed NULL to the function.
Signed-off-by: Danny Canter <danny_canter@apple.com>
Message-id: 20240828111552.93482-3-danny_canter@apple.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Change the data type of the ioctl _request_ argument from 'int' to
'unsigned long' for the various accel/kvm functions which are
essentially wrappers around the ioctl() syscall.
The correct type for ioctl()'s 'request' argument is confused:
* POSIX defines the request argument as 'int'
* glibc uses 'unsigned long' in the prototype in sys/ioctl.h
* the glibc info documentation uses 'int'
* the Linux manpage uses 'unsigned long'
* the Linux implementation of the syscall uses 'unsigned int'
If we wrap ioctl() with another function which uses 'int' as the
type for the request argument, then requests with the 0x8000_0000
bit set will be sign-extended when the 'int' is cast to
'unsigned long' for the call to ioctl().
On x86_64 one such example is the KVM_IRQ_LINE_STATUS request.
Requests with the _IOC_READ direction bit set will have the high
bit set.
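As a small illustrative sketch of the hazard (the request value is made
up):
    int req = (int)0x8008ae67u;              /* a request with bit 31 set */
    unsigned long arg = (unsigned long)req;  /* sign-extends on LP64 hosts */
    /* arg is 0xffffffff8008ae67 here, not 0x000000008008ae67 */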
Fortunately the Linux kernel truncates the upper 32 bits of the request
on 64bit machines (because it uses 'unsigned int', and see also Linus
Torvalds' comments in
https://sourceware.org/bugzilla/show_bug.cgi?id=14362 )
so this doesn't cause active problems for us. However it is more
consistent to follow the glibc ioctl() prototype when we define
functions that are essentially wrappers around ioctl().
This resolves a Coverity issue where it points out that in
kvm_get_xsave() we assign a value (KVM_GET_XSAVE or KVM_GET_XSAVE2)
to an 'int' variable which can't hold it without overflow.
Resolves: Coverity CID 1547759
Signed-off-by: Johannes Stoelp <johannes.stoelp@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20240815122747.3053871-1-peter.maydell@linaro.org
[PMM: Rebased patch, adjusted commit message, included note about
Coverity fix, updated the type of the local var in kvm_get_xsave,
updated the comment in the KVMState struct definition]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The Alpha and HPPA CPU class structs include a 'parent_reset'
field which is never used; delete them.
(These targets don't seem to implement reset at all; if they did they
should do it using the three-phase reset mechanism, which uses a
'ResettablePhases parent_phases' field instead of the old
'DeviceReset parent_reset' field.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240830145812.1967042-6-peter.maydell@linaro.org
Convert the s390 CPU to the Resettable interface. This is slightly
more involved than the other CPU types were (see commits
9130cade5fc22..d66e64dd006df) because S390 has its own set of
different kinds of reset with different behaviours that it needs to
trigger.
We handle this by adding these reset types to the Resettable
ResetType enum. Now instead of having an underlying implementation
of reset that is s390-specific and which might be called either
directly or via the DeviceClass::reset method, we can implement only
the Resettable hold phase method, and have the places that need to
trigger an s390-specific reset type do so by calling
resettable_reset().
The other option would have been to smuggle in the s390 reset
type via, for instance, a field in the CPU state that we set
in s390_do_cpu_initial_reset() etc and then examined in the
reset method, but doing it this way seems cleaner.
The motivation for this change is that this is the last caller
of the legacy device_class_set_parent_reset() function, and
removing that will let us clean up some glue code that we added
for the transition to three-phase reset.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Nina Schoetterl-Glausch <nsg@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Thomas Huth <thuth@redhat.com>
Message-id: 20240830145812.1967042-4-peter.maydell@linaro.org
Merge tag 'pull-loongarch-20240912' of https://gitlab.com/gaosong/qemu into staging
pull-loongarch-20240912
# -----BEGIN PGP SIGNATURE-----
#
# iLMEAAEKAB0WIQS4/x2g0v3LLaCcbCxAov/yOSY+3wUCZuLmLgAKCRBAov/yOSY+
# 38JNA/9UdorT4a7H+H5PhNeEu2EHDgMPb7+gxyYKw03mOG2MB3KFzkK0LRQShaPt
# ADJmIqAFlc9SJLkbo6ELMDl+ZnUU9OdC/P6YU5iBG71zx1PonMwuyJTWhlBwxWcG
# +OB8aDBUALoe/Gb4za152I84cR08g58TgLnXNfEkCM8lnPfAug==
# =Plwu
# -----END PGP SIGNATURE-----
# gpg: Signature made Thu 12 Sep 2024 14:01:34 BST
# gpg: using RSA key B8FF1DA0D2FDCB2DA09C6C2C40A2FFF239263EDF
# gpg: Good signature from "Song Gao <m17746591750@163.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: B8FF 1DA0 D2FD CB2D A09C 6C2C 40A2 FFF2 3926 3EDF
* tag 'pull-loongarch-20240912' of https://gitlab.com/gaosong/qemu:
hw/loongarch: Add acpi SPCR table support
hw/loongarch: virt: pass random seed to fdt
hw/loongarch: virt: support up to 4 serial ports
target/loongarch: Support QMP dump-guest-memory
target/loongarch/kvm: Add vCPU reset function
hw/loongarch: Remove default enable with VIRTIO_VGA device
target/loongarch: Add compatible support about VM reboot
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add the support needed for creating prstatus ELF notes. This allows
us to use QMP dump-guest-memory.
For now the LoongArch ELF notes only include the general (prstatus)
notes; LSX and LASX are not supported, since this is mainly used to
dump guest memory.
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Tested-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20240822065245.2286214-1-maobibo@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
KVM provides the interface KVM_REG_LOONGARCH_VCPU_RESET to reset a
vCPU; it can be used to clear the internal vCPU state kept by the KVM
kernel. A vCPU reset function is added here for KVM mode.
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20240822022827.2273534-1-maobibo@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
With the edk2-stable202408 LoongArch UEFI BIOS, the CSR PGD register is
set only if its value is zero for the boot CPU, which causes a reboot
issue. Since the CSR PGD register is changed by the Linux kernel, the
UEFI BIOS cannot rely on it.
Add a workaround to clear the TLB-related CSR registers in
loongarch_cpu_reset_hold(), so that the VM can reboot with the
edk2-stable202408 UEFI BIOS.
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20240827035807.3326293-1-maobibo@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Model fp_exception state, in which only fp stores are allowed
until such time as the FQ has been flushed.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Carl Hauser <chauser@pullman.com>
An invalid encoding of addr should raise TT_ILL_INSN, so check it
before the supervisor check, which might raise TT_PRIV_INSN.
Clear QNE after execution.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2340
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Carl Hauser <chauser@pullman.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Carl Hauser <chauser@pullman.com>
Implement a single instruction floating point queue,
populated while delivering an fp exception.
Signed-off-by: Carl Hauser <chauser@pullman.com>
[rth: Split from a larger patch]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Carl Hauser <chauser@pullman.com>
Add support for, and migrate, a single-entry fp
instruction queue for sparc32.
Signed-off-by: Carl Hauser <chauser@pullman.com>
[rth: Split from a larger patch;
adjust representation with union;
add migration state]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Carl Hauser <chauser@pullman.com>
Merge tag 'pull-testing-gdbstub-oct-100924-1' of https://gitlab.com/stsquad/qemu into staging
testing and gdbstub updates:
- remove docker-armel-cross
- update i686 and mipsel images to bookworm
- use docker-all-test-cross for mips64le tests
- fix duplicated line in docs
- update gitlab-runner ansible script
- support MTE in gdbstub for system mode
# -----BEGIN PGP SIGNATURE-----
#
# iQEzBAABCgAdFiEEZoWumedRZ7yvyN81+9DbCVqeKkQFAmbgye8ACgkQ+9DbCVqe
# KkTesQf/WSTYAelzJWlEo0EPg5agokephfza4vdmweDujOT8MYPF9qxfsxoiTVA8
# GTtTOod9iqmY/4/EPKIqUtZH38oaX5h9on2FhSssOMy+N4lUADJ+CcHHMSj4BuUt
# jTXDSa9e5aj0m/yqg2PjF8U12Sygf7dKJturGLOWoWR5qa3xpQ2a6c3CkfxO3RQK
# yTBfIZk47iOrVvEX8chsRzpkpiXY6/S5hkZZwcqbXcUMKH2s0po9Yg031vE3yN+g
# kxJ7/mFNl49E/fqYdRahhyBDORlltCglCHsacxxa/4a216wOsNKyV3QLCJMjq8yO
# 3/SPu0p+UouSFcASwTUt5XIo0G0TcA==
# =7W1s
# -----END PGP SIGNATURE-----
# gpg: Signature made Tue 10 Sep 2024 23:36:31 BST
# gpg: using RSA key 6685AE99E75167BCAFC8DF35FBD0DB095A9E2A44
# gpg: Good signature from "Alex Bennée (Master Work Key) <alex.bennee@linaro.org>" [full]
# Primary key fingerprint: 6685 AE99 E751 67BC AFC8 DF35 FBD0 DB09 5A9E 2A44
* tag 'pull-testing-gdbstub-oct-100924-1' of https://gitlab.com/stsquad/qemu:
tests/tcg/aarch64: Extend MTE gdbstub tests to system mode
tests/tcg/aarch64: Improve linker script organization
tests/guest-debug: Support passing arguments to the GDB test script
gdbstub: Add support for MTE in system mode
gdbstub: Use specific MMU index when probing MTE addresses
scripts/ci: update the gitlab-runner playbook
docs/devel: fix duplicate line
tests/docker: use debian-all-test-cross for mips64el tests
tests/docker: update debian i686 and mipsel images to bookworm
tests/docker: remove debian-armel-cross
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This commit makes handle_q_memtag, handle_q_isaddresstagged, and
handle_Q_memtag stubs build for system mode, allowing all GDB
'memory-tag' subcommands to work with QEMU gdbstub on aarch64 system
mode.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/620
Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240906143316.657436-3-gustavo.romero@linaro.org>
[AJB: add #ifdef CONFIG_TCG guards]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20240910173900.4154726-8-alex.bennee@linaro.org>
Use cpu_mmu_index() to determine the specific translation regime (MMU
index) before probing addresses using allocation_tag_mem_probe().
Currently, the MMU index is hardcoded to 0 and only works for user mode.
By obtaining the specific MMU index according to the translation regime,
future use of the stubs relying on allocation_tag_mem_probe in other
regimes will be possible, like in EL1.
This commit also changes the ptr_size value passed to
allocation_tag_mem_probe() from 8 to 1. The ptr_size parameter actually
represents the number of bytes in the memory access (which can be as
small as 1 byte), rather than the number of bits used in the address
space pointed to by ptr.
Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240906143316.657436-2-gustavo.romero@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20240910173900.4154726-7-alex.bennee@linaro.org>
QAPI's 'prefix' feature can make the connection between enumeration
type and its constants less than obvious. It's best used with
restraint.
QCryptoHashAlgorithm has a 'prefix' that overrides the generated
enumeration constants' prefix to QCRYPTO_HASH_ALG.
We could simply drop 'prefix', but then the prefix becomes
QCRYPTO_HASH_ALGORITHM, which is rather long.
We could additionally rename the type to QCryptoHashAlg, but I think
the abbreviation "alg" is less than clear.
Rename the type to QCryptoHashAlgo instead. The prefix becomes
QCRYPTO_HASH_ALGO.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Daniel P. Berrangé <berrange@redhat.com>
Message-ID: <20240904111836.3273842-12-armbru@redhat.com>
[Conflicts with merge commit 7bbadc60b5 resolved]
QAPI's 'prefix' feature can make the connection between enumeration
type and its constants less than obvious. It's best used with
restraint.
CpuS390Entitlement has a 'prefix' to change the generated enumeration
constants' prefix from CPU_S390_ENTITLEMENT to S390_CPU_ENTITLEMENT.
Rename the type to S390CpuEntitlement, so that 'prefix' is not needed.
Likewise change CpuS390Polarization to S390CpuPolarization, and
CpuS390State to S390CpuState.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Acked-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20240904111836.3273842-10-armbru@redhat.com>
Merge tag 'migration-20240904-pull-request' of https://gitlab.com/farosas/qemu into staging
Migration pull request
- Steve's cleanup of unused variable
- Peter Maydell's fixes for several leaks in migration-test
- Fabiano's flexibilization of multifd data structures for device
state migration
- Arman Nabiev's fix for ppc e500 migration
- Thomas' fix for migration-test vs. --without-default-devices
# -----BEGIN PGP SIGNATURE-----
#
# iQJEBAABCAAuFiEEqhtIsKIjJqWkw2TPx5jcdBvsMZ0FAmbYVXwQHGZhcm9zYXNA
# c3VzZS5kZQAKCRDHmNx0G+wxnRucEAC1vo046UGdUmbb4PaF5vKAg97io6RB2nrH
# HMz56Yc0AcAKRUGwe2Z80e2jY8B6zi8Ha8b9l7cVsej095eGCF+tINIL4wRX4lHm
# alDY/LkhuqjE5g5c/DaeTztyBOFLvdWHPU5eJyDOC9r7kSlnUcL1gAslH23b8uL0
# xvhPVKaTWjGIzNL1q/XfBr1WgRGqfD6dYb32HJDTq85yOnUT5sEr55aoEEu0euKh
# MYbXPmi5AMbrp8nP21kzUopX8iYERRdoKwhF0ZssciGi/qJVevH70tNdbDEQSxyp
# +vtP54TnL3LrzD4uY5Snng9zT9h0QrZujY79OEcxu20U0s29OQaudWkIjp7yLLUv
# UnPZHS+bIyaS53DdpV94GKGGBX1wrjGC/sn8eGYzmb2yMlMjLTBoE8L5r9cadshX
# XTeF4MtKGqaS3xDM2fIgACHHFl6qr/l0nENspv0raFzpf9Jx/WbpekghvTuWN6/B
# pZHnoOTNiAqXS/Rnyy829vsQ0Pw4hi6wx79Z73RP+35ubZTgTmOsQx9f2FjuEh6k
# JS+q9k4VJ+nntUWsYn4GS1Jlt+FXJ2hfzNj1NNFN4xLT1oioc6pCHsQyV7SBArB1
# ml2zYyfKCTC3riIRhcv/ew6OcKbhHcPFOpd/v0y40LO3mx8S0LZnUWXkcrl3XIZS
# Mj5CBdlFgA==
# =SRN4
# -----END PGP SIGNATURE-----
# gpg: Signature made Wed 04 Sep 2024 13:41:32 BST
# gpg: using RSA key AA1B48B0A22326A5A4C364CFC798DC741BEC319D
# gpg: issuer "farosas@suse.de"
# gpg: Good signature from "Fabiano Rosas <farosas@suse.de>" [unknown]
# gpg: aka "Fabiano Almeida Rosas <fabiano.rosas@suse.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: AA1B 48B0 A223 26A5 A4C3 64CF C798 DC74 1BEC 319D
* tag 'migration-20240904-pull-request' of https://gitlab.com/farosas/qemu: (34 commits)
tests/qtest/migration: Add a check for the availability of the "pc" machine
target/ppc: Fix migration of CPUs with TLB_EMB TLB type
migration/multifd: Add documentation for multifd methods
migration/multifd: Add a couple of asserts for p->iov
migration/multifd: Fix p->iov leak in multifd-uadk.c
migration/multifd: Stop changing the packet on recv side
migration/multifd: Make MultiFDMethods const
migration/multifd: Move nocomp code into multifd-nocomp.c
migration/multifd: Register nocomp ops dynamically
migration/multifd: Standardize on multifd ops names
migration/multifd: Allow multifd sync without flush
migration/multifd: Replace multifd_send_state->pages with client data
migration/multifd: Don't send ram data during SYNC
migration/multifd: Isolate ram pages packet data
migration/multifd: Remove total pages tracing
migration/multifd: Move pages accounting into multifd_send_zero_page_detect()
migration/multifd: Replace p->pages with an union pointer
migration/multifd: Make MultiFDPages_t:offset a flexible array member
migration/multifd: Introduce MultiFDSendData
migration/multifd: Pass in MultiFDPages_t to file_write_ramblock_iov
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In vfp.decode we have the names of the VFNMA and VFNMS instructions
the wrong way around. The architecture says that bit 6 is the 'op'
bit, which is 1 for VFNMA and 0 for VFNMS, but we label these two
lines of decode the other way around. This doesn't cause any
user-visible problem because in the handling of these functions in
translate-vfp.c we give VFNMA the behaviour specified for VFNMS and
vice-versa, but it's confusing when reading the code.
Switch the names of the VFP VFNMA and VFNMS instructions in
the decode file and flip the behaviour also.
NB: the instructions VFMA and VFMS *are* decoded with op=0 for
VFMA and op=1 for VFMS; the confusion probably arose because
we assumed VFNMA and VFNMS to be the same way around.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2536
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240830152156.2046590-1-peter.maydell@linaro.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Now that we've implemented the required behaviour for FEAT_EBF16, we
can enable it for the "max" CPU type, list it in our documentation,
and delete a TODO comment about it being missing.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Implement the FPCR.EBF=1 semantics for bfdotadd() operations:
* is_ebf() sets up fpst and fpst_odd
* bfdotadd_ebf() implements the fused paired-multiply-and-add
operation that we need
The paired-multiply-and-add is similar to f16_dotadd() and
we use the same trick here as in that function, but the inputs
here are bfloat16 rather than float16.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
We use bfdotadd() in four callsites for various helper functions. Currently
this all assumes that we have the FPCR.EBF=0 semantics. For FPCR.EBF=1
we will need to:
* call a different routine to bfdotadd() because we need to do a
fused multiply-add rather than separate multiply and add steps
* use a different float_status that honours the FPCR rounding mode
and denormal-flushing fields
* pass in an extra float_status that has been set up to perform
round-to-odd rounding
To prepare for this, refactor all the callsites so that instead of
    for (...) {
        x = bfdotadd(...);
    }
they are:
    float_status fpst, fpst_odd;
    if (is_ebf(env, &fpst, &fpst_odd)) {
        for (...) {
            x = bfdotadd_ebf(..., &fpst, &fpst_odd);
        }
    } else {
        for (...) {
            x = bfdotadd(..., &fpst);
        }
    }
For the moment the is_ebf() function always returns false, sets up
fpst for EBF=0 semantics and never sets up fpst_odd; bfdotadd_ebf()
will assert if called. We'll fill in the handling for EBF=1 in the
next commit.
This change should be a zero-behaviour-change refactor.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Pass the env pointer through to the gvec_bfmmla helper,
so we can use it to add support for FEAT_EBF16.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Pass the env pointer through to the gvec_bfdot_idx helper,
so we can use it to add support for FEAT_EBF16.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Pass the env pointer through to the gvec_bfdot helper,
so we can use it to add support for FEAT_EBF16.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
To implement the FEAT_EBF16 semantics, we are going to need
the CPUARMState env pointer in every helper function which calls
bfdotadd().
Pass the env pointer through from generated code to the sme_bfmopa
helper. (We'll add the code that uses it when we've adjusted
all the helpers to have access to the env pointer.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
FEAT_EBF16 adds one new bit to the FPCR floating point control
register. Allow this bit to be read and written when the ID
registers indicate the presence of the feature.
Note that because this new bit is not in FPSCR_FPCR_MASK the bit is
not visible in the AArch32 FPSCR, and FPSCR writes do not affect it.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
The linux-user hppa target crashes randomly for me since commit
081a0ed188 ("target/hppa: Do not mask in copy_iaoq_entry").
That commit dropped the masking of the IAOQ addresses while copying them
from other registers and instead keeps them with all 64 bits up until
the full gva is formed with the help of hppa_form_gva_psw().
So, when running in linux-user mode on an emulated 64-bit CPU, we need
to mask to a 32-bit address space at the very end in hppa_form_gva_psw()
if the PSW-W flag isn't set (which is the case for linux-user on hppa).
Fixes: 081a0ed188 ("target/hppa: Do not mask in copy_iaoq_entry")
Cc: qemu-stable@nongnu.org # v9.1+
Signed-off-by: Helge Deller <deller@gmx.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
While adding hppa64 support, the psw_v variable got extended from 32 to 64
bits. So, when packaging the PSW-V bit from the psw_v variable for interrupt
processing, check bit 31 instead of the 63rd (sign) bit.
This fixes a hard to find Linux kernel boot issue where the loss of the PSW-V
bit due to an ITLB interruption in the middle of a series of ds/addc
instructions (from the divU milicode library) generated the wrong division
result and thus triggered a Linux kernel crash.
Link: https://lore.kernel.org/lkml/718b8afe-222f-4b3a-96d3-93af0e4ceff1@roeck-us.net/
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Helge Deller <deller@gmx.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Fixes: 931adff314 ("target/hppa: Update cpu_hppa_get/put_psw for hppa64")
Cc: qemu-stable@nongnu.org # v8.2+
In vmstate_tlbemb a cut-and-paste error meant we gave
this vmstate subsection the same "cpu/tlb6xx" name as
the vmstate_tlb6xx subsection. This breaks migration load
for any CPU using the TLB_EMB CPU type, because when we
see the "tlb6xx" name in the incoming data we try to
interpret it as a vmstate_tlb6xx subsection, which it
isn't the right format for:
$ qemu-system-ppc -drive
if=none,format=qcow2,file=/home/petmay01/test-images/virt/dummy.qcow2
-monitor stdio -M bamboo
QEMU 9.0.92 monitor - type 'help' for more information
(qemu) savevm foo
(qemu) loadvm foo
Missing section footer for cpu
Error: Error -22 while loading VM state
Correct the incorrect vmstate section name. Since migration
for these CPU types was completely broken before, we don't
need to care that this is a migration compatibility break.
This affects the PPC 405, 440, 460 and e200 CPU families.
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2522
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Arman Nabiev <nabiev.arman13@gmail.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
The two limit_max variables represent size - 1, just like the
encoding in the GDT, thus the 'old' access was off by one.
Access the minimal size of the new tss: the complete tss contains
the iopb, which may be a larger block than the access api expects,
and is irrelevant because the iopb is not accessed during the
switch itself.
Fixes: 8b13106508 ("target/i386/tcg: use X86Access for TSS access")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2511
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240819074052.207783-1-richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
BLSI has inverted semantics for C as compared to the other two
BMI1 instructions, BLSMSK and BLSR. Introduce CC_OP_BLSI* for
this purpose.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2175
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20240801075845.573075-3-richard.henderson@linaro.org>
Split out the TCG_COND_TSTEQ logic from gen_prepare_eflags_z,
and use it for CC_OP_BMILG* as well. Prepare for requiring
both zero and non-zero senses.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240801075845.573075-2-richard.henderson@linaro.org>
Merge tag 'hw-misc-20240820' of https://github.com/philmd/qemu into staging
Various fixes
- Null pointer dereference in IPI IOCSR (Jiaxun)
- Correct '-smbios type=4' in man page (Heinrich)
- Use correct MMU index in MIPS get_pte (Phil)
- Reset MPQEMU remote message using device_cold_reset (Peter)
- Update linux-user MIPS CPU list (Phil)
- Do not let exec_command read console if no pattern to wait for (Nick)
- Remove shadowed declaration warning (Pierrick)
- Restrict STQF opcode to SPARC V9 (Richard)
- Add missing Kconfig dependency for POWERNV ISA serial port (Bernhard)
- Do not allow vmport device without i8042 PS/2 controller (Kamil)
- Fix QCryptoTLSCredsPSK leak (Peter)
# -----BEGIN PGP SIGNATURE-----
#
# iQIzBAABCAAdFiEE+qvnXhKRciHc/Wuy4+MsLN6twN4FAmbDzAsACgkQ4+MsLN6t
# wN7SvBAAwM0Frtg4ZKDZQu8XgMjLq1xVoSWjC3YJZKTpyGap5gO+7StvHg0sf9iB
# YyGqocCO+qdj9a7pTSasfGDyufpwoIZkOqkwGUWKBos76cOcHWt4e/gkl9O65Lf1
# VVKX4/xdY+a5w2eVAAdWWrYdaPWkKLm0ZZXKoeSIvN4R9A41j7J4kANhE2SweczF
# NnTt2gBnSlpRzghlVWPJKhnq+aYbvLeR7ApdNGUJDpSI1ZTh9gH1GtZFwBN7aeDo
# PvDucoui0EmuyHTVdOYOH3zihTfzKlNZECcT3Y6/6i8y5p7jLHyINHHexsKw6T56
# i5RidJMPTfM0EO6LU1GvUN5FzZy24zXOf298Fe/GMYczQsOznQd4+aFHYPb3d4hZ
# 8Vc1wB1s8XF5WGj+7bchBAUdynUnbwUqfMOb2pMXLIm21pSDnOTVgmYMnp1Kt4AA
# 9WbHiS6tUJf/HjQsep8BBNGUiVSsUPDNNhL8QN43u2C0NgNRPgtRuIV+ytgVXS1G
# 2t1QiRX0lX4ACHmw88agUCU3OhorumuDOpoitQK5jn2VutT7TqbGgibkQMFSgn9E
# Xwrmtlf7nYU9MVgXYJjH2bBh7wbOmQCqbHniEj0targkxccAMJoswG4vtKsP9zkd
# tBs6qMiZ8qSj5eoq8JBRF8bF4tONmboPZjRlboACJ0kTD5wCElA=
# =lPMG
# -----END PGP SIGNATURE-----
# gpg: Signature made Tue 20 Aug 2024 08:49:47 AM AEST
# gpg: using RSA key FAABE75E12917221DCFD6BB2E3E32C2CDEADC0DE
# gpg: Good signature from "Philippe Mathieu-Daudé (F4BUG) <f4bug@amsat.org>" [full]
* tag 'hw-misc-20240820' of https://github.com/philmd/qemu:
crypto/tlscredspsk: Free username on finalize
hw/i386/pc: Ensure vmport prerequisites are fulfilled
hw/i386/pc: Unify vmport=auto handling
hw/ppc/Kconfig: Add missing SERIAL_ISA dependency to POWERNV machine
target/sparc: Restrict STQF to sparcv9
contrib/plugins/execlog: Fix shadowed declaration warning
tests/avocado: Mark ppc_hv_tests.py as non-flaky after fixed console interaction
tests/avocado: exec_command should not consume console output
linux-user/mips: Select Loongson CPU for Loongson binaries
linux-user/mips: Select MIPS64R2-generic for Rel2 binaries
linux-user/mips: Select Octeon68XX CPU for Octeon binaries
linux-user/mips: Do not try to use removed R5900 CPU
hw/remote/message.c: Don't directly invoke DeviceClass:reset
hw/dma/xilinx_axidma: Use semicolon at end of statement, not comma
target/mips: Load PTE as DATA
target/mips: Use correct MMU index in get_pte()
target/mips: Pass page table entry size as MemOp to get_pte()
qemu-options.hx: correct formatting -smbios type=4
hw/mips/loongson3_virt: Fix condition of IPI IOCSR connection
hw/mips/loongson3_virt: Store core_iocsr into LoongsonMachineState
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Prior to sparcv9, the same encoding was STDFQ.
Cc: qemu-stable@nongnu.org
Fixes: 06c060d9e5 ("target/sparc: Move simple fp load/store to decodetree")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20240816072311.353234-2-richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
The PTE is not CODE, so load it as a normal DATA access.
Fixes: 074cfcb4da ("Implement hardware page table walker for MIPS32")
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240814090452.2591-4-philmd@linaro.org>
When refactoring page_table_walk_refill() in commit 4e999bf419
we missed the indirect call to cpu_mmu_index() in get_pte():
page_table_walk_refill()
-> get_pte()
-> cpu_ld[lq]_code()
-> cpu_mmu_index()
Since we no longer mask the modes in hflags, cpu_mmu_index()
can return UM or SM, while we only expect KM or ERL.
Fix this by propagating ptw_mmu_idx to get_pte(), and by using the
cpu_ld/st_code_mmu() API with the correct MemOpIdx.
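A rough sketch of the resulting shape, assuming the surrounding MIPS
helpers (the real prototypes in target/mips differ in detail):

    /* Take the page-table-walk MMU index from the caller instead of
     * recomputing it via cpu_mmu_index(), which would reflect UM/SM
     * rather than the KM/ERL mode the walk actually runs in. */
    static uint64_t get_pte(CPUMIPSState *env, uint64_t vaddr,
                            MemOp mop, int ptw_mmu_idx)
    {
        MemOpIdx oi = make_memop_idx(mop | MO_TE, ptw_mmu_idx);
        return mop == MO_64 ? cpu_ldq_code_mmu(env, vaddr, oi, 0)
                            : cpu_ldl_code_mmu(env, vaddr, oi, 0);
    }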
Reported-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Reported-by: Waldemar Brodkorb <wbx@uclibc-ng.org>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2470
Fixes: 4e999bf419 ("target/mips: Pass ptw_mmu_idx down from mips_cpu_tlb_fill")
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240814090452.2591-3-philmd@linaro.org>
In order to simplify the next commit, pass the PTE size as MemOp.
Rename:
native_shift -> native_op
directory_shift -> directory_mop
leaf_shift -> leaf_mop
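A minimal sketch of what passing the size as a MemOp buys (illustrative
only; the helper name below is the existing memop_size() accessor):

    static unsigned pte_bytes(MemOp mop)
    {
        /* memop_size() replaces the old '1 << shift' arithmetic. */
        return memop_size(mop);   /* e.g. MO_32 -> 4, MO_64 -> 8 */
    }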
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20240814090452.2591-2-philmd@linaro.org>
When we are using TCG plugin memory callbacks, probe_access_internal
returns TLB_MMIO to force the slow path for memory accesses. This
results in probe_access returning NULL, but the x86 access_ptr function
happily accepts a NULL haddr, which leads to a segfault.
Check for a NULL haddr to prevent the segfault and to let plugins
track all the memory operations of the x86 save/restore helpers. As
we also want to run the slow path when instrumenting *-user, we should
drop the short-cutting test_ptr macro as well.
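The guard itself is a one-liner; a hedged sketch of its shape (the real
code lives in the x86 access helpers and differs in detail):

    void *haddr = probe_access(env, addr, size, access_type, mmu_idx, ra);
    if (unlikely(!haddr)) {
        /* Forced slow path, e.g. TLB_MMIO set because plugins are
         * instrumenting memory: never dereference a NULL host pointer,
         * fall back to the per-byte access helpers instead. */
        return NULL;
    }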
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2489
Fixes: 6d03226b42 (plugins: force slow path when plugins instrument memory ops)
Reviewed-by: Alexandre Iooss <erdnaxe@crans.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20240813202329.1237572-8-alex.bennee@linaro.org>
Found on Debian stable.
../target/s390x/tcg/translate.c: In function ‘get_mem_index’:
../target/s390x/tcg/translate.c:398:1: error: control reaches end of non-void function [-Werror=return-type]
398 | }
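One hedged way to make every path return or abort (whether the actual
patch takes exactly this form is a separate question):

    /* Generic illustration of the pattern, not the s390x code itself. */
    static int pick_mem_index(int mode)
    {
        switch (mode) {
        case 0:
            return 1;
        case 1:
            return 2;
        default:
            /* Unreachable by construction; this also keeps
             * -Werror=return-type quiet, since control can no longer
             * fall off the end of a non-void function. */
            g_assert_not_reached();
        }
    }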
Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-ID: <20240814224132.897098-4-pierrick.bouvier@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
The snapshots of the stat utime and stime for each thread, taken
before and after the pause, must be stored in separate locations.
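A minimal sketch of the separation, with invented field names:

    /* Each thread keeps its own pair of snapshots, so the 'before'
     * values are not overwritten by the 'after' sample. */
    typedef struct ThreadCpuTime {
        uint64_t utime_before, stime_before;
        uint64_t utime_after,  stime_after;
    } ThreadCpuTime;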
Signed-off-by: Anthony Harivel <aharivel@redhat.com>
Link: https://lore.kernel.org/r/20240807124320.1741124-2-aharivel@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
* fix --static compilation of hexagon
* fix incorrect application of REX to MMX operands
* fix crash on module load
* update Italian translation
# -----BEGIN PGP SIGNATURE-----
#
# iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAma7kZ4UHHBib256aW5p
# QHJlZGhhdC5jb20ACgkQv/vSX3jHroOy7QgAriuxfgw3Yvu9UPPfEZT5V9p5XfDf
# LceO3C6OABIkFoGSO8WK5dWfQy3oYbrwEXX/l/PW1lUc2DFrSUo9YtIfjelRkxoC
# 0EAAbV5A+xCLYmujFqBSe/6usRj82uKjSET1KK1aCam7ONZLNZf2yb4OwdShvLSN
# MPgtBOrwznR1qh3KJtLB6YSRC0Rie1hOxbXFpx1AklXYnIiqUdMjXOHSjs+Amva0
# VczuqwjtVdNDTPqbZlCXatPtZ8nwYeEOD2jOqgjAoEwwabZ1fFGDCNXlqEDLSdTm
# Cc+IZPYU5a8+tVfH0DYEMgMSkRhDUqVZ/076L+pRi+Q8ClxWV8fKsf5qKw==
# =jJtu
# -----END PGP SIGNATURE-----
# gpg: Signature made Wed 14 Aug 2024 03:02:22 AM AEST
# gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg: issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
* tag 'for-upstream' of https://gitlab.com/bonzini/qemu:
po: update Italian translation
module: Prevent crash by resetting local_err in module_load_qom_all()
target/i386: Assert MMX and XMM registers in range
target/i386: Use unit not type in decode_modrm
target/i386: Do not apply REX to MMX operands
target/hexagon: don't look for static glib
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Our current usage of MMU indexes when EL3 is AArch32 is confused.
Architecturally, when EL3 is AArch32, all Secure code runs under the
Secure PL1&0 translation regime:
* code at EL3, which might be Mon, or SVC, or any of the
other privileged modes (PL1)
* code at EL0 (Secure PL0)
This is different from when EL3 is AArch64, in which case EL3 is its
own translation regime, and EL1 and EL0 (whether AArch32 or AArch64)
have their own regime.
We claimed to be mapping Secure PL1 to our ARMMMUIdx_EL3, but didn't
do anything special about Secure PL0, which meant it used the same
ARMMMUIdx_E10_0 that NonSecure PL0 does. This resulted in a bug
where arm_sctlr() incorrectly picked the NonSecure SCTLR as the
controlling register when in Secure PL0, which meant we were
spuriously generating alignment faults because we were looking at the
wrong SCTLR control bits.
The use of ARMMMUIdx_EL3 for Secure PL1 also resulted in the bug that
we wouldn't honour the PAN bit for Secure PL1, because there's no
equivalent _PAN mmu index for it.
We could fix this in one of two ways:
* The most straightforward is to add new MMU indexes EL30_0,
EL30_3, EL30_3_PAN to correspond to "Secure PL1&0 at PL0",
"Secure PL1&0 at PL1", and "Secure PL1&0 at PL1 with PAN".
This matches how we use indexes for the AArch64 regimes, and
preserves properties like being able to determine the privilege
level from an MMU index without any other information. However,
it would add two MMU indexes (we can share one with ARMMMUIdx_EL3),
and we are already using 14 of the 16 that the core TLB code permits.
* The more complicated approach is the one we take here. We use
the same MMU indexes (E10_0, E10_1, E10_1_PAN) for Secure PL1&0
as we do for NonSecure PL1&0. This saves on MMU indexes, but
means we need to check in some places whether we're in the
Secure PL1&0 regime or not before we interpret an MMU index.
The changes in this commit were created by auditing all the places
where we use specific ARMMMUIdx_ values, and checking whether they
needed to be changed to handle the new index value usage.
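As a sketch of the kind of check this implies (the predicate name here
is illustrative, not necessarily what the patch adds):

    /* With EL3 AArch32, Secure and NonSecure PL1&0 share the E10_*
     * indexes, so the index alone no longer identifies the regime. */
    static bool mmu_idx_is_secure_pl10(CPUARMState *env, ARMMMUIdx idx)
    {
        return !arm_el_is_aa64(env, 3) &&
               arm_is_secure_below_el3(env) &&
               (idx == ARMMMUIdx_E10_0 ||
                idx == ARMMMUIdx_E10_1 ||
                idx == ARMMMUIdx_E10_1_PAN);
    }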
Note for potential stable backports: also taking the previous
(comment-change-only) commit might make the backport easier.
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2326
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Bernhard Beschow <shentey@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240809160430.1144805-3-peter.maydell@linaro.org
We have a long comment describing the Arm architectural translation
regimes and how we map them to QEMU MMU indexes. This comment has
got a bit out of date:
* FEAT_SEL2 allows Secure EL2 and corresponding new regimes
* FEAT_RME introduces Realm state and its translation regimes
* We now model the Cortex-R52, so that is no longer hypothetical
* We separated Secure Stage 2 and NonSecure Stage 2 MMU indexes
* We have an MMU index per physical address space
Add the missing pieces so that the list of architectural translation
regimes matches the Arm ARM, and the list and count of QEMU MMU
indexes in the comment matches the enum.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Bernhard Beschow <shentey@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240809160430.1144805-2-peter.maydell@linaro.org