The same statement is executed unconditionally right before the if statement.
Cc: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Bernhard Beschow <shentey@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241103143330.123596-4-shentey@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Bernhard Beschow <shentey@gmail.com>
Message-id: 20241103143330.123596-3-shentey@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Bernhard Beschow <shentey@gmail.com>
Message-id: 20241103143330.123596-2-shentey@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Like 9971cbac2f, which set CAPSTONE_AARCH64_COMPAT_HEADER,
also set CAPSTONE_SYSTEMZ_COMPAT_HEADER. Fixes the build
against capstone v6-alpha.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Gustavo Romero <gustavo.romero@linaro.org>
Message-id: 20241022013047.830273-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Our implementation of the indexed version of SVE SDOT/UDOT/USDOT got
the calculation of the inner loop terminator wrong. Although we
correctly account for the element size when we calculate the
terminator for the first iteration:
intptr_t segend = MIN(16 / sizeof(TYPED), opr_sz_n);
we don't do that when we move it forward after the first inner loop
completes. The intention is that we process the vector in 128-bit
segments, which for a 64-bit element size should mean (1, 2), (3, 4),
(5, 6), etc. This bug meant that we would iterate (1, 2), (3, 4, 5,
6), (7, 8, 9, 10) etc and apply the wrong indexed element to some of
the operations, and also index off the end of the vector.
You don't see this bug if the vector length is small enough that we
don't need to iterate the outer loop, i.e. if it is only 128 bits,
or if it is the 64-bit special case from AA32/AA64 AdvSIMD. If the
vector length is 256 bits then we calculate the right results for the
elements in the vector but do index off the end of the vector. Vector
lengths greater than 256 bits see wrong answers. The instructions
that produce 32-bit results behave correctly.
Fix the recalculation of 'segend' for subsequent iterations, and
restore a version of the comment that was lost in the refactor of
commit 7020ffd656 that explains why we only need to clamp segend to
opr_sz_n for the first iteration, not the later ones.
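As an illustration, a schematic of the fixed loop shape for the 64-bit
accumulator case (variable names simplified; this is a sketch, not the exact
helper macro):
  intptr_t i = 0;
  intptr_t segend = MIN(16 / sizeof(int64_t), opr_sz_n);  /* clamp only needed here */
  do {
      /* inner loop: process accumulators [i, segend) against one 128-bit segment */
      for (; i < segend; i++) {
          /* d[i] += ... dot product with the indexed element for this segment ... */
      }
      /*
       * Advance by one whole 128-bit segment, accounting for the element
       * size. Any vector long enough to get here is a multiple of 16 bytes,
       * so no further clamping against opr_sz_n is required.
       */
      segend = i + 16 / sizeof(int64_t);
  } while (i < opr_sz_n);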
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2595
Fixes: 7020ffd656 ("target/arm: Macroize helper_gvec_{s,u}dot_idx_{b,h}")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241101185544.2130972-1-peter.maydell@linaro.org
Our current usage of MMU indexes when EL3 is AArch32 is confused.
Architecturally, when EL3 is AArch32, all Secure code runs under the
Secure PL1&0 translation regime:
* code at EL3, which might be Mon, or SVC, or any of the
other privileged modes (PL1)
* code at EL0 (Secure PL0)
This is different from when EL3 is AArch64, in which case EL3 is its
own translation regime, and EL1 and EL0 (whether AArch32 or AArch64)
have their own regime.
We claimed to be mapping Secure PL1 to our ARMMMUIdx_E3, but didn't
do anything special about Secure PL0, which meant it used the same
ARMMMUIdx_E10_0 that NonSecure PL0 does. This resulted in a bug
where arm_sctlr() incorrectly picked the NonSecure SCTLR as the
controlling register when in Secure PL0, which meant we were
spuriously generating alignment faults because we were looking at the
wrong SCTLR control bits.
The use of ARMMMUIdx_E3 for Secure PL1 also resulted in the bug that
we wouldn't honour the PAN bit for Secure PL1, because there's no
equivalent _PAN mmu index for it.
Fix this by adding two new MMU indexes:
* ARMMMUIdx_E30_0 is for Secure PL0
* ARMMMUIdx_E30_3_PAN is for Secure PL1 when PAN is enabled
The existing ARMMMUIdx_E3 is used to mean "Secure PL1 without PAN"
(and would be named ARMMMUIdx_E30_3 in an AArch32-centric scheme).
These extra two indexes bring us up to the maximum of 16 that the
core code can currently support.
This commit:
* adds the new MMU index handling to the various places
where we deal in MMU index values
* adds assertions that we aren't AArch32 EL3 in a couple of
places that currently use the E10 indexes, to document why
they don't also need to handle the E30 indexes
* documents in a comment why regime_has_2_ranges() doesn't need
updating
Notes for backporting: this commit depends on the preceding revert of
4c2c04746932; that revert and this commit should probably be
backported to everywhere that we originally backported 4c2c047469.
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2326
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2588
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241101142845.1712482-3-peter.maydell@linaro.org
This reverts commit 4c2c047469.
This commit tried to fix a problem with our usage of MMU indexes when
EL3 is AArch32, using what it described as a "more complicated
approach" where we share the same MMU index values for Secure PL1&0
and NonSecure PL1&0. In theory this should work, but the change
didn't account for (at least) two things:
(1) The design change means we need to flush the TLBs at any point
where the CPU state flips from one to the other. We already flush
the TLB when SCR.NS is changed, but we don't flush the TLB when we
take an exception from NS PL1&0 into Mon or when we return from Mon
to NS PL1&0, and the commit didn't add any code to do that.
(2) The ATS12NS* address translate instructions allow Mon code (which
is Secure) to do a stage 1+2 page table walk for NS. I thought this
was OK because do_ats_write() does a page table walk which doesn't
use the TLBs, so because it can pass both the MMU index and also an
ARMSecuritySpace argument we can tell the table walk that we want NS
stage1+2, not S. But that means that all the code within the ptw
that needs to find e.g. the regime EL cannot do so only with an
mmu_idx -- all these functions like regime_sctlr(), regime_el(), etc
would need to pass both an mmu_idx and the security_space, so they
can tell whether this is a translation regime controlled by EL1 or
EL3 (and so whether to look at SCTLR.S or SCTLR.NS, etc).
In particular, because regime_el() wasn't updated to look at the
ARMSecuritySpace it would return 1 even when the CPU was in Monitor
mode (and the controlling EL is 3). This meant that page table walks
in Monitor mode would look at the wrong SCTLR, TCR, etc and would
generally fault when they should not.
Rather than trying to make the complicated changes needed to rescue
the design of 4c2c047469, we revert it in order to instead take the
route that that commit describes as "the most straightforward" fix,
where we add new MMU indexes EL30_0, EL30_3, EL30_3_PAN to correspond
to "Secure PL1&0 at PL0", "Secure PL1&0 at PL1", and "Secure PL1&0 at
PL1 with PAN".
This revert will re-expose the "spurious alignment faults in
Secure PL0" issue #2326; we'll fix it again in the next commit.
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Thomas Huth <thuth@redhat.com>
Message-id: 20241101142845.1712482-2-peter.maydell@linaro.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Now that all targets have been converted to explicitly set a NaN
propagation rule, we can remove the set of target ifdefs (which now
list every target) and clean up the references to fallback behaviour
for float_2nan_prop_none.
The "default" case in the switch will catch any remaining places
where status->float_2nan_prop_rule was not set by the target.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241025141254.2141506-22-peter.maydell@linaro.org
Set the NaN propagation rule explicitly for the float_status word
used in the rx target.
This is not the architecturally correct behaviour, but since this is a
no-behaviour-change patch, we leave a TODO note to that effect.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241025141254.2141506-21-peter.maydell@linaro.org
Set the NaN propagation rule explicitly for the float_status word
used in the openrisc target.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241025141254.2141506-20-peter.maydell@linaro.org
Set the NaN propagation rule explicitly for the float_status word
used in the microblaze target.
This is probably not the architecturally correct behaviour,
but since this is a no-behaviour-change patch, we leave a
TODO note to that effect.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241025141254.2141506-19-peter.maydell@linaro.org
Although the floating point rounding mode for Microblaze is always
nearest-even, we cannot set it just once in the CPU initfn. This is
because env->fp_status is in the part of the CPU state struct that is
zeroed on reset.
Move the call to set_float_rounding_mode() into the reset fn.
(This had no guest-visible effects because it happens that the
float_round_nearest_even enum value is 0, so when the struct was
zeroed it didn't corrupt the setting.)
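A minimal sketch of the placement being described (the reset-hold hook name
and the memset pattern are assumptions about the surrounding code):
  static void mb_cpu_reset_hold(Object *obj, ResetType type)
  {
      MicroBlazeCPU *cpu = MICROBLAZE_CPU(obj);
      CPUMBState *env = &cpu->env;

      memset(env, 0, offsetof(CPUMBState, end_reset_fields));

      /* fp_status was just zeroed above, so re-establish the rounding mode */
      set_float_rounding_mode(float_round_nearest_even, &env->fp_status);
  }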
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241025141254.2141506-18-peter.maydell@linaro.org
Set the NaN propagation rule explicitly for the float_status word
used in this target.
This is a no-behaviour-change commit, so we retain the existing
behaviour of x87-style pick-largest-significand NaN propagation.
This is however not the architecturally correct handling, so we leave
a TODO note to that effect.
We also leave a TODO note pointing out that all this code in the cpu
initfn (including the existing setting up of env->flags and the FPCR)
should be in a currently non-existent CPU reset function.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241025141254.2141506-17-peter.maydell@linaro.org
Set the NaN propagation rule explicitly for the float_status words
used in the x86 target.
This is a no-behaviour-change commit, so we retain the existing
behaviour of using the x87-style "prefer QNaN over SNaN, then prefer
the NaN with the larger significand" for MMX and SSE. This is
however not the documented hardware behaviour, so we leave a TODO
note about what we should be doing instead.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241025141254.2141506-16-peter.maydell@linaro.org
Set the NaN propagation rule explicitly in xtensa_use_first_nan().
(When we convert the softfloat pickNaNMulAdd routine to also
select a NaN propagation rule at runtime, we will be able to
remove the use_first_nan flag because the propagation rules
will handle everything.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241025141254.2141506-15-peter.maydell@linaro.org
In xtensa we currently call set_use_first_nan() in a lot of
places where we want to switch the NaN-propagation handling.
We're about to change the softfloat API we use to do that,
so start by factoring all the calls out into a single
xtensa_use_first_nan() function.
The bulk of this change was done with
sed -i -e 's/set_use_first_nan(\([^,]*\),[^)]*)/xtensa_use_first_nan(env, \1)/' target/xtensa/fpu_helper.c
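For reference, a plausible shape for the factored-out helper (a sketch, not
necessarily the exact patch):
  void xtensa_use_first_nan(CPUXtensaState *env, bool use_first)
  {
      set_use_first_nan(use_first, &env->fp_status);
  }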
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241025141254.2141506-14-peter.maydell@linaro.org
Set the NaN propagation rule explicitly in the float_status
words we use.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241025141254.2141506-13-peter.maydell@linaro.org
Currently we call cpu_put_fsr(0) in sparc_cpu_realizefn(), which
initializes various fields in the CPU struct:
* fsr_cexc_ftt
* fcc[]
* fsr_qne
* fsr
It also sets the rounding mode in env->fp_status.
This is largely pointless, because when we later reset the CPU
this will zero out all the fields up until the "end_reset_fields"
label, which includes all of these (but not fp_status!)
Move the cpu_put_fsr(env, 0) call to reset, because that expresses
the logical requirement: we want to reset FSR to 0 on every reset.
This isn't a behaviour change because the fields are all zero anyway.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241025141254.2141506-12-peter.maydell@linaro.org
In cf_fpu_gdb_get_reg() and cf_fpu_gdb_set_reg() we use a temporary
float_status variable to pass to floatx80_to_float64() and
float64_to_floatx80(), but we don't initialize it, meaning that those
functions could access uninitialized data. Zero-init the structs.
(We don't need to set a NaN-propagation rule here because we
don't use these with a 2-argument fpu operation.)
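A sketch of the kind of change described (the surrounding register access is
illustrative):
  float_status s = { };   /* zero-init: was an uninitialized stack variable */
  float64 d = floatx80_to_float64(env->fregs[n].d, &s);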
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241025141254.2141506-11-peter.maydell@linaro.org
Explicitly set the 2-NaN propagation rule on env->fp_status
and on the temporary fp_status that we use in frem (since
we pass that to a division operation function).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Set the 2-NaN propagation rule explicitly in env->fp_status
and env->vec_status.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241025141254.2141506-9-peter.maydell@linaro.org
Set the 2-NaN propagation rule explicitly in env->fpu_status.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241025141254.2141506-8-peter.maydell@linaro.org
Set the 2-NaN propagation rule explicitly in env->fp_status.
Really we only need to do this at CPU reset (after reset has zeroed
out most of the CPU state struct, which typically includes fp_status
fields). However target/hppa does not currently implement CPU reset
at all, so leave a TODO comment to note that this could be moved if
we ever do implement reset.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241025141254.2141506-7-peter.maydell@linaro.org
Set the 2-NaN propagation rule explicitly in the float_status word we
use.
(There are a couple of places in fpu_helper.c where we create a
dummy float_status word with "float_status *s = { };", but these
are only used for calling float*_is_quiet_nan() so it doesn't
matter that we don't set a 2-NaN propagation rule there.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241025141254.2141506-6-peter.maydell@linaro.org
Set the 2-NaN propagation rule explicitly in the float_status words
we use.
For active_fpu.fp_status, we do this in a new fp_reset() function
which mirrors the existing msa_reset() function in doing "first call
restore to set the fp status parts that depend on CPU state, then set
the fp status parts that are constant".
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20241025141254.2141506-5-peter.maydell@linaro.org
Set the 2-NaN propagation rule explicitly in the float_status words
we use. We wrap this plus the pre-existing setting of the
tininess-before-rounding flag in a new function
arm_set_default_fp_behaviours() to avoid repetition, since we have a
lot of float_status words at this point.
The situation with FPA11 emulation in linux-user is a little odd, and
arguably "correct" behaviour there would be to exactly match a real
Linux kernel's FPA11 emulation. However FPA11 emulation is
essentially dead at this point and so it seems better to continue
with QEMU's current behaviour and leave a comment describing the
situation.
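A sketch of the wrapper described above (the body is illustrative; the rule
used for Arm elsewhere in this series is float_2nan_prop_s_ab):
  static void arm_set_default_fp_behaviours(float_status *s)
  {
      set_float_detect_tininess(float_tininess_before_rounding, s);
      set_float_2nan_prop_rule(float_2nan_prop_s_ab, s);
  }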
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241025141254.2141506-4-peter.maydell@linaro.org
Explicitly set a 2-NaN propagation rule in the softfloat tests. In
meson.build we put -DTARGET_ARM in fpcflags, and so we should select
here the Arm propagation rule of float_2nan_prop_s_ab.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241025141254.2141506-3-peter.maydell@linaro.org
IEEE 754 does not define a fixed rule for which NaN to pick as the
result if both operands of a 2-operand operation are NaNs. As a
result different architectures have ended up with different rules for
propagating NaNs.
QEMU currently hardcodes the NaN propagation logic into the binary
because pickNaN() has an ifdef ladder for different targets. We want
to make the propagation rule instead be selectable at runtime,
because:
* this will let us have multiple targets in one QEMU binary
* the Arm FEAT_AFP architectural feature includes letting
the guest select a NaN propagation rule at runtime
* x86 specifies different propagation rules for x87 FPU ops
and for SSE ops, and specifying the rule in the float_status
would let us emulate this, instead of wrongly using the
x87 rules everywhere
In this commit we add an enum for the propagation rule, the field in
float_status, and the corresponding getters and setters. We change
pickNaN to honour this, but because all targets still leave this
field at its default 0 value, the fallback logic will pick the rule
type with the old ifdef ladder.
It's valid not to set a propagation rule if default_nan_mode is
enabled, because in that case there's no need to pick a NaN; all the
callers of pickNaN() catch this case and skip calling it. So we can
already assert that we don't get into the "no rule defined" codepath
for our four targets which always set default_nan_mode: Hexagon,
RiscV, SH4 and Tricore, and for the one target which does not have FP
at all: avr. These targets will not need to be updated to call
set_float_2nan_prop_rule().
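A sketch of the shape of the new API (the typedef name and any enum members
not mentioned in this series are assumptions):
  typedef enum {
      float_2nan_prop_none = 0,  /* unset: fall back to the old ifdef ladder */
      float_2nan_prop_s_ab,      /* e.g. the Arm rule selected later in the series */
      /* ... one member per architecture propagation rule ... */
  } Float2NaNPropRule;

  static inline void set_float_2nan_prop_rule(Float2NaNPropRule rule,
                                              float_status *status)
  {
      status->float_2nan_prop_rule = rule;
  }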
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241025141254.2141506-2-peter.maydell@linaro.org
Merge tag 'pull-aspeed-20241104' of https://github.com/legoater/qemu into staging
aspeed queue:
* Fixed eMMC size calculation
* Fixed IRQ definitions on AST2700
* Added RTC support to AST2700
* Fixed timer IRQ status on AST2600
* Improved SDHCI model with new registers
* Added -nodefaults support to AST1030
* Provided a way to use an eMMC device without boot partitions
# gpg: Signature made Mon 04 Nov 2024 10:35:54 GMT
# gpg: using RSA key A0F66548F04895EBFE6B0B6051A343C7CFFBECA1
# gpg: Good signature from "Cédric Le Goater <clg@redhat.com>" [full]
# gpg: aka "Cédric Le Goater <clg@kaod.org>" [full]
# Primary key fingerprint: A0F6 6548 F048 95EB FE6B 0B60 51A3 43C7 CFFB ECA1
* tag 'pull-aspeed-20241104' of https://github.com/legoater/qemu:
aspeed: Don't set always boot properties of the emmc device
aspeed: Support create flash devices via command line for AST1030
hw/sd/aspeed_sdhci: Introduce Capabilities Register 2 for SD slot 0 and 1
hw/timer/aspeed: Fix interrupt status does not be cleared for AST2600
hw/timer/aspeed: Fix coding style
aspeed/soc: Support RTC for AST2700
hw/arm/aspeed_ast27x0: Avoid hardcoded '256' in IRQ calculation
hw/arm/aspeed_ast27x0: Use bsa.h for PPI definitions
hw/sd/sdcard: Fix calculation of size when using eMMC boot partitions
hw/arm: enable at24c with aspeed
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge tag 'pull-ppc-for-9.2-1-20241104' of https://gitlab.com/npiggin/qemu into staging
* Various bug fixes
* Big cleanup of deprecated machines
* Power11 support for spapr
* XIVE improvements
* Goodbye to Cedric and David as ppc reviewers, thank you both o7
# gpg: Signature made Mon 04 Nov 2024 00:15:35 GMT
# gpg: using RSA key 4E437DDA56616F4329B0A79567B30276A8621CAE
# gpg: Good signature from "Nicholas Piggin <npiggin@gmail.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 4E43 7DDA 5661 6F43 29B0 A795 67B3 0276 A862 1CAE
* tag 'pull-ppc-for-9.2-1-20241104' of https://gitlab.com/npiggin/qemu: (67 commits)
MAINTAINERS: Remove myself as reviewer
MAINTAINERS: Remove myself from XIVE
MAINTAINERS: Remove myself from the PowerNV machines
hw/ppc: Consolidate ppc440 initial mapping creation functions
hw/ppc: Consolidate e500 initial mapping creation functions
tests/qtest: Add XIVE tests for the powernv10 machine
pnv/xive2: TIMA CI ops using alternative offsets or byte lengths
pnv/xive2: TIMA support for 8-byte OS context push for PHYP
pnv/xive: Update PIPR when updating CPPR
pnv/xive: Add special handling for pool targets
ppc/xive2: Support "Pull Thread Context to Odd Thread Reporting Line"
ppc/xive2: Change context/ring specific functions to be generic
ppc/xive2: Support "Pull Thread Context to Register" operation
ppc/xive2: Allow 1-byte write of Target field in TIMA
ppc/xive2: Dump the VP-group and crowd tables with 'info pic'
ppc/xive2: Dump more NVP state with 'info pic'
pnv/xive2: Support for "OS LGS Push" TIMA operation
ppc/xive2: Support TIMA "Pull OS Context to Odd Thread Reporting Line"
pnv/xive2: Define OGEN field in the TIMA
pnv/xive: TIMA patch sets pre-req alignment and formatting changes
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Now replaced by meson build.
Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Message-Id: <20241023212812.1376972-4-pierrick.bouvier@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Tried to unify this meson.build with tests/tcg/plugins/meson.build but
the resulting modules are not output in the right directory.
Originally proposed by Anton Kochkov, thank you!
Solves: https://gitlab.com/qemu-project/qemu/-/issues/1710
Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Message-Id: <20241023212812.1376972-3-pierrick.bouvier@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
contrib/plugins/cflow.c: In function ‘plugin_exit’:
contrib/plugins/cflow.c:167:19: error: declaration of ‘n’ shadows a previous local [-Werror=shadow=local]
167 | NodeData *n = l->data;
| ^
contrib/plugins/cflow.c:139:9: note: shadowed declaration is here
139 | int n = 0;
| ^
Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Message-Id: <20241023212812.1376972-2-pierrick.bouvier@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Currently we only set it on fork.
Note: Upstream (blitz) commit also did new threads, but that code isn't
in qemu project repo yet.
Signed-off-by: Jessica Clarke <jrtc27@jrtc27.com>
Pull-Request: https://github.com/qemu-bsd-user/qemu-bsd-user/pull/52
Reviewed-by: Warner Losh <imp@bsdimp.com>
Signed-off-by: Warner Losh <imp@bsdimp.com>
While qemu-system can set tb-size using -accel tcg,tb-size=n, there
is no similar knob for qemu-bsd-user. Add one in a way similar to how
one-insn-per-tb is already handled.
Suggested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Warner Losh <imp@bsdimp.com>
Signed-off-by: Warner Losh <imp@bsdimp.com>
bsd-user qemu-x86_64 almost immediately dies with:
qemu: 0x4002201a68: unhandled CPU exception 0xd - aborting
on FreeBSD 14.1-RELEASE. This is an instruction that requires
alignment:
(gdb) x/i 0x4002201a68
0x4002201a68: movaps %xmm0,-0x40(%rbp)
and the argument is not aligned:
(gdb) p/x env->regs[5]
$1 = 0x822443b58
A quick experiment shows that the userspace entry point expects
misaligned rsp:
(gdb) starti
(gdb) p/x $rsp
$1 = 0x7fffffffeaa8
Emulate this behavior in bsd-user.
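For reference, the arithmetic behind that observation (a standalone check,
not part of the patch):
  /* The native entry rsp sits 8 bytes below 16-byte alignment. */
  _Static_assert((0x7fffffffeaa8ULL & 0xf) == 0x8,
                 "entry rsp is misaligned by 8");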
[[ applied Richard's suggestion ]]
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Warner Losh <imp@bsdimp.com>
Signed-off-by: Warner Losh <imp@bsdimp.com>
An IEC descriptor is a 128-bit invalidation descriptor; it must be padded
with 128 bits of zeros in the upper bytes to form a 256-bit descriptor when
the invalidation queue is configured for 256-bit descriptors (IQA_REG.DW=1).
Fixes: 02a2cbc872 ("x86-iommu: introduce IEC notifiers")
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Message-Id: <20241104125536.1236118-4-zhenzhong.duan@intel.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
According to the VTD spec, a 256-bit descriptor will result in an invalid
descriptor error if submitted in an IQ that is set up to provide hardware
with 128-bit descriptors (IQA_REG.DW=0). Meanwhile, there are older inv desc
types (e.g. iotlb_inv_desc) that can be either 128 bits or 256 bits.
Submitting a 128-bit version of such a descriptor into an IQ that is set up
to provide hardware with 256-bit descriptors will also result in an invalid
descriptor error.
The second case is caught by the tail register update check, so we only
need to handle the first.
Because the reserved bit checks are common across the different types of
invalidation descriptors, introduce a common function
vtd_inv_desc_reserved_check() that performs all the checks and takes the
differences as parameters.
With this change we also need to replace the error_report_once() calls with
error_report() so that errors from different call sites are all reported.
This isn't a problem: error_report_once() here is mainly meant to help debug
guest errors, but it only reports once in the QEMU lifetime, which doesn't
help much; error_report() is the better choice.
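A hedged sketch of what such a common helper could look like; the parameter
list and names here are illustrative assumptions, not the actual patch:
  static bool vtd_inv_desc_reserved_check(IntelIOMMUState *s,
                                          VTDInvDesc *inv_desc,
                                          uint64_t lo_rsvd_mask,
                                          uint64_t hi_rsvd_mask,
                                          const char *desc_name)
  {
      if ((inv_desc->lo & lo_rsvd_mask) || (inv_desc->hi & hi_rsvd_mask)) {
          error_report("%s: reserved bits set in invalidation descriptor"
                       " hi=0x%" PRIx64 " lo=0x%" PRIx64,
                       desc_name, inv_desc->hi, inv_desc->lo);
          return false;
      }
      return true;
  }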
Fixes: c0c1d35184 ("intel_iommu: add 256 bits qi_desc support")
Suggested-by: Yi Liu <yi.l.liu@intel.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Message-Id: <20241104125536.1236118-3-zhenzhong.duan@intel.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
According to VTD spec, Figure 11-22, Invalidation Queue Tail Register,
"When Descriptor Width (DW) field in Invalidation Queue Address Register
(IQA_REG) is Set (256-bit descriptors), hardware treats bit-4 as reserved
and a value of 1 in the bit will result in invalidation queue error."
The current code fails to send the IQE event to the guest; fix that.
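A hedged sketch of the kind of check described; the helper that raises the
IQE fault is an assumed name:
  if (s->iq_dw && (val & BIT(4))) {
      /* bit 4 is reserved when DW=1: raise an Invalidation Queue Error */
      vtd_handle_inv_queue_error(s);  /* assumed helper that sets FSTS.IQE */
      return;
  }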
Fixes: c0c1d35184 ("intel_iommu: add 256 bits qi_desc support")
Suggested-by: Yi Liu <yi.l.liu@intel.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Message-Id: <20241104125536.1236118-2-zhenzhong.duan@intel.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
The ACPI CPU hotplug states must be migrated along with other vCPU
hotplug states to the destination VM. Update the GED's VM State
Description (VMSD) table subsection to conditionally include the CPU
Hotplug VM State Description (VMSD).
Excerpt of GED VMSD State Dump at Source:
"acpi-ged (16)": {
"ged_state": {
"sel": "0x00000000"
},
[...]
"acpi-ged/cpuhp": {
"cpuhp_state": {
"selector": "0x00000005",
"command": "0x00",
"devs": [
{
"is_inserting": false,
"is_removing": false,
"ost_event": "0x00000000",
"ost_status": "0x00000000"
},
[...]
{
"is_inserting": false,
"is_removing": false,
"ost_event": "0x00000000",
"ost_status": "0x00000000"
}
]
}
}
},
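A hedged sketch of the conditional subsection wiring being described; the
.needed condition and the cpu-hotplug VMSD/type names are assumptions:
  static bool vmstate_test_use_cpuhp(void *opaque)
  {
      AcpiGedState *s = opaque;
      return s->ged_event_bitmap & ACPI_GED_CPU_HOTPLUG_EVT;  /* assumed */
  }

  static const VMStateDescription vmstate_ged_cpuhp = {
      .name = "acpi-ged/cpuhp",
      .version_id = 1,
      .minimum_version_id = 1,
      .needed = vmstate_test_use_cpuhp,
      .fields = (const VMStateField[]) {
          VMSTATE_STRUCT(cpuhp_state, AcpiGedState, 1,
                         vmstate_cpu_hotplug, CPUHotplugState),
          VMSTATE_END_OF_LIST()
      }
  };
This would then be listed in the GED VMSD's .subsections[] array.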
Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Message-Id: <20241103102419.202225-6-salil.mehta@huawei.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Update DSDT golden master files for x86/pc and x86/q35 platforms to
accommodate changes made in the architecture-agnostic CPU AML. These
updates notify the guest OS of vCPU hot-plug and hot-unplug status
using the ACPI `_STA.Enabled` bit.
The following is a diff of the changes in the .dsl file generated with
IASL:
@@ -1480,6 +1480,7 @@
CRMV, 1,
CEJ0, 1,
CEJF, 1,
+ CPRS, 1,
Offset (0x05),
CCMD, 8
}
@@ -1514,9 +1515,16 @@
Acquire (\_SB.PCI0.PRES.CPLK, 0xFFFF)
\_SB.PCI0.PRES.CSEL = Arg0
Local0 = Zero
- If ((\_SB.PCI0.PRES.CPEN == One))
- {
- Local0 = 0x0F
+ If ((\_SB.PCI0.PRES.CPRS == One))
+ {
+ If ((\_SB.PCI0.PRES.CPEN == One))
+ {
+ Local0 = 0x0F
+ }
+ Else
+ {
+ Local0 = 0x0D
+ }
}
Release (\_SB.PCI0.PRES.CPLK)
Reported-by: Zhao Liu <zhao1.liu@intel.com>
Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reflect the QOM vCPUs' ACPI CPU hotplug states in the `_STA.Present` and
`_STA.Enabled` bits when the guest kernel evaluates the ACPI `_STA` method
during initialization, as well as when vCPUs are hot-plugged or
hot-unplugged. If the CPU is present, then its `enabled` status can be
fetched using architecture-specific code [1].
Reference:
[1] Example implementation of an architecture-specific hook to fetch the
CPU `enabled` status
Link: c0b416b11e
Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Message-Id: <20241103102419.202225-4-salil.mehta@huawei.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
list changed files in tests/qtest/bios-tables-test-allowed-diff.h
Reported-by: Zhao Liu <zhao1.liu@intel.com>
Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Message-Id: <20241103102419.202225-3-salil.mehta@huawei.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
On most architectures, during vCPU hot-plug and hot-unplug actions, the
firmware or VMM/QEMU can update the OS on vCPU status by toggling the
ACPI method `_STA.Present` bit. However, certain CPU architectures
prohibit [1] modifications to a CPU’s `presence` status after the kernel
has booted.
This limitation [2][3] exists because many per-CPU components, such as
interrupt controllers and various per-CPU features tightly integrated
with CPUs, may not support reconfiguration once the kernel is
initialized. Often, these components cannot be powered down, as they may
belong to an `always-on` power domain. As a result, some architectures
require all CPUs to remain `_STA.Present` after system initialization.
Therefore, it is essential to mirror the exact QOM vCPU status through
ACPI for the Guest kernel. For this, we should determine—via
architecture-specific code[4]—whether vCPUs must always remain present
and whether the associated `AcpiCpuStatus::cpu` object should remain
valid, even following a vCPU hot-unplug operation.
References:
[1] Check comment 5 in the bugzilla entry
Link: https://bugzilla.tianocore.org/show_bug.cgi?id=4481#c5
[2] KVMForum 2023 Presentation: Challenges Revisited in Supporting Virt CPU Hotplug on
architectures that don’t Support CPU Hotplug (like ARM64)
a. Kernel Link: https://kvm-forum.qemu.org/2023/KVM-forum-cpu-hotplug_7OJ1YyJ.pdf
b. Qemu Link: https://kvm-forum.qemu.org/2023/Challenges_Revisited_in_Supporting_Virt_CPU_Hotplug_-__ii0iNb3.pdf
[3] KVMForum 2020 Presentation: Challenges in Supporting Virtual CPU Hotplug on
SoC Based Systems (like ARM64)
Link: https://kvmforum2020.sched.com/event/eE4m
[4] Example implementation of architecture-specific CPU persistence hook
Link: c0b416b11e
Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Message-Id: <20241103102419.202225-2-salil.mehta@huawei.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
The bus parameter in the macro PCI_BUILD_BDF is not surrounded by
parentheses. This can cause a compile error when warnings are treated as
errors, or can potentially cause runtime errors due to operator precedence.
For instance:
file.c:171:32: error: suggest parentheses around '-' inside '<<'
[-Werror=parentheses]
171 | uint16_t bdf = PCI_BUILD_BDF(a - b, sdev->devfn);
| ~~^~~
include/hw/pci/pci.h:19:41: note: in definition of macro
'PCI_BUILD_BDF'
19 | #define PCI_BUILD_BDF(bus, devfn) ((bus << 8) | (devfn))
| ^~~
cc1: all warnings being treated as errors
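The fix implied here is simply to parenthesise the macro argument:
  #define PCI_BUILD_BDF(bus, devfn)  (((bus) << 8) | (devfn))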
Signed-off-by: Roque Arcudia Hernandez <roqueh@google.com>
Reviewed-by: Nabih Estefan <nabihestefan@google.com>
Message-Id: <20241101215923.3399311-1-roqueh@google.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
If len_in is smaller than the header length, then accessing the number of
ports will result in an out-of-bounds access.
Add a check to avoid this.
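A hedged sketch of the kind of guard described (the payload struct name is
illustrative):
  if (len_in < sizeof(*in)) {  /* payload shorter than the fixed header */
      return CXL_MBOX_INVALID_PAYLOAD_LENGTH;
  }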
Reported-by: Esifiel <esifiel@gmail.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Message-Id: <20241101133917.27634-11-Jonathan.Cameron@huawei.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
The properties of the requested set command cannot be established if
len_in is less than the size of the header.
Reported-by: Esifiel <esifiel@gmail.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Message-Id: <20241101133917.27634-10-Jonathan.Cameron@huawei.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
In cmd_features_set_feature() an offset + data size scheme is used to allow
for large features. Ensure this does not write beyond the end of the
buffers used to accumulate the full feature attribute set.
Reported-by: Esifiel <esifiel@gmail.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Message-Id: <20241101133917.27634-9-Jonathan.Cameron@huawei.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
If len_in is less than the minimum value allowed by the spec, then return
CXL_MBOX_INVALID_PAYLOAD_LENGTH.
Reported-by: Esifiel <esifiel@gmail.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Message-Id: <20241101133917.27634-8-Jonathan.Cameron@huawei.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Add a check that the requested offset + length does not go beyond the end
of the cel_log.
Whilst the cci->cel_log is large enough to include all possible CEL
entries, the guest might still ask for entries beyond the end of it.
Move the comment to this new check rather than before the check on the
type of log requested.
Reported-by: Esifiel <esifiel@gmail.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Message-Id: <20241101133917.27634-7-Jonathan.Cameron@huawei.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>