Commit Graph

3806 Commits

Author SHA1 Message Date
Peter Maydell
173c427eb5 Merge tag 'pull-request-2024-09-25' of https://gitlab.com/thuth/qemu into staging

* Convert more Avocado tests to the new functional test framework
* Clean up assert() statements, use g_assert_not_reached() when possible
* Improve output of the gitlab CI jobs

# -----BEGIN PGP SIGNATURE-----
#
# iQJFBAABCAAvFiEEJ7iIR+7gJQEY8+q5LtnXdP5wLbUFAmbz7xgRHHRodXRoQHJl
# ZGhhdC5jb20ACgkQLtnXdP5wLbWm6A//eVn+tzyyKCX/xdXlf7XyVpezvRpTFPOS
# HyO0WMkCf2kGmu6qYKx/fDZg86opdQzPLH2gPkuVrGOMZ0Z2630DjH0jNih8lL9Q
# J1oRX5YlU92chlzNmq59WB/j9CKd91ILtOoaPBuZkDob57yGEYVzCPqetVvF7L2+
# +rbnccrNPumGJFt035fxUGiGfgsmp28MHQzDwQdyr38uGjyNlqvqidfC8Vj1qzqP
# B7HvhGB/vkF0eHaanMt2el/ZuLKf+qeCi//F/CiXGMYnuKXyShA/Db6xvMElw1jB
# aQdwphP71IO+cxjJLaNjDHKGFstArsM/E21qlaSTBi+FTmPiwVULpVTiBmWsjhOh
# /klpdgRHf0hL2MciYKyOWgjlTocx3rEKjCTe2U5tpta9fp9CrlgMQotjDZIbohGI
# ULNahrW3Zmg4EmXDApfhYMXsQsSgWas9QSkmxzJzDp0VC7tf2Oq7RxeySrlw9MCx
# OG2qQY+rNcJ3NnpATjfAJpT1kg/IahDOCNHfLEaj1u13XVQIthVADvHwy5WxbwRP
# mwp3V9e9sUoznkM2eV646lzmkMim/WdYBF0YpT7eBs80+GoXZ0thx9IqWmwzX/ox
# rndBczVN+RY6PydJP40yljdvS7ArRT73wHqL6yKHfDpvFc4/p5mxTWwLQ3yJbXbE
# T3I+wtgfBU8=
# =FH7b
# -----END PGP SIGNATURE-----
# gpg: Signature made Wed 25 Sep 2024 12:08:08 BST
# gpg:                using RSA key 27B88847EEE0250118F3EAB92ED9D774FE702DB5
# gpg:                issuer "thuth@redhat.com"
# gpg: Good signature from "Thomas Huth <th.huth@gmx.de>" [full]
# gpg:                 aka "Thomas Huth <thuth@redhat.com>" [full]
# gpg:                 aka "Thomas Huth <huth@tuxfamily.org>" [full]
# gpg:                 aka "Thomas Huth <th.huth@posteo.de>" [unknown]
# Primary key fingerprint: 27B8 8847 EEE0 2501 18F3  EAB9 2ED9 D774 FE70 2DB5

* tag 'pull-request-2024-09-25' of https://gitlab.com/thuth/qemu: (44 commits)
  .gitlab-ci.d: Make separate collapsible log sections for build and test
  .gitlab-ci.d: Split build and test in cross build job templates
  scripts/checkpatch.pl: emit error when using assert(false)
  tests/qtest: remove return after g_assert_not_reached()
  qom: remove return after g_assert_not_reached()
  qobject: remove return after g_assert_not_reached()
  migration: remove return after g_assert_not_reached()
  hw/ppc: remove return after g_assert_not_reached()
  hw/pci: remove return after g_assert_not_reached()
  hw/net: remove return after g_assert_not_reached()
  hw/hyperv: remove return after g_assert_not_reached()
  include/qemu: remove return after g_assert_not_reached()
  tcg/loongarch64: remove break after g_assert_not_reached()
  fpu: remove break after g_assert_not_reached()
  target/riscv: remove break after g_assert_not_reached()
  target/arm: remove break after g_assert_not_reached()
  hw/tpm: remove break after g_assert_not_reached()
  hw/scsi: remove break after g_assert_not_reached()
  hw/net: remove break after g_assert_not_reached()
  hw/acpi: remove break after g_assert_not_reached()
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-28 12:34:38 +01:00
Pierrick Bouvier
200e25b140 target/arm: remove break after g_assert_not_reached()
This patch is part of a series that moves towards a consistent use of
g_assert_not_reached() rather than an ad hoc mix of different
assertion mechanisms.
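
As a minimal illustration of the pattern being cleaned up (a
hypothetical example, not code from this patch): g_assert_not_reached()
never returns, so a break or return placed after it is dead code.

  #include <glib.h>

  static const char *size_name(int bytes)
  {
      switch (bytes) {
      case 1:
          return "byte";
      case 4:
          return "word";
      default:
          g_assert_not_reached();
          /* A dead "break;" or "return NULL;" used to follow calls like
           * the one above; the series deletes such statements. */
      }
  }

  int main(void)
  {
      g_assert_cmpstr(size_name(4), ==, "word");
      return 0;
  }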

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Message-ID: <20240919044641.386068-22-pierrick.bouvier@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2024-09-24 13:53:35 +02:00
Philippe Mathieu-Daudé
b14d064962 license: Update deprecated SPDX tag LGPL-2.0+ to LGPL-2.0-or-later
The 'LGPL-2.0+' license identifier has been deprecated since license
list version 2.0rc2 [1] and replaced by the 'LGPL-2.0-or-later' [2]
tag.

[1] https://spdx.org/licenses/LGPL-2.0+.html
[2] https://spdx.org/licenses/LGPL-2.0-or-later.html

Mechanical patch running:

  $ sed -i -e s/LGPL-2.0+/LGPL-2.0-or-later/ \
    $(git grep -l 'SPDX-License-Identifier: LGPL-2.0+$')

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-09-20 10:11:59 +03:00
Michael Tokarev
5691f4778e mark <zlib.h> with for-crc32 in a consistent manner
In many cases, <zlib.h> is only included for the crc32 function,
and in some of them there's a comment saying that, but phrased
in different ways.  In one place (hw/net/rtl8139.c), another
#include had been added between the comment and the <zlib.h> include.

Put all such comments on the same line as the #include, make them
consistent, and also add a few missing comments, including in
hw/nvram/mac_nvram.c, which uses adler32 instead.

There are no code changes.
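
For reference, the standardised comment style looks like this (a small
self-contained example, not a file from the tree):

  #include <stdio.h>
  #include <string.h>
  #include <zlib.h> /* for crc32 */

  int main(void)
  {
      const unsigned char buf[] = "qemu";
      unsigned long crc = crc32(0L, Z_NULL, 0);  /* zlib's initial seed */
      crc = crc32(crc, buf, (unsigned int)strlen((const char *)buf));
      printf("crc32 = 0x%08lx\n", crc);
      return 0;
  }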

Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-09-20 08:06:56 +03:00
Peter Maydell
8676007eff target/arm: Correct ID_AA64ISAR1_EL1 value for neoverse-v1
The Neoverse-V1 TRM is a bit confused about the layout of the
ID_AA64ISAR1_EL1 register, and so its table 3-6 has the wrong value
for this ID register.  Trust instead section 3.2.74's list of which
fields are set.

This means that we stop incorrectly reporting FEAT_XS as present, and
now report the presence of FEAT_BF16.

Cc: qemu-stable@nongnu.org
Reported-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240917161337.3012188-1-peter.maydell@linaro.org
2024-09-19 13:17:21 +01:00
Richard Henderson
f21b07e272 target/arm: Convert scalar [US]QSHRN, [US]QRSHRN, SQSHRUN to decodetree
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-30-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19 12:58:58 +01:00
Richard Henderson
a3b6578f38 target/arm: Convert vector [US]QSHRN, [US]QRSHRN, SQSHRUN to decodetree
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-29-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19 12:58:58 +01:00
Richard Henderson
6e1ae741f9 target/arm: Convert SQSHL, UQSHL, SQSHLU (immediate) to decodetree
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-28-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19 12:58:58 +01:00
Richard Henderson
3e683f0a8c target/arm: Widen NeonGenNarrowEnvFn return to 64 bits
While these functions really do return a 32-bit value,
widening the return type means that we need do less
marshalling between TCG types.

Remove NeonGenNarrowEnvFn typedef; add NeonGenOne64OpEnvFn.
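
As a generic analogue of the marshalling point (plain C, not the QEMU
typedefs): a helper that returns only 32 bits forces the 64-bit caller
to widen the result afterwards, while a 64-bit return slot does not.

  #include <stdint.h>
  #include <stdio.h>

  typedef uint32_t narrow_ret32_fn(uint64_t src);
  typedef uint64_t narrow_ret64_fn(uint64_t src);

  static uint32_t narrow_lo32(uint64_t src) { return (uint32_t)src; }
  static uint64_t narrow_lo64(uint64_t src) { return (uint32_t)src; }

  int main(void)
  {
      narrow_ret32_fn *f32 = narrow_lo32;
      narrow_ret64_fn *f64 = narrow_lo64;
      uint64_t a = (uint64_t)f32(0x1122334455667788ull); /* extra widening */
      uint64_t b = f64(0x1122334455667788ull);           /* already 64-bit */
      printf("0x%llx 0x%llx\n", (unsigned long long)a, (unsigned long long)b);
      return 0;
  }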

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240912024114.1097832-27-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19 12:58:58 +01:00
Richard Henderson
ef2b80eb21 target/arm: Convert VQSHL, VQSHLU to gvec
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-26-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19 12:58:58 +01:00
Richard Henderson
7e5d5a3d8c target/arm: Convert handle_scalar_simd_shli to decodetree
This includes SHL and SLI.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-25-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19 12:58:58 +01:00
Richard Henderson
9c80de4884 target/arm: Convert handle_scalar_simd_shri to decodetree
This includes SSHR, USHR, SSRA, USRA, SRSHR, URSHR,
SRSRA, URSRA, SRI.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19 12:58:58 +01:00
Richard Henderson
fe5b8abe17 target/arm: Convert SHRN, RSHRN to decodetree
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-23-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19 12:58:57 +01:00
Richard Henderson
a597e55b7f target/arm: Split out subroutines of handle_shri_with_rndacc
There isn't a lot of commonality along the different paths of
handle_shri_with_rndacc.  Split them out to separate functions,
which will be usable during the decodetree conversion.

Simplify 64-bit rounding operations to not require double-word arithmetic.
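
A common way to get a rounded 64-bit right shift without double-word
arithmetic is to shift first and then add back the last bit shifted out
(a generic sketch of the idea, not necessarily the exact code added
here):

  #include <stdint.h>
  #include <assert.h>
  #include <stdio.h>

  /* Rounding (round-half-up) unsigned right shift, for 1 <= sh <= 63. */
  static uint64_t urshr64(uint64_t x, unsigned sh)
  {
      assert(sh >= 1 && sh <= 63);
      return (x >> sh) + ((x >> (sh - 1)) & 1);
  }

  int main(void)
  {
      assert(urshr64(7, 1) == 4);   /* 7/2 = 3.5 rounds up to 4 */
      assert(urshr64(6, 2) == 2);   /* 6/4 = 1.5 rounds up to 2 */
      printf("ok\n");
      return 0;
  }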

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-22-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19 12:58:57 +01:00
Richard Henderson
c6bc6966ad target/arm: Push tcg_rnd into handle_shri_with_rndacc
We always pass the same value for round; compute it
within common code.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19 12:58:57 +01:00
Richard Henderson
6ed32dd495 target/arm: Convert SSHLL, USHLL to decodetree
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19 12:58:57 +01:00
Richard Henderson
102f062e6e target/arm: Use {, s}extract in handle_vec_simd_wshli
Combine the right shift with the extension via
the tcg extract operations.
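
In plain C, a signed field extract isolates the field and sign-extends
it in one step, which is the effect of fusing the right shift with the
extension (a sketch of the semantics, not the translator code):

  #include <stdint.h>
  #include <stdio.h>

  /* Extract 'length' bits starting at bit 'start' and sign-extend them. */
  static int64_t sextract64_demo(uint64_t value, unsigned start, unsigned length)
  {
      return (int64_t)(value << (64 - length - start)) >> (64 - length);
  }

  int main(void)
  {
      /* The 8-bit field at bit 8 of 0x8a00 is 0x8a, i.e. -118 as signed. */
      printf("%lld\n", (long long)sextract64_demo(0x8a00, 8, 8));
      return 0;
  }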

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19 12:58:57 +01:00
Richard Henderson
583d69a746 target/arm: Convert handle_vec_simd_shli to decodetree
This includes SHL and SLI.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19 12:58:57 +01:00
Richard Henderson
6e74165564 target/arm: Convert handle_vec_simd_shri to decodetree
This includes SSHR, USHR, SSRA, USRA, SRSHR, URSHR, SRSRA, URSRA, SRI.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19 12:58:57 +01:00
Richard Henderson
da457c9356 target/arm: Fix whitespace near gen_srshr64_i64
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19 12:58:57 +01:00
Richard Henderson
00bcab5bad target/arm: Introduce gen_gvec_sshr, gen_gvec_ushr
Handle the two special cases within these new
functions instead of higher in the call stack.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19 12:58:57 +01:00
Richard Henderson
500928f242 target/arm: Convert MOVI, FMOV, ORR, BIC (vector immediate) to decodetree
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19 12:58:57 +01:00
Richard Henderson
c777e73cbe target/arm: Convert FMOVI (scalar, immediate) to decodetree
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19 12:58:57 +01:00
Richard Henderson
3d44e070a6 target/arm: Convert FMAXNMV, FMINNMV, FMAXV, FMINV to decodetree
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19 12:58:57 +01:00
Richard Henderson
cc7ece7216 target/arm: Convert ADDV, *ADDLV, *MAXV, *MINV to decodetree
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19 12:58:56 +01:00
Richard Henderson
d944e04961 target/arm: Simplify do_reduction_op
Use simple shift and add instead of ctpop, ctz, shift and mask.
Unlike SVE, there is no predicate to disable elements.
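
A generic sketch of a shift-and-add style reduction over a power-of-two
element count (illustration of the idea only, not the translator code):

  #include <stdint.h>
  #include <stdio.h>

  /* Pairwise tree reduction: halve the stride each pass; no ctpop/ctz. */
  static int32_t reduce_add(int32_t *el, int n)   /* n is a power of two */
  {
      for (int stride = n >> 1; stride > 0; stride >>= 1) {
          for (int i = 0; i < stride; i++) {
              el[i] += el[i + stride];
          }
      }
      return el[0];
  }

  int main(void)
  {
      int32_t v[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
      printf("%d\n", reduce_add(v, 8));   /* prints 36 */
      return 0;
  }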

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19 12:58:56 +01:00
Richard Henderson
a29e2c7d33 target/arm: Convert UZP, TRN, ZIP to decodetree
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19 12:58:56 +01:00
Richard Henderson
5dd7318f24 target/arm: Convert TBL, TBX to decodetree
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19 12:58:56 +01:00
Richard Henderson
9c8f7da04b target/arm: Convert EXT to decodetree
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19 12:58:56 +01:00
Richard Henderson
88f26451c9 target/arm: Use tcg_gen_extract2_i64 for EXT
The extract2 tcg op performs the same operation
as the do_ext64 function.
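
In plain C terms, for 0 < pos < 64 the extract2 operation yields the
64-bit field at bit offset pos of the 128-bit concatenation hi:lo,
which is what do_ext64 computed by hand (semantics sketch only):

  #include <stdint.h>
  #include <assert.h>
  #include <stdio.h>

  static uint64_t extract2_64(uint64_t lo, uint64_t hi, unsigned pos)
  {
      assert(pos > 0 && pos < 64);
      return (lo >> pos) | (hi << (64 - pos));
  }

  int main(void)
  {
      /* Taking the field at bit 8 of hi:lo pulls in the low byte of hi. */
      uint64_t r = extract2_64(0x00000000000000ffull, 0x1122334455667788ull, 8);
      printf("0x%016llx\n", (unsigned long long)r);  /* 0x8800000000000000 */
      return 0;
  }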

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19 12:58:56 +01:00
Richard Henderson
ee36a772c0 target/arm: Use cmpsel in gen_sshl_vec
Instead of cmp+and or cmp+andc, use cmpsel.  This will
be better for hosts that use predicate registers for cmp.
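
A scalar, per-element analogue of the difference (plain C, not the
vector TCG code): cmp+and builds an all-ones/all-zeroes mask and then
ANDs it in, while compare-and-select picks one of two operands
directly.

  #include <stdint.h>
  #include <stdio.h>

  static int64_t sel_by_cmp_and(int64_t a, int64_t b, int64_t val)
  {
      int64_t mask = -(int64_t)(a < b);   /* all ones when a < b, else 0 */
      return val & mask;                  /* cmp, then and */
  }

  static int64_t sel_by_cmpsel(int64_t a, int64_t b, int64_t val)
  {
      return a < b ? val : 0;             /* one compare-and-select */
  }

  int main(void)
  {
      printf("%lld %lld\n", (long long)sel_by_cmp_and(1, 2, 42),
             (long long)sel_by_cmpsel(1, 2, 42));   /* 42 42 */
      return 0;
  }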

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19 12:58:56 +01:00
Richard Henderson
c17e35b893 target/arm: Use cmpsel in gen_ushl_vec
Instead of cmp+and or cmp+andc, use cmpsel.  This will
be better for hosts that use predicate registers for cmp.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19 12:58:56 +01:00
Richard Henderson
04e824eac9 target/arm: Replace tcg_gen_dupi_vec with constants in translate-sve.c
Instead of copying a constant into a temporary with dupi,
use a vector constant directly.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19 12:58:56 +01:00
Richard Henderson
143e179c84 target/arm: Replace tcg_gen_dupi_vec with constants in gengvec.c
Instead of copying a constant into a temporary with dupi,
use a vector constant directly.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-19 12:58:56 +01:00
Alireza Sanaee
676624d757 target/arm/tcg: refine cache descriptions with a wrapper
This patch allows for easier manipulation of the cache description
register, CCSIDR, which is helpful for testing as well. Currently,
numbers get hard-coded and might be prone to errors.

Therefore, this patch adds a wrapper to describe the caches of the
different types of CPUs available in tcg. One function, `make_ccsidr`,
supports two cases via a FORMAT parameter that can be LEGACY or
CCIDX, which determines the specification of the register that is used.

For CCSIDR register, 32 bit version follows specification [1].
Conversely, 64 bit version follows specification [2].

[1] B4.1.19, ARM Architecture Reference Manual ARMv7-A and ARMv7-R
edition, https://developer.arm.com/documentation/ddi0406
[2] D23.2.29, ARM Architecture Reference Manual for A-profile Architecture,
https://developer.arm.com/documentation/ddi0487/latest/
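
A rough sketch of the two encodings such a wrapper has to emit, with
field positions taken from the references above (the real make_ccsidr()
signature and edge-case handling may differ):

  #include <stdint.h>
  #include <stdio.h>

  /* The LineSize field encodes log2(line size in 4-byte words) - 2. */
  static unsigned linesize_field(unsigned linesize_bytes)
  {
      return __builtin_ctz(linesize_bytes) - 4;
  }

  /* Legacy 32-bit CCSIDR: NumSets[27:13], Associativity[12:3], LineSize[2:0] */
  static uint64_t ccsidr_legacy(unsigned assoc, unsigned sets, unsigned linesize)
  {
      return ((uint64_t)(sets - 1) << 13) | ((assoc - 1) << 3)
             | linesize_field(linesize);
  }

  /* FEAT_CCIDX 64-bit CCSIDR: NumSets[55:32], Associativity[23:3], LineSize[2:0] */
  static uint64_t ccsidr_ccidx(unsigned assoc, unsigned sets, unsigned linesize)
  {
      return ((uint64_t)(sets - 1) << 32) | ((uint64_t)(assoc - 1) << 3)
             | linesize_field(linesize);
  }

  int main(void)
  {
      /* Example: 64 KB, 4-way, 64-byte lines -> 256 sets. */
      printf("legacy=0x%llx ccidx=0x%llx\n",
             (unsigned long long)ccsidr_legacy(4, 256, 64),
             (unsigned long long)ccsidr_ccidx(4, 256, 64));
      return 0;
  }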

Signed-off-by: Alireza Sanaee <alireza.sanaee@huawei.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240903144550.280-1-alireza.sanaee@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-13 15:31:47 +01:00
Danny Canter
d54ffa54fb hvf: arm: Implement and use hvf_get_physical_address_range
This patch's main focus is to use the previously added
hvf_get_physical_address_range to inform VM creation
about the IPA size we need for the VM, so we can extend
the default 36b IPA size and support VMs with 64+GB of
RAM. This is done by freezing the memory map, computing
the highest GPA and then (depending on if the platform
supports an IPA size that large) telling the kernel to
use a size >= for the VM. In pursuit of this a couple of
things related to how we handle the physical address range
we expose to guests were altered, but for an explanation of
what we were doing:

Today, to get the IPA size we were reading id_aa64mmfr0_el1's
PARange field from a newly made vcpu. Unfortunately, HVF just
returns the host's PARange directly for the initial value and
not the IPA size that will actually back the VM, so we believe
we have much more address space than we actually do.

Starting in macOS 13.0 some APIs were introduced to be able to
query the maximum IPA size the kernel supports, and to set the IPA
size for a given VM. However, this still has a couple of issues
on < macOS 15. Up until macOS 15 (and if the hardware supported
it) the max IPA size was 39 bits which is not a valid PARange
value, so we can't clamp down what we advertise in the vcpu's
id_aa64mmfr0_el1 to our IPA size. Starting in macOS 15 however,
the maximum IPA size is 40 bits (if it's supported in the hardware
as well) which is also a valid PARange value so we can set our IPA
size to the maximum as well as clamp down the PARange we advertise
to the guest. This allows VMs with 64+ GB of RAM and should fix the
oddness of the PARange situation as well.

Signed-off-by: Danny Canter <danny_canter@apple.com>
Message-id: 20240828111552.93482-4-danny_canter@apple.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-13 15:31:47 +01:00
Danny Canter
2c760670af hvf: Split up hv_vm_create logic per arch
This is preliminary work to split up hv_vm_create
logic per platform so we can support creating VMs
with > 64GB of RAM on Apple Silicon machines. This
is done via ARM HVF's hv_vm_config_create() (and
other APIs that modify this config that will be
coming in future patches). This should have no
behavioral difference at all as hv_vm_config_create()
just assigns the same default values as if you just
passed NULL to the function.
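
A minimal sketch of the shape of the arm64 path after the split, using
only the API named above; per the commit message the explicit config
currently behaves exactly like passing NULL (macOS/arm64 assumption,
not the actual accel/hvf code):

  #include <Hypervisor/Hypervisor.h>

  static hv_return_t create_default_vm(void)
  {
      hv_vm_config_t config = hv_vm_config_create();  /* defaults == NULL */
      return hv_vm_create(config);
  }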

Signed-off-by: Danny Canter <danny_canter@apple.com>
Message-id: 20240828111552.93482-3-danny_canter@apple.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-09-13 15:31:46 +01:00
Gustavo Romero
0298229ad6 gdbstub: Add support for MTE in system mode
This commit makes handle_q_memtag, handle_q_isaddresstagged, and
handle_Q_memtag stubs build for system mode, allowing all GDB
'memory-tag' subcommands to work with QEMU gdbstub on aarch64 system
mode.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/620
Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240906143316.657436-3-gustavo.romero@linaro.org>
[AJB: add #ifdef CONFIG_TCG guards]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20240910173900.4154726-8-alex.bennee@linaro.org>
2024-09-10 23:33:51 +01:00
Gustavo Romero
f611060531 gdbstub: Use specific MMU index when probing MTE addresses
Use cpu_mmu_index() to determine the specific translation regime (MMU
index) before probing addresses using allocation_tag_mem_probe().

Currently, the MMU index is hardcoded to 0 and only works for user mode.
By obtaining the specific MMU index according to the translation regime,
future use of the stubs relying on allocation_tag_mem_probe in other
regimes will be possible, like in EL1.

This commit also changes the ptr_size value passed to
allocation_tag_mem_probe() from 8 to 1. The ptr_size parameter actually
represents the number of bytes in the memory access (which can be as
small as 1 byte), rather than the number of bits used in the address
space pointed to by ptr.

Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240906143316.657436-2-gustavo.romero@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20240910173900.4154726-7-alex.bennee@linaro.org>
2024-09-10 23:33:48 +01:00
Peter Maydell
76dd36660b target/arm: Correct names of VFP VFNMA and VFNMS insns
In vfp.decode we have the names of the VFNMA and VFNMS instructions
the wrong way around.  The architecture says that bit 6 is the 'op'
bit, which is 1 for VFNMA and 0 for VFNMS, but we label these two
lines of decode the other way around.  This doesn't cause any
user-visible problem because in the handling of these functions in
translate-vfp.c we give VFNMA the behaviour specified for VFNMS and
vice-versa, but it's confusing when reading the code.

Switch the names of the VFP VFNMA and VFNMS instructions in
the decode file and flip the behaviour also.

NB: the instructions VFMA and VFMS *are* decoded with op=0 for
VFMA and op=1 for VFMS; the confusion probably arose because
we assumed VFNMA and VFNMS to be the same way around.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2536
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240830152156.2046590-1-peter.maydell@linaro.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2024-09-05 13:12:37 +01:00
Peter Maydell
5d1187b308 target/arm: Enable FEAT_EBF16 in the "max" CPU
Now that we've implemented the required behaviour for FEAT_EBF16, we
can enable it for the "max" CPU type, list it in our documentation,
and delete a TODO comment about it being missing.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2024-09-05 13:12:36 +01:00
Peter Maydell
0e1850182a target/arm: Implement FPCR.EBF=1 semantics for bfdotadd()
Implement the FPCR.EBF=1 semantics for bfdotadd() operations:
 * is_ebf() sets up fpst and fpst_odd
 * bfdotadd_ebf() implements the fused paired-multiply-and-add
   operation that we need

The paired-multiply-and-add is similar to f16_dotadd() and
we use the same trick here as in that function, but the inputs
here are bfloat16 rather than float16.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2024-09-05 13:12:36 +01:00
Peter Maydell
09b0d9e0ad target/arm: Prepare bfdotadd() callers for FEAT_EBF support
We use bfdotadd() in four callsites for various helper functions. Currently
this all assumes that we have the FPCR.EBF=0 semantics. For FPCR.EBF=1
we will need to:
 * call a different routine to bfdotadd() because we need to do a
   fused multiply-add rather than separate multiply and add steps
 * use a different float_status that honours the FPCR rounding mode
   and denormal-flushing fields
 * pass in an extra float_status that has been set up to perform
   round-to-odd rounding

To prepare for this, refactor all the callsites so that instead of
   for (...) {
       x = bfdotadd(...);
   }

they are:
   float_status fpst, fpst_odd;
   if (is_ebf(env, &fpst, &fpst_odd)) {
       for (...) {
           x = bfdotadd_ebf(..., &fpst, &fpst_odd);
       }
   } else {
       for (...) {
           x = bfdotadd(..., &fpst);
       }
   }

For the moment the is_ebf() function always returns false, sets up
fpst for EBF=0 semantics and never sets up fpst_odd; bfdotadd_ebf()
will assert if called. We'll fill in the handling for EBF=1 in the
next commit.

This change should be a zero-behaviour-change refactor.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2024-09-05 13:12:36 +01:00
Peter Maydell
2da2d7dc90 target/arm: Pass env pointer through to gvec_bfmmla helper
Pass the env pointer through to the gvec_bfmmla helper,
so we can use it to add support for FEAT_EBF16.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2024-09-05 13:12:36 +01:00
Peter Maydell
c8d644b951 target/arm: Pass env pointer through to gvec_bfdot_idx helper
Pass the env pointer through to the gvec_bfdot_idx helper,
so we can use it to add support for FEAT_EBF16.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2024-09-05 13:12:35 +01:00
Peter Maydell
75a6784dad target/arm: Pass env pointer through to gvec_bfdot helper
Pass the env pointer through to the gvec_bfdot helper,
so we can use it to add support for FEAT_EBF16.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2024-09-05 13:12:35 +01:00
Peter Maydell
ecabcfa47c target/arm: Pass env pointer through to sme_bfmopa helper
To implement the FEAT_EBF16 semantics, we are going to need
the CPUARMState env pointer in every helper function which calls
bfdotadd().

Pass the env pointer through from generated code to the sme_bfmopa
helper. (We'll add the code that uses it when we've adjusted
all the helpers to have access to the env pointer.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2024-09-05 13:12:35 +01:00
Peter Maydell
8b0898f8dd target/arm: Allow setting the FPCR.EBF bit for FEAT_EBF16
FEAT_EBF16 adds one new bit to the FPCR floating point control
register.  Allow this bit to be read and written when the ID
registers indicate the presence of the feature.

Note that because this new bit is not in FPSCR_FPCR_MASK the bit is
not visible in the AArch32 FPSCR, and FPSCR writes do not affect it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2024-09-05 13:12:35 +01:00
Peter Maydell
4c2c047469 target/arm: Fix usage of MMU indexes when EL3 is AArch32
Our current usage of MMU indexes when EL3 is AArch32 is confused.
Architecturally, when EL3 is AArch32, all Secure code runs under the
Secure PL1&0 translation regime:
 * code at EL3, which might be Mon, or SVC, or any of the
   other privileged modes (PL1)
 * code at EL0 (Secure PL0)

This is different from when EL3 is AArch64, in which case EL3 is its
own translation regime, and EL1 and EL0 (whether AArch32 or AArch64)
have their own regime.

We claimed to be mapping Secure PL1 to our ARMMMUIdx_EL3, but didn't
do anything special about Secure PL0, which meant it used the same
ARMMMUIdx_EL10_0 that NonSecure PL0 does.  This resulted in a bug
where arm_sctlr() incorrectly picked the NonSecure SCTLR as the
controlling register when in Secure PL0, which meant we were
spuriously generating alignment faults because we were looking at the
wrong SCTLR control bits.

The use of ARMMMUIdx_EL3 for Secure PL1 also resulted in the bug that
we wouldn't honour the PAN bit for Secure PL1, because there's no
equivalent _PAN mmu index for it.

We could fix this in one of two ways:
 * The most straightforward is to add new MMU indexes EL30_0,
   EL30_3, EL30_3_PAN to correspond to "Secure PL1&0 at PL0",
   "Secure PL1&0 at PL1", and "Secure PL1&0 at PL1 with PAN".
   This matches how we use indexes for the AArch64 regimes, and
   preserves properties like being able to determine the privilege
   level from an MMU index without any other information. However
   it would add two MMU indexes (we can share one with ARMMMUIdx_EL3),
   and we are already using 14 of the 16 that the core TLB code permits.

 * The more complicated approach is the one we take here. We use
   the same MMU indexes (E10_0, E10_1, E10_1_PAN) for Secure PL1&0
   as we do for NonSecure PL1&0. This saves on MMU indexes, but
   means we need to check in some places whether we're in the
   Secure PL1&0 regime or not before we interpret an MMU index.

The changes in this commit were created by auditing all the places
where we use specific ARMMMUIdx_ values, and checking whether they
needed to be changed to handle the new index value usage.

Note for potential stable backports: taking also the previous
(comment-change-only) commit might make the backport easier.

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2326
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Bernhard Beschow <shentey@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240809160430.1144805-3-peter.maydell@linaro.org
2024-08-13 11:44:53 +01:00
Peter Maydell
150c24f34e target/arm: Update translation regime comment for new features
We have a long comment describing the Arm architectural translation
regimes and how we map them to QEMU MMU indexes.  This comment has
got a bit out of date:

 * FEAT_SEL2 allows Secure EL2 and corresponding new regimes
 * FEAT_RME introduces Realm state and its translation regimes
 * We now model the Cortex-R52 so that is no longer a hypothetical
 * We separated Secure Stage 2 and NonSecure Stage 2 MMU indexes
 * We have an MMU index per physical address space

Add the missing pieces so that the list of architectural translation
regimes matches the Arm ARM, and the list and count of QEMU MMU
indexes in the comment matches the enum.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Bernhard Beschow <shentey@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240809160430.1144805-2-peter.maydell@linaro.org
2024-08-13 11:44:53 +01:00