Commit Graph

3565 Commits

Author SHA1 Message Date
Peter Maydell
dfff1000fe target/arm: Implement Neoverse N2 CPU model
Implement a model of the Neoverse N2 CPU. This is an Armv9.0-A
processor very similar to the Cortex-A710. The differences are:
 * no FEAT_EVT
 * FEAT_DGH (data gathering hint)
 * FEAT_NV (not yet implemented in QEMU)
 * Statistical Profiling Extension (not implemented in QEMU)
 * 48 bit physical address range, not 40
 * CTR_EL0.DIC = 1 (no explicit icache cleaning needed)
 * PMCR_EL0.N = 6 (always 6 PMU counters, not 20)

Because it has 48-bit physical address support, we can use
this CPU in the sbsa-ref board as well as the virt board.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20230915185453.1871167-3-peter.maydell@linaro.org
2023-10-27 11:41:13 +01:00
Peter Maydell
3bcc53980b target/arm: Correct minor errors in Cortex-A710 definition
Correct a couple of minor errors in the Cortex-A710 definition:
 * ID_AA64DFR0_EL1.DebugVer is 9 (indicating Armv8.4 debug architecture)
 * ID_AA64ISAR1_EL1.APA is 5 (indicating more PAuth support)
 * there is an IMPDEF CPUCFR_EL1, like that on the Neoverse-N1

Fixes: e3d45c0a89 ("target/arm: Implement cortex-a710")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20230915185453.1871167-2-peter.maydell@linaro.org
2023-10-27 11:41:13 +01:00
Richard Henderson
2f02c14b21 target/arm: Use tcg_gen_ext_i64
The ext_and_shift_reg helper does this plus a shift.
The non-zero check for the shift count duplicates the one done
within tcg_gen_shli_i64.
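
A rough sketch of the simplification (the wrapper name below is
illustrative, not the patch's own code):

  static void ext_and_shift_reg_sketch(TCGv_i64 dst, TCGv_i64 src,
                                       MemOp ext_type, int shift)
  {
      /* One call replaces the per-type switch over [US]XT{B,H,W,X}... */
      tcg_gen_ext_i64(dst, src, ext_type);
      /* ...and the shift can be emitted unconditionally, because
       * tcg_gen_shli_i64() already turns a zero shift into a plain move. */
      tcg_gen_shli_i64(dst, dst, shift);
  }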

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-10-22 16:35:43 -07:00
Peter Maydell
3a45f4f537 target/arm/arm-powerctl: Correctly init CPUs when powered on to lower EL
The code for powering on a CPU in arm-powerctl.c has two separate
use cases:
 * emulation of a real hardware power controller
 * emulation of firmware interfaces (primarily PSCI) with
   CPU on/off APIs

For the first case, we only need to reset the CPU and set its
starting PC and X0.  For the second case, because we're emulating the
firmware we need to ensure that it's in the state that the firmware
provides.  In particular, when we reset to a lower EL than the
highest one we are emulating, we need to put the CPU into a state
that permits correct running at that lower EL.  We already do a
little of this in arm-powerctl.c (for instance we set SCR_HCE to
enable the HVC insn) but we don't do enough of it.  This means that
in the case where we are emulating EL3 but also providing emulated
PSCI the guest will crash when a secondary core tries to use a
feature that needs an SCR_EL3 bit to be set, such as MTE or PAuth.

The hw/arm/boot.c code also has to support this "start guest code in
an EL that's lower than the highest emulated EL" case in order to do
direct guest kernel booting; it has all the necessary initialization
code to set the SCR_EL3 bits.  Pull the relevant boot.c code out into
a separate function so we can share it between there and
arm-powerctl.c.

This refactoring has a few code changes that look like they
might be behaviour changes but aren't:
 * if info->secure_boot is false and info->secure_board_setup is
   true, then the old code would start the first CPU in Hyp
   mode but without changing SCR.NS and NSACR.{CP11,CP10}.
   This was wrong behaviour because there's no such thing
   as Secure Hyp mode. The new code will leave the CPU in SVC.
   (There is no board which sets secure_boot to false and
   secure_board_setup to true, so this isn't a behaviour
   change for any of our boards.)
 * we don't explicitly clear SCR.NS when arm-powerctl.c
   does a CPU-on to EL3. This was a no-op because CPU reset
   will reset to NS == 0.

And some real behaviour changes:
 * we no longer set HCR_EL2.RW when booting into EL2: the guest
   can and should do that themselves before dropping into their
   EL1 code. (arm-powerctl and boot did this differently; I
   opted to use the logic from arm-powerctl, which only sets
   HCR_EL2.RW when it's directly starting the guest in EL1,
   because it's more correct, and I don't expect guests to be
   accidentally depending on our having set the RW bit for them.)
 * if we are booting a CPU into AArch32 Secure SVC then we won't
   set SCR.HCE any more. This affects only the vexpress-a15 and
   raspi2b machine types. Guests booting in this case will either:
    - be able to set SCR.HCE themselves as part of moving from
      Secure SVC into NS Hyp mode
    - will move from Secure SVC to NS SVC, and won't care about
      behaviour of the HVC insn
    - will stay in Secure SVC, and won't care about HVC
 * on an arm-powerctl CPU-on we will now set the SCR bits for
   pauth/mte/sve/sme/hcx/fgt features

The first two of these are very minor and I don't expect guest
code to trip over them, so I didn't judge it worth convoluting
the code in an attempt to keep exactly the same boot.c behaviour.
The third change fixes issue 1899.
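
A minimal sketch of the shared idea, using the constant and feature-test
names from target/arm/cpu.h (the function name is hypothetical, and
several of the features listed above are omitted for brevity):

  static void sketch_enable_feature_bits_for_lower_el(ARMCPU *cpu)
  {
      CPUARMState *env = &cpu->env;

      if (cpu_isar_feature(aa64_pauth, cpu)) {
          env->cp15.scr_el3 |= SCR_API | SCR_APK;  /* pointer authentication */
      }
      if (cpu_isar_feature(aa64_mte, cpu)) {
          env->cp15.scr_el3 |= SCR_ATA;            /* allocation tags */
      }
      if (cpu_isar_feature(aa64_hcx, cpu)) {
          env->cp15.scr_el3 |= SCR_HXEN;           /* HCRX_EL2 accesses */
      }
      if (cpu_isar_feature(aa64_fgt, cpu)) {
          env->cp15.scr_el3 |= SCR_FGTEN;          /* fine-grained traps */
      }
  }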

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1899
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230926155619.4028618-1-peter.maydell@linaro.org
2023-10-19 14:32:13 +01:00
Peter Maydell
30722e0445 target/arm/common-semi-target.h: Remove unnecessary boot.h include
The hw/arm/boot.h include in common-semi-target.h is not actually
needed, and it's a bit odd because it pulls a hw/arm header into a
target/arm file.

This include was originally needed because the semihosting code used
the arm_boot_info struct to get the base address of the RAM in system
emulation, to use in a (bad) heuristic for the return values for the
SYS_HEAPINFO semihosting call.  We've since overhauled how we
calculate the HEAPINFO values in system emulation, and the code no
longer uses the arm_boot_info struct.

Remove the now-redundant include line, and instead directly include
the cpu-qom.h header that we were previously getting via boot.h.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230925112219.3919261-1-peter.maydell@linaro.org
2023-10-19 14:32:13 +01:00
Peter Maydell
4fd79a96ea target/arm/kvm64.c: Remove unused include
The include of hw/arm/virt.h in kvm64.c is unnecessary and also a
layering violation since the generic KVM code shouldn't need to know
anything about board-specifics.  The include line is an accidental
leftover from commit 15613357ba, where we cleaned up the code
to not depend on virt board internals but forgot to also remove the
now-redundant include line.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230925110429.3917202-1-peter.maydell@linaro.org
2023-10-19 14:32:13 +01:00
Peter Maydell
3d80bbf1f6 target/arm: Implement FEAT_HPMN0
FEAT_HPMN0 is a small feature which defines that it is valid for
MDCR_EL2.HPMN to be set to 0, meaning "no PMU event counters provided
to an EL1 guest" (previously this setting was reserved). QEMU's
implementation almost gets HPMN == 0 right, but we need to fix
one check in pmevcntr_is_64_bit(). That is enough for us to
advertise the feature in the 'max' CPU.

(We don't need to make the behaviour conditional on feature
presence, because the FEAT_HPMN0 behaviour is within the range
of permitted UNPREDICTABLE behaviour for a non-FEAT_HPMN0
implementation.)
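
A sketch of the ownership rule the fix needs to respect (the helper name
is invented; the real change is a one-line condition fix inside
pmevcntr_is_64_bit()):

  static bool sketch_counter_is_hyp_owned(CPUARMState *env, int counter)
  {
      int hpmn = extract64(env->cp15.mdcr_el2, 0, 5);  /* MDCR_EL2.HPMN */

      /* With FEAT_HPMN0, hpmn == 0 is a valid setting meaning "EL2 owns
       * every counter", so it must not be special-cased. */
      return arm_is_el2_enabled(env) && counter >= hpmn;
  }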

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230921185445.3339214-1-peter.maydell@linaro.org
2023-10-19 14:32:13 +01:00
Peter Maydell
a530e470ea target/arm: Permit T32 LDM with single register
For the Thumb T32 encoding of LDM, if only a single register is
specified in the register list this instruction is UNPREDICTABLE,
with the following choices:
 * instruction UNDEFs
 * instruction is a NOP
 * instruction loads a single register
 * instruction loads an unspecified set of registers

Currently we choose to UNDEF (a behaviour chosen in commit
4b222545db in 2019; previously we treated it as "load the
specified single register").

Unfortunately there is real world code out there (which shipped in at
least Android 11, 12 and 13) which incorrectly uses this
UNPREDICTABLE insn on the assumption that it does a single register
load, which is (presumably) what it happens to do on real hardware,
and is also what it does on the equivalent A32 encoding.

Revert to the pre-4b222545dbf30 behaviour of not UNDEFing
for this T32 encoding.
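
A sketch of what the relaxed translation amounts to (function and
argument names follow translate.c from memory and are not quoted from
the patch):

  static bool trans_LDM_t32_sketch(DisasContext *s, arg_ldst_block *a)
  {
      /* The shared do_ldm() helper takes a minimum register-list size;
       * requiring only 1 register (rather than 2) makes the
       * single-register UNPREDICTABLE case load that register,
       * matching the A32 encoding and real hardware. */
      return do_ldm(s, a, 1);
  }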

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1799
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230927101853.39288-1-peter.maydell@linaro.org
2023-10-19 14:32:13 +01:00
Cornelia Huck
40d45b85e0 arm/kvm: convert to kvm_get_one_reg
We can neaten the code by switching the callers that work on a
CPUState to the kvm_get_one_reg function.

Reviewed-by: Gavin Shan <gshan@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20231010142453.224369-3-cohuck@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-10-19 14:32:13 +01:00
Cornelia Huck
6c8b9a74bf arm/kvm: convert to kvm_set_one_reg
We can neaten the code by switching to the kvm_set_one_reg function.

Reviewed-by: Gavin Shan <gshan@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20231010142453.224369-2-cohuck@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-10-19 14:32:13 +01:00
Michal Orzel
d01448c79d target/arm: Fix CNTPCT_EL0 trapping from EL0 when HCR_EL2.E2H is 0
On an attempt to access CNTPCT_EL0 from EL0 using a guest running on top
of Xen, a trap from EL2 was observed which is something not reproducible
on HW (also, Xen does not trap accesses to physical counter).

This is because gt_counter_access() checks for an incorrect bit (1
instead of 0) of CNTHCTL_EL2 if HCR_EL2.E2H is 0 and access is made to
physical counter. Refer ARM ARM DDI 0487J.a, D19.12.2:
When HCR_EL2.E2H is 0:
 - EL1PCTEN, bit [0]: refers to physical counter
 - EL1PCEN, bit [1]: refers to physical timer registers

Drop the entire "if (hcr & HCR_E2H) {...} else {...}" block from the
EL0 case and fall through to the EL1 case, given that once the check
uses the correct bit, the handling is the same.
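
A sketch of the corrected access check (simplified: it ignores the EL1
fall-through and uses target/arm helper names purely for illustration):

  static CPAccessResult sketch_el0_physcnt_access(CPUARMState *env)
  {
      if (!(arm_hcr_el2_eff(env) & HCR_E2H)
          && !extract64(env->cp15.cnthctl_el2, 0, 1)) { /* EL1PCTEN, bit [0] */
          return CP_ACCESS_TRAP_EL2;
      }
      return CP_ACCESS_OK;
  }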

Fixes: 5bc8437136 ("target/arm: Update timer access for VHE")
Signed-off-by: Michal Orzel <michal.orzel@amd.com>
Tested-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Message-id: 20230928094404.20802-1-michal.orzel@amd.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-10-19 14:32:12 +01:00
Philippe Mathieu-Daudé
85c90d45f6 hw/arm/exynos4210: Get arm_boot_info declaration from 'hw/arm/boot.h'
struct arm_boot_info is declared in "hw/arm/boot.h".
By including the correct header we don't need to declare
it again in "target/arm/cpu-qom.h".

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231013130214.95742-1-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-10-19 13:01:52 +01:00
Akihiko Odaki
dd2f7e2974 target/arm: Remove references to gdb_has_xml
GDB has had XML support since 6.7, which was released in 2007.
It's time to remove support for old GDB versions without XML support.

Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230912224107.29669-10-akihiko.odaki@daynix.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20231009164104.369749-17-alex.bennee@linaro.org>
2023-10-11 08:46:33 +01:00
Akihiko Odaki
a650683871 hw/core/cpu: Return static value with gdb_arch_name()
All implementations of gdb_arch_name() return dynamic duplicates of
static strings. It's also unlikely that there will ever be an
implementation of gdb_arch_name() that returns a truly dynamic value,
given that the function returns a well-known identifier. Qualify the
value returned by gdb_arch_name() with const and make all of its
implementations return static strings.
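
A sketch of the shape of the change for one implementation (the Arm
one, paraphrased rather than quoted):

  /* before: each hook returned a heap copy the caller had to free */
  static gchar *gdb_arch_name_old(CPUState *cs)
  {
      return g_strdup("aarch64");
  }

  /* after: the hook is const-qualified and returns the literal directly */
  static const gchar *gdb_arch_name_new(CPUState *cs)
  {
      return "aarch64";
  }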

Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230912224107.29669-8-akihiko.odaki@daynix.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20231009164104.369749-15-alex.bennee@linaro.org>
2023-10-11 08:46:33 +01:00
Akihiko Odaki
48de646280 target/arm: Move the reference to arm-core.xml
Some subclasses overwrite the gdb_core_xml_file member but others don't.
Always initialize the member in the subclasses for consistency.

This especially helps for AArch64; in a following change, the file
specified by gdb_core_xml_file is always looked up even if it's going to
be overwritten later. Looking up arm-core.xml results in an error as
it will not be embedded in the AArch64 build.

Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230912224107.29669-7-akihiko.odaki@daynix.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20231009164104.369749-14-alex.bennee@linaro.org>
2023-10-11 08:46:33 +01:00
Philippe Mathieu-Daudé
01c85e60a4 meson: Rename target_softmmu_arch -> target_system_arch
Finish the conversion started with commit de6cd7599b
("meson: Replace softmmu_ss -> system_ss"). If the
$target_type is 'system', then use the target_system_arch[]
source set :)

Mechanical change doing:

  $ sed -i -e s/target_softmmu_arch/target_system_arch/g \
      $(git grep -l target_softmmu_arch)

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20231004090629.37473-13-philmd@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-10-07 19:03:07 +02:00
Richard Henderson
8fa08d7ec7 accel/tcg: Remove cpu_set_cpustate_pointers
This function is now empty, so remove it.  In the case of
m68k and tricore, this empties the class instance initfn,
so remove those as well.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-10-04 11:03:54 -07:00
Richard Henderson
b77af26e97 accel/tcg: Replace CPUState.env_ptr with cpu_env()
Reviewed-by: Anton Johansson <anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-10-04 11:03:54 -07:00
Richard Henderson
ad75a51e84 tcg: Rename cpu_env to tcg_env
Allow the name 'cpu_env' to be used for something else.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-10-03 08:01:02 -07:00
Richard Henderson
3b3d7df545 accel/tcg: Move CPUNegativeOffsetState into CPUState
Retain the separate structure to emphasize its importance.
Enforce that CPUArchState always follows CPUState without padding.

Reviewed-by: Anton Johansson <anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-10-03 08:01:02 -07:00
Richard Henderson
61cd357698 target/arm: Remove size and alignment for cpu subclasses
Inherit the size and alignment from TYPE_ARM_CPU.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-10-03 08:01:02 -07:00
Anton Johansson
a81fef4b64 target/arm: Replace TARGET_PAGE_ENTRY_EXTRA
TARGET_PAGE_ENTRY_EXTRA is a macro that allows guests to specify additional
fields for caching with the full TLB entry.  This macro is replaced with
a union in CPUTLBEntryFull, thus making CPUTLB target-agnostic at the
cost of a slightly inflated CPUTLBEntryFull for non-arm guests.

Note, this is needed to ensure that fields in CPUTLB don't vary in
offset between various targets.

(arm is the only guest actually making use of this feature.)
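
A sketch of the resulting layout (abridged; the Arm field names are
given from memory for illustration):

  typedef struct CPUTLBEntryFull {
      /* ... target-agnostic fields (phys_addr, attrs, lg_page_size, ...) ... */
      union {
          struct {
              uint8_t pte_attrs;      /* cached descriptor memory attributes */
              uint8_t shareability;   /* cached shareability field */
              bool guarded;           /* BTI guarded-page bit */
          } arm;
      } extra;
  } CPUTLBEntryFull;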

Signed-off-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230912153428.17816-2-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-10-03 08:01:02 -07:00
Philippe Mathieu-Daudé
5a3d2c3562 target/arm/hvf: Clean up local variable shadowing
Per Peter Maydell's analysis [*]:

  The hvf_vcpu_exec() function is not documented, but in practice
  its caller expects it to return either EXCP_DEBUG (for "this was
  a guest debug exception you need to deal with") or something else
  (presumably the intention being 0 for OK).

  The hvf_sysreg_read() and hvf_sysreg_write() functions are also not
  documented, but they return 0 on success, or 1 for a completely
  unrecognized sysreg where we've raised the UNDEF exception (but
  not if we raised an UNDEF exception for an unrecognized GIC sysreg --
  I think this is a bug). We use this return value to decide whether
  we need to advance the PC past the insn or not. It's not the same
  as the return value we want to return from hvf_vcpu_exec().

  Retain the variable as locally scoped but give it a name that
  doesn't clash with the other function-scoped variable.

This fixes:

  target/arm/hvf/hvf.c:1936:13: error: declaration shadows a local variable [-Werror,-Wshadow]
        int ret = 0;
            ^
  target/arm/hvf/hvf.c:1807:9: note: previous declaration is here
    int ret;
        ^
[*] https://lore.kernel.org/qemu-devel/CAFEAcA_e+fU6JKtS+W63wr9cCJ6btu_hT_ydZWOwC0kBkDYYYQ@mail.gmail.com/

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20230904161235.84651-4-philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
2023-09-29 10:07:14 +02:00
Philippe Mathieu-Daudé
d54deb2a07 target/arm/tcg: Clean up local variable shadowing
Fix:

  target/arm/tcg/translate-m-nocp.c: In function ‘gen_M_fp_sysreg_read’:
  target/arm/tcg/translate-m-nocp.c:509:18: warning: declaration of ‘tmp’ shadows a previous local [-Wshadow=compatible-local]
    509 |         TCGv_i32 tmp = load_cpu_field(v7m.fpdscr[M_REG_NS]);
        |                  ^~~
  target/arm/tcg/translate-m-nocp.c:433:14: note: shadowed declaration is here
    433 |     TCGv_i32 tmp;
        |              ^~~
       ---

  target/arm/tcg/mve_helper.c: In function ‘helper_mve_vqshlsb’:
  target/arm/tcg/mve_helper.c:1259:19: warning: declaration of ‘r’ shadows a previous local [-Wshadow=compatible-local]
   1259 |         typeof(N) r = FN(N, (int8_t)(M), sizeof(N) * 8, ROUND, &su32);  \
        |                   ^
  target/arm/tcg/mve_helper.c:1267:5: note: in expansion of macro ‘WRAP_QRSHL_HELPER’
   1267 |     WRAP_QRSHL_HELPER(do_sqrshl_bhs, N, M, false, satp)
        |     ^~~~~~~~~~~~~~~~~
  target/arm/tcg/mve_helper.c:927:22: note: in expansion of macro ‘DO_SQSHL_OP’
    927 |             TYPE r = FN(n[H##ESIZE(e)], m[H##ESIZE(e)], &sat);          \
        |                      ^~
  target/arm/tcg/mve_helper.c:945:5: note: in expansion of macro ‘DO_2OP_SAT’
    945 |     DO_2OP_SAT(OP##b, 1, int8_t, FN)            \
        |     ^~~~~~~~~~
  target/arm/tcg/mve_helper.c:1277:1: note: in expansion of macro ‘DO_2OP_SAT_S’
   1277 | DO_2OP_SAT_S(vqshls, DO_SQSHL_OP)
        | ^~~~~~~~~~~~
       ---

  target/arm/tcg/mve_helper.c: In function ‘do_sqrshl48_d’:
  target/arm/tcg/mve_helper.c:2463:17: warning: declaration of ‘extval’ shadows a previous local [-Wshadow=compatible-local]
   2463 |         int64_t extval = sextract64(src << shift, 0, 48);
        |                 ^~~~~~
  target/arm/tcg/mve_helper.c:2443:18: note: shadowed declaration is here
   2443 |     int64_t val, extval;
        |                  ^~~~~~
       ---

  target/arm/tcg/mve_helper.c: In function ‘do_uqrshl48_d’:
  target/arm/tcg/mve_helper.c:2495:18: warning: declaration of ‘extval’ shadows a previous local [-Wshadow=compatible-local]
   2495 |         uint64_t extval = extract64(src << shift, 0, 48);
        |                  ^~~~~~
  target/arm/tcg/mve_helper.c:2479:19: note: shadowed declaration is here
   2479 |     uint64_t val, extval;
        |                   ^~~~~~

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20230904161235.84651-3-philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
2023-09-29 10:07:14 +02:00
Peter Maydell
706a92fbfa target/arm: Enable FEAT_MOPS for CPU 'max'
Enable FEAT_MOPS on the AArch64 'max' CPU, and add it to
the list of features we implement.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-13-peter.maydell@linaro.org
2023-09-21 16:07:14 +01:00
Peter Maydell
5d7b37b5f6 target/arm: Implement the CPY* instructions
The FEAT_MOPS CPY* instructions implement memory copies. These
come in both "always forwards" (memcpy-style) and "overlap OK"
(memmove-style) flavours.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-12-peter.maydell@linaro.org
2023-09-21 16:07:14 +01:00
Peter Maydell
69c51dc372 target/arm: Implement MTE tag-checking functions for FEAT_MOPS copies
The FEAT_MOPS memory copy operations need an extra helper routine
for checking for MTE tag checking failures beyond the ones we
already added for memory set operations:
 * mte_mops_probe_rev() does the same job as mte_mops_probe(), but
   it checks tags starting at the provided address and working
   backwards, rather than forwards

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-11-peter.maydell@linaro.org
2023-09-21 16:07:14 +01:00
Peter Maydell
6087df5744 target/arm: Implement the SETG* instructions
The FEAT_MOPS SETG* instructions are very similar to the SET*
instructions, but as well as setting memory contents they also
set the MTE tags. They are architecturally required to operate
on tag-granule aligned regions only.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-10-peter.maydell@linaro.org
2023-09-21 16:07:14 +01:00
Peter Maydell
179e9a3bac target/arm: Define new TB flag for ATA0
Currently the only tag-setting instructions always do so in the
context of the current EL, and so we only need one ATA bit in the TB
flags.  The FEAT_MOPS SETG instructions include ones which set tags
for a non-privileged access, so we now also need the equivalent "are
tags enabled?" information for EL0.

Add the new TB flag, and convert the existing 'bool ata' field in
DisasContext to a 'bool ata[2]' that can be indexed by the is_unpriv
bit in an instruction, similarly to mte[2].

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-9-peter.maydell@linaro.org
2023-09-21 16:07:14 +01:00
Peter Maydell
0e92818887 target/arm: Implement the SET* instructions
Implement the SET* instructions which collectively implement a
"memset" operation.  These come in a set of three, eg SETP
(prologue), SETM (main), SETE (epilogue), and each of those has
different flavours to indicate whether memory accesses should be
unpriv or non-temporal.

This commit does not include the "memset with tag setting"
SETG* instructions.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-8-peter.maydell@linaro.org
2023-09-21 16:07:14 +01:00
Peter Maydell
8163998920 target/arm: Implement MTE tag-checking functions for FEAT_MOPS
The FEAT_MOPS instructions need a couple of helper routines that
check for MTE tag failures:
 * mte_mops_probe() checks whether there is going to be a tag
   error in the next up-to-a-page worth of data
 * mte_check_fail() is an existing function to record the fact
   of a tag failure, which we need to make global so we can
   call it from helper-a64.c
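
A sketch of how the probe half might be declared, paraphrasing the
description above (signature reconstructed, not quoted from the patch):

  /* Return how many bytes, starting at ptr and covering at most roughly
   * one page, can be accessed without raising an MTE tag-check failure. */
  uint64_t mte_mops_probe(CPUARMState *env, uint64_t ptr, uint64_t size,
                          uint32_t desc);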

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-7-peter.maydell@linaro.org
2023-09-21 16:07:14 +01:00
Peter Maydell
aa03378bcc target/arm: New function allocation_tag_mem_probe()
For the FEAT_MOPS operations, the existing allocation_tag_mem()
function almost does what we want, but it will take a watchpoint
exception even for an ra == 0 probe request, and it requires that the
caller guarantee that the memory is accessible.  For FEAT_MOPS we
want a function that will not take any kind of exception, and will
return NULL for the not-accessible case.

Rename allocation_tag_mem() to allocation_tag_mem_probe() and add an
extra 'probe' argument that lets us distinguish these cases;
allocation_tag_mem() is now a wrapper that always passes 'false'.
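
A sketch of the wrapper relationship described above (the parameter
list is abridged for illustration):

  static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
                                     uint64_t ptr, MMUAccessType ptr_access,
                                     int ptr_size, MMUAccessType tag_access,
                                     uintptr_t ra)
  {
      /* Same lookup, but never a "probe" request: exceptions are allowed. */
      return allocation_tag_mem_probe(env, ptr_mmu_idx, ptr, ptr_access,
                                      ptr_size, tag_access, false, ra);
  }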

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-6-peter.maydell@linaro.org
2023-09-21 16:07:14 +01:00
Peter Maydell
31aaaddecb target/arm: Define syndrome function for MOPS exceptions
The FEAT_MOPS memory operations can raise a Memory Copy or Memory Set
exception if a copy or set instruction is executed when the CPU
register state is not correct for that instruction. Define the
usual syn_* function that constructs the syndrome register value
for these exceptions.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-5-peter.maydell@linaro.org
2023-09-21 16:07:14 +01:00
Peter Maydell
81466e4bad target/arm: Pass unpriv bool to get_a64_user_mem_index()
In every place that we call the get_a64_user_mem_index() function
we do it like this:
 memidx = a->unpriv ? get_a64_user_mem_index(s) : get_mem_index(s);
Refactor so the caller passes in the bool that says whether they
want the 'unpriv' or 'normal' mem_index rather than having to
do the ?: themselves.
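
After the refactor, the call sites reduce to something like (sketch):

  memidx = get_a64_user_mem_index(s, a->unpriv);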

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230912140434.1333369-4-peter.maydell@linaro.org
2023-09-21 16:07:13 +01:00
Peter Maydell
dbc678f90a target/arm: Implement FEAT_MOPS enable bits
FEAT_MOPS defines a handful of new enable bits:
 * HCRX_EL2.MSCEn, SCTLR_EL1.MSCEn, SCTLR_EL2.MSCEn:
   define whether the new insns should UNDEF or not
 * HCRX_EL2.MCE2: defines whether memops exceptions from
   EL1 should be taken to EL1 or EL2

Since we don't sanitise what bits can be written for the SCTLR
registers, we only need to handle the new bits in HCRX_EL2, and
define SCTLR_MSCEN for the new SCTLR bit value.

The precedence of "HCRX bits act as 0 if SCR_EL3.HXEn is 0" versus
"bits act as 1 if EL2 is disabled" is not clear from the register
definition text, but it is clear in the CheckMOPSEnabled()
pseudocode, so we follow that.  We'll have to check whether other
bits we need to implement in future follow the same logic or not.
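
A simplified sketch of that precedence, roughly following
CheckMOPSEnabled() (the helper name is invented and the E2H/TGE
handling is left out):

  static bool sketch_mops_enabled(CPUARMState *env, int el)
  {
      /* When EL2 is disabled, HCRX_EL2.MSCEn is treated as 1, which the
       * arm_is_el2_enabled() guard below approximates. */
      if (el < 2 && arm_is_el2_enabled(env)
          && !(arm_hcrx_el2_eff(env) & HCRX_MSCEN)) {
          return false;
      }
      return env->cp15.sctlr_el[el ? el : 1] & SCTLR_MSCEN;
  }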

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-3-peter.maydell@linaro.org
2023-09-21 16:07:13 +01:00
Peter Maydell
903dbefc2b target/arm: Don't skip MTE checks for LDRT/STRT at EL0
The LDRT/STRT "unprivileged load/store" instructions behave like
normal ones if executed at EL0. We handle this correctly for
the load/store semantics, but get the MTE checking wrong.

We always look at s->mte_active[is_unpriv] to see whether we should
be doing MTE checks, but in hflags.c when we set the TB flags that
will be used to fill the mte_active[] array we only set the
MTE0_ACTIVE bit if UNPRIV is true (i.e.  we are not at EL0).

This means that a LDRT at EL0 will see s->mte_active[1] as 0,
and will not do MTE checks even when MTE is enabled.

To avoid the translate-time code having to do an explicit check on
s->unpriv to see if it is OK to index into the mte_active[] array,
duplicate MTE_ACTIVE into MTE0_ACTIVE when UNPRIV is false.
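
A sketch of the hflags-side fix (TB flag macro names as in target/arm;
the surrounding rebuild_hflags code is elided):

  if (EX_TBFLAG_A64(flags, UNPRIV)) {
      /* existing path: compute MTE0_ACTIVE from the EL0 translation regime */
  } else {
      /* UNPRIV clear (e.g. at EL0): unprivileged-access insns behave like
       * normal ones, so mirror MTE_ACTIVE into MTE0_ACTIVE. */
      DP_TBFLAG_A64(flags, MTE0_ACTIVE, EX_TBFLAG_A64(flags, MTE_ACTIVE));
  }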

(This isn't a very serious bug because generally nobody executes
LDRT/STRT at EL0, because they have no use there.)

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-2-peter.maydell@linaro.org
2023-09-21 16:07:13 +01:00
Peter Maydell
0b5ad31d2a target/arm: Remove unused allocation_tag_mem() argument
The allocation_tag_mem() function takes an argument tag_size,
but it never uses it. Remove the argument. In mte_probe_int()
in particular this also lets us delete the code computing
the value we were passing in.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2023-09-21 16:07:13 +01:00
Peter Maydell
3039b090f2 target/arm: Implement FEAT_HBC
FEAT_HBC (Hinted conditional branches) provides a new instruction
BC.cond, which behaves exactly like the existing B.cond except
that it provides a hint to the branch predictor about the
likely behaviour of the branch.

Since QEMU does not implement branch prediction, we can treat
this identically to B.cond.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2023-09-21 16:07:13 +01:00
Peter Maydell
5f7b71fb99 target/arm: Update user-mode ID reg mask values
For user-only mode we reveal a subset of the AArch64 ID registers
to the guest, to emulate the kernel's trap-and-emulate-ID-regs
handling. Update the feature bit masks to match upstream kernel
commit a48fa7efaf1161c1c.

None of these features are yet implemented by QEMU, so this
doesn't yet have a behavioural change, but implementation of
FEAT_MOPS and FEAT_HBC is imminent.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2023-09-21 14:45:58 +01:00
Peter Maydell
4d9eb29643 target/arm: Update AArch64 ID register field definitions
Update our AArch64 ID register field definitions from the 2023-06
system register XML release:
 https://developer.arm.com/documentation/ddi0601/2023-06/

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2023-09-21 14:45:58 +01:00
Stefan Hajnoczi
d7754940d7 *: Delete checks for old host definitions
tcg/loongarch64: Generate LSX instructions
 fpu: Add conversions between bfloat16 and [u]int8
 fpu: Handle m68k extended precision denormals properly
 accel/tcg: Improve cputlb i/o organization
 accel/tcg: Simplify tlb_plugin_lookup
 accel/tcg: Remove false-negative halted assertion
 tcg: Add gvec compare with immediate and scalar operand
 tcg/aarch64: Emit BTI insns at jump landing pads
 -----BEGIN PGP SIGNATURE-----
 
 iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAmUF4VIdHHJpY2hhcmQu
 aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV8wOwf+I9qNus2kV3yQxpuU
 2hqYuLXvH96l9vbqaoyx7hyyJTtrqytLGCMPmQKUdtBGtO6z7PnLNDiooGcbO+gw
 2gdfw3Q//JZUTdx+ZSujUksV0F96Tqu0zi4TdJUPNIwhCrh0K8VjiftfPfbynRtz
 KhQ1lNeO/QzcAgzKiun2NyqdPiYDmNuEIS/jYedQwQweRp/xQJ4/x8DmhGf/OiD4
 rGAcdslN+RenqgFACcJ2A1vxUGMeQv5g/Cn82FgTk0cmgcfAODMnC+WnOm8ruQdT
 snluvnh/2/r8jIhx3frKDKGtaKHCPhoCS7GNK48qejxaybvv3CJQ4qsjRIBKVrVM
 cIrsSw==
 =cTgD
 -----END PGP SIGNATURE-----

Merge tag 'pull-tcg-20230915-2' of https://gitlab.com/rth7680/qemu into staging

*: Delete checks for old host definitions
tcg/loongarch64: Generate LSX instructions
fpu: Add conversions between bfloat16 and [u]int8
fpu: Handle m68k extended precision denormals properly
accel/tcg: Improve cputlb i/o organization
accel/tcg: Simplify tlb_plugin_lookup
accel/tcg: Remove false-negative halted assertion
tcg: Add gvec compare with immediate and scalar operand
tcg/aarch64: Emit BTI insns at jump landing pads

[Resolved conflict between CPUINFO_PMULL and CPUINFO_BTI.
--Stefan]

* tag 'pull-tcg-20230915-2' of https://gitlab.com/rth7680/qemu: (39 commits)
  tcg: Map code_gen_buffer with PROT_BTI
  tcg/aarch64: Emit BTI insns at jump landing pads
  util/cpuinfo-aarch64: Add CPUINFO_BTI
  tcg: Add tcg_out_tb_start backend hook
  fpu: Handle m68k extended precision denormals properly
  fpu: Add conversions between bfloat16 and [u]int8
  accel/tcg: Introduce do_st16_mmio_leN
  accel/tcg: Introduce do_ld16_mmio_beN
  accel/tcg: Merge io_writex into do_st_mmio_leN
  accel/tcg: Merge io_readx into do_ld_mmio_beN
  accel/tcg: Replace direct use of io_readx/io_writex in do_{ld,st}_1
  accel/tcg: Merge cpu_transaction_failed into io_failed
  plugin: Simplify struct qemu_plugin_hwaddr
  accel/tcg: Use CPUTLBEntryFull.phys_addr in io_failed
  accel/tcg: Split out io_prepare and io_failed
  accel/tcg: Simplify tlb_plugin_lookup
  target/arm: Use tcg_gen_gvec_cmpi for compare vs 0
  tcg: Add gvec compare with immediate and scalar operand
  tcg/loongarch64: Implement 128-bit load & store
  tcg/loongarch64: Lower rotli_vec to vrotri
  ...

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-09-19 13:20:54 -04:00
Richard Henderson
e8967b6152 target/arm: Use tcg_gen_gvec_cmpi for compare vs 0
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20230831030904.1194667-3-richard.henderson@linaro.org>
2023-09-16 14:57:15 +00:00
Richard Henderson
a50cfdf0be target/arm: Use clmul_64
Use generic routine for 64-bit carry-less multiply.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-09-15 13:57:00 +00:00
Richard Henderson
bae25f648e target/arm: Use clmul_32* routines
Use generic routines for 32-bit carry-less multiply.
Remove our local version of pmull_d.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-09-15 13:57:00 +00:00
Richard Henderson
c6f0dcb1fd target/arm: Use clmul_16* routines
Use generic routines for 16-bit carry-less multiply.
Remove our local version of pmull_w.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-09-15 13:57:00 +00:00
Richard Henderson
8e3da4c716 target/arm: Use clmul_8* routines
Use generic routines for 8-bit carry-less multiply.
Remove our local version of pmull_h.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-09-15 13:56:59 +00:00
Stefan Hajnoczi
cb6c406e26 First RISC-V PR for 8.2
* Remove 'host' CPU from TCG
  * riscv_htif Fixup printing on big endian hosts
  * Add zmmul isa string
  * Add smepmp isa string
  * Fix page_check_range use in fault-only-first
  * Use existing lookup tables for MixColumns
  * Add RISC-V vector cryptographic instruction set support
  * Implement WARL behaviour for mcountinhibit/mcounteren
  * Add Zihintntl extension ISA string to DTS
  * Fix zfa fleq.d and fltq.d
  * Fix upper/lower mtime write calculation
  * Make rtc variable names consistent
  * Use abi type for linux-user target_ucontext
  * Add RISC-V KVM AIA Support
  * Fix riscv,pmu DT node path in the virt machine
  * Update CSR bits name for svadu extension
  * Mark zicond non-experimental
  * Fix satp_mode_finalize() when satp_mode.supported = 0
  * Fix non-KVM --enable-debug build
  * Add new extensions to hwprobe
  * Use accelerated helper for AES64KS1I
  * Allocate itrigger timers only once
  * Respect mseccfg.RLB for pmpaddrX changes
  * Align the AIA model to v1.0 ratified spec
  * Don't read the CSR in riscv_csrrw_do64
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEaukCtqfKh31tZZKWr3yVEwxTgBMFAmT+ttMACgkQr3yVEwxT
 gBN/rg/+KhOvL9xWSNb8pzlIsMQHLvndno0Sq5b9Rb/o5z1ekyYfyg6712N3JJpA
 TIfZzOIW7oYZV8gHyaBtOt8kIbrjwzGB2rpCh4blhm+yNZv7Ym9Ko6AVVzoUDo7k
 2dWkLnC+52/l3SXGeyYMJOlgUUsQMwjD6ykDEr42P6DfVord34fpTH7ftwSasO9K
 35qJQqhUCgB3fMzjKTYICN6Rm1UluijTjRNXUZXC0XZlr+UKw2jT/UsybbWVXyNs
 SmkRtF1MEVGvw+b8XOgA/nG1qVCWglTMcPvKjWMY+cY9WLM6/R9nXAV8OL/JPead
 v1LvROJNukfjNtDW6AOl5/svOJTRLbIrV5EO7Hlm1E4kftGmE5C+AKZZ/VT4ucUK
 XgqaHoXh26tFEymVjzbtyFnUHNv0zLuGelTnmc5Ps1byLSe4lT0dBaJy6Zizg0LE
 DpTR7s3LpyV3qB96Xf9bOMaTPsekUjD3dQI/3X634r36+YovRXapJDEDacN9whbU
 BSZc20NoM5UxVXFTbELQXolue/X2BRLxpzB+BDG8/cpu/MPgcCNiOZaVrr/pOo33
 6rwwrBhLSCfYAXnJ52qTUEBz0Z/FnRPza8AU/uuRYRFk6JhUXIonmO6xkzsoNKuN
 QNnih/v1J+1XqUyyT2InOoAiTotzHiWgKZKaMfAhomt2j/slz+A=
 =aqcx
 -----END PGP SIGNATURE-----

Merge tag 'pull-riscv-to-apply-20230911' of https://github.com/alistair23/qemu into staging

First RISC-V PR for 8.2

 * Remove 'host' CPU from TCG
 * riscv_htif Fixup printing on big endian hosts
 * Add zmmul isa string
 * Add smepmp isa string
 * Fix page_check_range use in fault-only-first
 * Use existing lookup tables for MixColumns
 * Add RISC-V vector cryptographic instruction set support
 * Implement WARL behaviour for mcountinhibit/mcounteren
 * Add Zihintntl extension ISA string to DTS
 * Fix zfa fleq.d and fltq.d
 * Fix upper/lower mtime write calculation
 * Make rtc variable names consistent
 * Use abi type for linux-user target_ucontext
 * Add RISC-V KVM AIA Support
 * Fix riscv,pmu DT node path in the virt machine
 * Update CSR bits name for svadu extension
 * Mark zicond non-experimental
 * Fix satp_mode_finalize() when satp_mode.supported = 0
 * Fix non-KVM --enable-debug build
 * Add new extensions to hwprobe
 * Use accelerated helper for AES64KS1I
 * Allocate itrigger timers only once
 * Respect mseccfg.RLB for pmpaddrX changes
 * Align the AIA model to v1.0 ratified spec
 * Don't read the CSR in riscv_csrrw_do64

# -----BEGIN PGP SIGNATURE-----
#
# iQIzBAABCAAdFiEEaukCtqfKh31tZZKWr3yVEwxTgBMFAmT+ttMACgkQr3yVEwxT
# gBN/rg/+KhOvL9xWSNb8pzlIsMQHLvndno0Sq5b9Rb/o5z1ekyYfyg6712N3JJpA
# TIfZzOIW7oYZV8gHyaBtOt8kIbrjwzGB2rpCh4blhm+yNZv7Ym9Ko6AVVzoUDo7k
# 2dWkLnC+52/l3SXGeyYMJOlgUUsQMwjD6ykDEr42P6DfVord34fpTH7ftwSasO9K
# 35qJQqhUCgB3fMzjKTYICN6Rm1UluijTjRNXUZXC0XZlr+UKw2jT/UsybbWVXyNs
# SmkRtF1MEVGvw+b8XOgA/nG1qVCWglTMcPvKjWMY+cY9WLM6/R9nXAV8OL/JPead
# v1LvROJNukfjNtDW6AOl5/svOJTRLbIrV5EO7Hlm1E4kftGmE5C+AKZZ/VT4ucUK
# XgqaHoXh26tFEymVjzbtyFnUHNv0zLuGelTnmc5Ps1byLSe4lT0dBaJy6Zizg0LE
# DpTR7s3LpyV3qB96Xf9bOMaTPsekUjD3dQI/3X634r36+YovRXapJDEDacN9whbU
# BSZc20NoM5UxVXFTbELQXolue/X2BRLxpzB+BDG8/cpu/MPgcCNiOZaVrr/pOo33
# 6rwwrBhLSCfYAXnJ52qTUEBz0Z/FnRPza8AU/uuRYRFk6JhUXIonmO6xkzsoNKuN
# QNnih/v1J+1XqUyyT2InOoAiTotzHiWgKZKaMfAhomt2j/slz+A=
# =aqcx
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 11 Sep 2023 02:42:27 EDT
# gpg:                using RSA key 6AE902B6A7CA877D6D659296AF7C95130C538013
# gpg: Good signature from "Alistair Francis <alistair@alistair23.me>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 6AE9 02B6 A7CA 877D 6D65  9296 AF7C 9513 0C53 8013

* tag 'pull-riscv-to-apply-20230911' of https://github.com/alistair23/qemu: (45 commits)
  target/riscv: don't read CSR in riscv_csrrw_do64
  target/riscv: Align the AIA model to v1.0 ratified spec
  target/riscv/pmp.c: respect mseccfg.RLB for pmpaddrX changes
  target/riscv: Allocate itrigger timers only once
  target/riscv: Use accelerated helper for AES64KS1I
  linux-user/riscv: Add new extensions to hwprobe
  hw/intc/riscv_aplic.c fix non-KVM --enable-debug build
  hw/riscv/virt.c: fix non-KVM --enable-debug build
  riscv: zicond: make non-experimental
  target/riscv: fix satp_mode_finalize() when satp_mode.supported = 0
  target/riscv: Update CSR bits name for svadu extension
  hw/riscv: virt: Fix riscv,pmu DT node path
  target/riscv: select KVM AIA in riscv virt machine
  target/riscv: update APLIC and IMSIC to support KVM AIA
  target/riscv: Create an KVM AIA irqchip
  target/riscv: check the in-kernel irqchip support
  target/riscv: support the AIA device emulation with KVM enabled
  linux-user/riscv: Use abi type for target_ucontext
  hw/intc: Make rtc variable names consistent
  hw/intc: Fix upper/lower mtime write calculation
  ...

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-09-11 09:12:12 -04:00
Stefan Hajnoczi
a7e8e30e7c target-arm queue:
* New CPU type: cortex-a710
  * Implement new architectural features:
     - FEAT_PACQARMA3
     - FEAT_EPAC
     - FEAT_Pauth2
     - FEAT_FPAC
     - FEAT_FPACCOMBINE
     - FEAT_TIDCP1
  * Xilinx Versal: Model the CFU/CFI
  * Implement RMR_ELx registers
  * Implement handling of HCR_EL2.TIDCP trap bit
  * arm/kvm: Enable support for KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE
  * hw/intc/arm_gicv3_its: Avoid maybe-uninitialized error in get_vte()
  * target/arm: Do not use gen_mte_checkN in trans_STGP
  * arm64: Restore trapless ptimer access
 -----BEGIN PGP SIGNATURE-----
 
 iQJNBAABCAA3FiEE4aXFk81BneKOgxXPPCUl7RQ2DN4FAmT7VEkZHHBldGVyLm1h
 eWRlbGxAbGluYXJvLm9yZwAKCRA8JSXtFDYM3v7BEACENUKCxsFHRQSLmQkoBCT9
 Lc4SJrGCbVUC6b+4s5ligZSWIoFzp/kY6NPpeRYqFa0DCxozd2T5D81/j7TpSo0C
 wUFkZfUq1nGFJ4K5arYcDwhdTtJvvc07YrSbUqufBp6uNGqhR4YmDWPECqBfOlaj
 7bgJM6axsg7FkJJh5zp4cQ4WEfp14MHWRPQWpVTI+9cxNmNymokSVRBhVFkM0Wen
 WD4C/nYud8bOxpDfR8GkIqJ+UnUMhUNEhp28QmHdwywgg0zLWOE4ysIxo55cM0+0
 FL3q45PL2e4S24UUx9dkxDBWnKEZ5qpQpPn9F6EhWzfm3n2dqr4uUnfWAEOg6NAi
 vnGS9MlL7nZo69OM3h8g7yKDfTKYm2vl9HVZ0ytFA6PLoSnaQyQwli58qnLtiid3
 17MWPoNQlq6G8tHUTPkrJjdA8XLz0iNPXe5G2kwhuM/S0Lv7ORzDc2pq4qBYLvIw
 9nV0oUWqzyE7zH6bRKxbbPw2sMI7c8qQr9QRyZeLHL7HdcY5ExvX9FH+qii5JDR/
 fZohi1pBoNNwYYTeSRnxgHiQ7OizYq0xQJhrdqcFF9voytZj1yZEZ0mp6Tq0/CIj
 YkC/vEyLYBqgrJ2JeUjbV3h1RIzQcVaXxnxwGsyMyceACd6MNMmdbjR7bZk0lNIu
 kh+aFEdKajPp56UseJiKBQ==
 =5Shq
 -----END PGP SIGNATURE-----

Merge tag 'pull-target-arm-20230908' of https://git.linaro.org/people/pmaydell/qemu-arm into staging

target-arm queue:
 * New CPU type: cortex-a710
 * Implement new architectural features:
    - FEAT_PACQARMA3
    - FEAT_EPAC
    - FEAT_Pauth2
    - FEAT_FPAC
    - FEAT_FPACCOMBINE
    - FEAT_TIDCP1
 * Xilinx Versal: Model the CFU/CFI
 * Implement RMR_ELx registers
 * Implement handling of HCR_EL2.TIDCP trap bit
 * arm/kvm: Enable support for KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE
 * hw/intc/arm_gicv3_its: Avoid maybe-uninitialized error in get_vte()
 * target/arm: Do not use gen_mte_checkN in trans_STGP
 * arm64: Restore trapless ptimer access

# -----BEGIN PGP SIGNATURE-----
#
# iQJNBAABCAA3FiEE4aXFk81BneKOgxXPPCUl7RQ2DN4FAmT7VEkZHHBldGVyLm1h
# eWRlbGxAbGluYXJvLm9yZwAKCRA8JSXtFDYM3v7BEACENUKCxsFHRQSLmQkoBCT9
# Lc4SJrGCbVUC6b+4s5ligZSWIoFzp/kY6NPpeRYqFa0DCxozd2T5D81/j7TpSo0C
# wUFkZfUq1nGFJ4K5arYcDwhdTtJvvc07YrSbUqufBp6uNGqhR4YmDWPECqBfOlaj
# 7bgJM6axsg7FkJJh5zp4cQ4WEfp14MHWRPQWpVTI+9cxNmNymokSVRBhVFkM0Wen
# WD4C/nYud8bOxpDfR8GkIqJ+UnUMhUNEhp28QmHdwywgg0zLWOE4ysIxo55cM0+0
# FL3q45PL2e4S24UUx9dkxDBWnKEZ5qpQpPn9F6EhWzfm3n2dqr4uUnfWAEOg6NAi
# vnGS9MlL7nZo69OM3h8g7yKDfTKYm2vl9HVZ0ytFA6PLoSnaQyQwli58qnLtiid3
# 17MWPoNQlq6G8tHUTPkrJjdA8XLz0iNPXe5G2kwhuM/S0Lv7ORzDc2pq4qBYLvIw
# 9nV0oUWqzyE7zH6bRKxbbPw2sMI7c8qQr9QRyZeLHL7HdcY5ExvX9FH+qii5JDR/
# fZohi1pBoNNwYYTeSRnxgHiQ7OizYq0xQJhrdqcFF9voytZj1yZEZ0mp6Tq0/CIj
# YkC/vEyLYBqgrJ2JeUjbV3h1RIzQcVaXxnxwGsyMyceACd6MNMmdbjR7bZk0lNIu
# kh+aFEdKajPp56UseJiKBQ==
# =5Shq
# -----END PGP SIGNATURE-----
# gpg: Signature made Fri 08 Sep 2023 13:05:13 EDT
# gpg:                using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg:                issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [full]
# gpg:                 aka "Peter Maydell <pmaydell@gmail.com>" [full]
# gpg:                 aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [full]
# gpg:                 aka "Peter Maydell <peter@archaic.org.uk>" [unknown]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83  15CF 3C25 25ED 1436 0CDE

* tag 'pull-target-arm-20230908' of https://git.linaro.org/people/pmaydell/qemu-arm: (26 commits)
  arm/kvm: Enable support for KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE
  target/arm: Enable SCTLR_EL1.TIDCP for user-only
  target/arm: Implement FEAT_TIDCP1
  target/arm: Implement HCR_EL2.TIDCP
  target/arm: Implement cortex-a710
  target/arm: Implement RMR_ELx
  arm64: Restore trapless ptimer access
  target/arm: Do not use gen_mte_checkN in trans_STGP
  hw/arm/versal: Connect the CFRAME_REG and CFRAME_BCAST_REG
  hw/arm/xlnx-versal: Connect the CFU_APB, CFU_FDRO and CFU_SFR
  hw/misc: Introduce a model of Xilinx Versal's CFRAME_BCAST_REG
  hw/misc: Introduce a model of Xilinx Versal's CFRAME_REG
  hw/misc/xlnx-versal-cfu: Introduce a model of Xilinx Versal's CFU_SFR
  hw/misc/xlnx-versal-cfu: Introduce a model of Xilinx Versal CFU_FDRO
  hw/misc: Introduce a model of Xilinx Versal's CFU_APB
  hw/misc: Introduce the Xilinx CFI interface
  hw/intc/arm_gicv3_its: Avoid maybe-uninitialized error in get_vte()
  target/arm: Implement FEAT_FPAC and FEAT_FPACCOMBINE
  target/arm: Inform helpers whether a PAC instruction is 'combined'
  target/arm: Implement FEAT_Pauth2
  ...

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-09-11 09:10:37 -04:00
Max Chou
f6ef550fe5 crypto: Create sm4_subword
Allows sharing of sm4_subword between different targets.

Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Max Chou <max.chou@sifive.com>
Message-ID: <20230711165917.2629866-14-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2023-09-11 11:45:55 +10:00
Shameer Kolothum
c8f2eb5d41 arm/kvm: Enable support for KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE
Now that we have Eager Page Split support added for ARM in the kernel,
enable it in QEMU. This adds:
 -eager-split-size to -accel sub-options to set the eager page split chunk size.
 -enable KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE.

The chunk size specifies how many pages to break at a time, using a
single allocation. The bigger the chunk size, the more pages need to
be allocated ahead of time.
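
A sketch of the enablement side, using QEMU's generic
kvm_vm_enable_cap() call (the state field name here is a guess based
on the option name):

  if (s->kvm_eager_split_size) {
      /* size comes from "-accel kvm,eager-split-size=...", e.g. 64K */
      kvm_vm_enable_cap(s, KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE, 0,
                        s->kvm_eager_split_size);
  }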

Reviewed-by: Gavin Shan <gshan@redhat.com>
Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Message-id: 20230905091246.1931-1-shameerali.kolothum.thodi@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-09-08 16:41:36 +01:00
Richard Henderson
d03396a8bb target/arm: Enable SCTLR_EL1.TIDCP for user-only
The Linux kernel detects and enables this bit.  Once trapped,
EC_SYSTEMREGISTERTRAP is treated like EC_UNCATEGORIZED, so
no changes required within linux-user/aarch64/cpu_loop.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230831232441.66020-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-09-08 16:41:35 +01:00
Richard Henderson
9cd0c0dec9 target/arm: Implement FEAT_TIDCP1
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230831232441.66020-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-09-08 16:41:35 +01:00
Richard Henderson
27920d3d1d target/arm: Implement HCR_EL2.TIDCP
Perform the check for EL2 enabled in the security space and the
TIDCP bit in an out-of-line helper.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230831232441.66020-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-09-08 16:41:35 +01:00
Richard Henderson
e3d45c0a89 target/arm: Implement cortex-a710
The cortex-a710 is a first generation ARMv9.0-A processor.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230831232441.66020-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-09-08 16:41:35 +01:00
Richard Henderson
97198a7dd1 target/arm: Implement RMR_ELx
Provide a stub implementation, as a write is a "request".

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230831232441.66020-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-09-08 16:41:35 +01:00
Colton Lewis
682814e2a3 arm64: Restore trapless ptimer access
Due to recent KVM changes, QEMU is setting a ptimer offset resulting
in unintended trap-and-emulate access and a consequent performance
hit. Filter out the PTIMER_CNT register to restore trapless ptimer
access.

Quoting Andrew Jones:

Simply reading the CNT register and writing back the same value is
enough to set an offset, since the timer will have certainly moved
past whatever value was read by the time it's written.  QEMU
frequently saves and restores all registers in the get-reg-list array,
unless they've been explicitly filtered out (with Linux commit
680232a94c12, KVM_REG_ARM_PTIMER_CNT is now in the array). So, to
restore trapless ptimer accesses, we need a QEMU patch to filter out
the register.

See
https://lore.kernel.org/kvmarm/gsntttsonus5.fsf@coltonlewis-kvm.c.googlers.com/T/#m0770023762a821db2a3f0dd0a7dc6aa54e0d0da9
for additional context.

Cc: qemu-stable@nongnu.org
Signed-off-by: Andrew Jones <andrew.jones@linux.dev>
Signed-off-by: Colton Lewis <coltonlewis@google.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Colton Lewis <coltonlewis@google.com>
Message-id: 20230831190052.129045-1-coltonlewis@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-09-08 16:41:35 +01:00
Richard Henderson
44e0ddee8e target/arm: Do not use gen_mte_checkN in trans_STGP
STGP writes to tag memory; it does not check it.
This happened to work because we wrote tag memory first
so that the check always succeeded.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230901203103.136408-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-09-08 16:41:35 +01:00
Aaron Lindsay
8a69a42340 target/arm: Implement FEAT_FPAC and FEAT_FPACCOMBINE
Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230829232335.965414-10-richard.henderson@linaro.org
Message-Id: <20230609172324.982888-8-aaron@os.amperecomputing.com>
[rth: Simplify fpac comparison, reusing cmp_mask]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-09-08 12:51:01 +01:00
Aaron Lindsay
28b9dcb74b target/arm: Inform helpers whether a PAC instruction is 'combined'
An instruction is a 'combined' Pointer Authentication instruction
if it does something in addition to PAC -- for instance, branching
to or loading an address from the authenticated pointer.

Knowing whether a PAC operation is 'combined' is needed to
implement FEAT_FPACCOMBINE.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230829232335.965414-9-richard.henderson@linaro.org
Message-Id: <20230609172324.982888-7-aaron@os.amperecomputing.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-09-08 12:50:44 +01:00
Aaron Lindsay
c7c807f6dd target/arm: Implement FEAT_Pauth2
Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230829232335.965414-8-richard.henderson@linaro.org
Message-Id: <20230609172324.982888-6-aaron@os.amperecomputing.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-09-08 12:50:44 +01:00
Aaron Lindsay
c3ccd5669e target/arm: Implement FEAT_EPAC
Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230829232335.965414-7-richard.henderson@linaro.org
Message-Id: <20230609172324.982888-5-aaron@os.amperecomputing.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-09-08 12:50:44 +01:00
Richard Henderson
399e5e7125 target/arm: Implement FEAT_PACQARMA3
Implement the QARMA3 cryptographic algorithm for PAC calculation.
Implement a cpu feature to select the algorithm and document it.

Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230829232335.965414-6-richard.henderson@linaro.org
Message-Id: <20230609172324.982888-4-aaron@os.amperecomputing.com>
[rth: Merge cpu feature addition from another patch.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-09-08 12:50:44 +01:00
Richard Henderson
6c3427eec5 target/arm: Don't change pauth features when changing algorithm
We have cpu properties to adjust the pauth algorithm for the
purpose of speed of emulation.  Retain the set of pauth features
supported by the cpu even as the algorithm changes.

This already affects the neoverse-v1 cpu, which has FEAT_EPAC.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230829232335.965414-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-09-08 12:50:44 +01:00
Aaron Lindsay
0274bd7be7 target/arm: Add feature detection for FEAT_Pauth2 and extensions
Rename isar_feature_aa64_pauth_arch to isar_feature_aa64_pauth_qarma5
to distinguish the other architectural algorithm qarma3.

Add ARMPauthFeature and isar_feature_pauth_feature to cover the
other pauth conditions.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230829232335.965414-4-richard.henderson@linaro.org
Message-Id: <20230609172324.982888-3-aaron@os.amperecomputing.com>
[rth: Add ARMPauthFeature and eliminate most other predicates]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-09-08 12:50:44 +01:00
Aaron Lindsay
a969fe9755 target/arm: Add ID_AA64ISAR2_EL1
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230829232335.965414-3-richard.henderson@linaro.org
[PMM: drop the HVF part of the patch and just comment that
 we need to do something when the register appears in that API]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-09-08 12:50:44 +01:00
Thomas Huth
ded625e7aa trivial: Simplify the spots that use TARGET_BIG_ENDIAN as a numeric value
TARGET_BIG_ENDIAN is *always* defined, either as 0 for little endian
targets or as 1 for big endian targets. So we can use this as a value
directly in places that need such a 0 or 1 for some reason, instead
of taking a detour through an additional local variable or something
similar.
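
An illustrative before/after of the kind of spot this simplifies (not a
specific hunk from the patch):

  /* before */
  #if TARGET_BIG_ENDIAN
      bool big_endian = true;
  #else
      bool big_endian = false;
  #endif

  /* after */
      bool big_endian = TARGET_BIG_ENDIAN;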

Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2023-09-08 13:08:52 +03:00
Stefan Hajnoczi
c4e5f9a29f target-arm queue:
* Some of the preliminary patches for Cortex-A710 support
  * i.MX7 and i.MX6UL refactoring
  * Implement SRC device for i.MX7
  * Catch illegal-exception-return from EL3 with bad NSE/NS
  * Use 64-bit offsets for holding time_t differences in RTC devices
  * Model correct number of MPU regions for an505, an521, an524 boards

Merge tag 'pull-target-arm-20230831' of https://git.linaro.org/people/pmaydell/qemu-arm into staging

target-arm queue:
 * Some of the preliminary patches for Cortex-A710 support
 * i.MX7 and i.MX6UL refactoring
 * Implement SRC device for i.MX7
 * Catch illegal-exception-return from EL3 with bad NSE/NS
 * Use 64-bit offsets for holding time_t differences in RTC devices
 * Model correct number of MPU regions for an505, an521, an524 boards

# -----BEGIN PGP SIGNATURE-----
#
# iQJNBAABCAA3FiEE4aXFk81BneKOgxXPPCUl7RQ2DN4FAmTwbukZHHBldGVyLm1h
# eWRlbGxAbGluYXJvLm9yZwAKCRA8JSXtFDYM3ihBD/wK8Iz0KpTAwZBDAodnSZrh
# tQnJAvYFp8CxA4O8sZ9IeWsZh90gzsTCZi0NqUTTzvWCJfxkB7qTPdlJT5IzVxou
# oEUk2aogSJhRA3XRJzqArXsPlnZGSYDbtwKx4VtfCvOCCH08Y7nhnFaRj1oFnR4Q
# 0PE/8YtGXTBxLHrO8U3tomg7zElzOUP8ZVZtb30BOyw1jtfSD03IZR8dzpA43u1E
# Hh418WvVekmwFoFNh8yUeHzbyXMZufzvbJPuDGJ8pPWwIpvSG6chOnKF8jZll+Ur
# DqOsDkGlQgcBR2QwYfSPClrEkX8yahJ95PBfM6giG+DQC7OiElqXqTiUGZcpgUVo
# uSUbzS4YPsxCnyVV6SBXV+f/8hdXBxOSHTgl7OAFa8X9OwWwspxHJ/v2o/2ibnUT
# hTTkFp/w1nQwVEN8xf1DOUpm/J2Wr8UeH4f776daSrfKAol2BKbHb8dOgGLQCwqb
# G+iDcE4bkzRqly6f+uVk8xSEZDd9P1NYoxKV+gNlV1dTspdHVpTC+rXMa8dRw5hI
# 4KgaAslj++Xa229xkjORXCJ1cICRIebYg7+SjvTtGBYsFV7plsCcYb/R9yLmhVCf
# fKHKKaYe9sQJ82apOIkTc+nnW8BQQx6XUmU/A//iZ8JGLk6DpJcZ8f1m/2rVZTsl
# 9+lsmpBf4w+uR4o+Womhfw==
# =MFh3
# -----END PGP SIGNATURE-----
# gpg: Signature made Thu 31 Aug 2023 06:43:53 EDT
# gpg:                using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg:                issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [full]
# gpg:                 aka "Peter Maydell <pmaydell@gmail.com>" [full]
# gpg:                 aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [full]
# gpg:                 aka "Peter Maydell <peter@archaic.org.uk>" [unknown]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83  15CF 3C25 25ED 1436 0CDE

* tag 'pull-target-arm-20230831' of https://git.linaro.org/people/pmaydell/qemu-arm: (24 commits)
  hw/arm: Set number of MPU regions correctly for an505, an521, an524
  hw/arm/armv7m: Add mpu-ns-regions and mpu-s-regions properties
  target/arm: Do all "ARM_FEATURE_X implies Y" checks in post_init
  rtc: Use time_t for passing and returning time offsets
  hw/rtc/aspeed_rtc: Use 64-bit offset for holding time_t difference
  hw/rtc/twl92230: Use int64_t for sec_offset and alm_sec
  hw/rtc/m48t59: Use 64-bit arithmetic in set_alarm()
  target/arm: Catch illegal-exception-return from EL3 with bad NSE/NS
  Add i.MX7 SRC device implementation
  Add i.MX7 missing TZ devices and memory regions
  Refactor i.MX7 processor code
  Add i.MX6UL missing devices.
  Refactor i.MX6UL processor code
  Remove i.MX7 IOMUX GPR device from i.MX6UL
  target/arm: properly document FEAT_CRC32
  target/arm: Implement FEAT_HPDS2 as a no-op
  target/arm: Suppress FEAT_TRBE (Trace Buffer Extension)
  target/arm: Apply access checks to neoverse-v1 special registers
  target/arm: Apply access checks to neoverse-n1 special registers
  target/arm: Introduce make_ccsidr64
  ...

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-08-31 08:31:03 -04:00
Peter Maydell
b8f7959f28 target/arm: Do all "ARM_FEATURE_X implies Y" checks in post_init
Where architecturally one ARM_FEATURE_X flag implies another
ARM_FEATURE_Y, we allow the CPU init function to only set X, and then
set Y for it.  Currently we do this in two places -- we set a few
flags in arm_cpu_post_init() because we need them to decide which
properties to create on the CPU object, and then we do the rest in
arm_cpu_realizefn().  However, this is fragile, because it's easy to
add a new property and not notice that this means that an X-implies-Y
check now has to move from realize to post-init.

As a specific example, the pmsav7-dregion property is conditional
on ARM_FEATURE_PMSA && ARM_FEATURE_V7, which means it won't appear
on the Cortex-M33 and -M55, because they set ARM_FEATURE_V8 and
rely on V8-implies-V7, which doesn't happen until the realizefn.

Move all of these X-implies-Y checks into a new function, which
we call at the top of arm_cpu_post_init(), so the feature bits
are available at that point.
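
A rough sketch of the shape of such a function, using a plain bitmask
in place of the real ARMCPU feature handling (names are placeholders,
not the QEMU helpers):

  #include <stdint.h>

  enum { SK_FEATURE_V7 = 1u << 0, SK_FEATURE_V8 = 1u << 1 };

  /* Resolve X-implies-Y rules once, before properties are created,
   * so property-creation decisions see the final feature set. */
  static uint32_t propagate_feature_implications(uint32_t features)
  {
      if (features & SK_FEATURE_V8) {
          features |= SK_FEATURE_V7;   /* V8 implies V7 */
      }
      /* ...further implication rules go here... */
      return features;
  }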

This does now give us the reverse issue, that if there's a feature
bit which is enabled or disabled by the setting of a property then
the X-implies-Y features that are dependent on that property need to
be in realize, not in this new function.  But the only one of those
is the "EL3 implies VBAR" which is already in the right place, so
putting things this way round seems better to me.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230724174335.2150499-2-peter.maydell@linaro.org
2023-08-31 11:05:04 +01:00
Peter Maydell
35aa6715dd target/arm: Catch illegal-exception-return from EL3 with bad NSE/NS
The architecture requires (R_TYTWB) that an attempt to return from EL3
when SCR_EL3.{NSE,NS} are {1,0} is an illegal exception return. (This
enforces that the CPU can't ever be executing below EL3 with the
NSE,NS bits indicating an invalid security state.)

We were missing this check; add it.
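
As a rough illustration of the check (not the actual patch; bit
positions follow the Arm ARM, with NS at bit 0 and NSE at bit 62 of
SCR_EL3):

  #include <stdbool.h>
  #include <stdint.h>

  #define SCR_NS   (1ULL << 0)
  #define SCR_NSE  (1ULL << 62)

  /* An ERET from EL3 to a lower EL is illegal if SCR_EL3.{NSE,NS} is
   * the reserved {1,0} encoding. */
  static bool eret_from_el3_is_illegal(uint64_t scr_el3, int dest_el)
  {
      return dest_el < 3 && (scr_el3 & SCR_NSE) && !(scr_el3 & SCR_NS);
  }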

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807150618.101357-1-peter.maydell@linaro.org
2023-08-31 09:45:17 +01:00
Alex Bennée
9e771a2fc6 target/arm: properly document FEAT_CRC32
This is a mandatory feature for Armv8.1 architectures but we don't
state the feature clearly in our emulation list. Also include
FEAT_CRC32 comment in aarch64_max_tcg_initfn for ease of grepping.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20230824075406.1515566-1-alex.bennee@linaro.org
Cc: qemu-stable@nongnu.org
Message-Id: <20230222110104.3996971-1-alex.bennee@linaro.org>
[PMM: pluralize 'instructions' in docs]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-08-31 09:45:16 +01:00
Richard Henderson
df9a391757 target/arm: Implement FEAT_HPDS2 as a no-op
This feature allows the operating system to set TCR_ELx.HWU*
to allow the implementation to use the PBHA bits from the
block and page descriptors for IMPLEMENTATION DEFINED
purposes.  Since QEMU has no need to use these bits, we may
simply ignore them.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230811214031.171020-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-08-31 09:45:16 +01:00
Richard Henderson
3d5f45ec89 target/arm: Suppress FEAT_TRBE (Trace Buffer Extension)
Like FEAT_TRF (Self-hosted Trace Extension), suppress tracing
external to the cpu, which is out of scope for QEMU.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230811214031.171020-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-08-31 09:45:16 +01:00
Richard Henderson
87da10b45c target/arm: Apply access checks to neoverse-v1 special registers
There is only one additional EL1 register modeled, which
also needs to use access_actlr_w.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230811214031.171020-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-08-31 09:45:15 +01:00
Richard Henderson
6d482423fc target/arm: Apply access checks to neoverse-n1 special registers
Access to many of the special registers is enabled or disabled
by ACTLR_EL[23], which we implement as constant 0, which means
that all writes outside EL3 should trap.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230811214031.171020-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-08-31 09:45:15 +01:00
Richard Henderson
d8100822d6 target/arm: Introduce make_ccsidr64
Do not hard-code the constants for Neoverse V1.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230811214031.171020-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-08-31 09:45:15 +01:00
Richard Henderson
cd305b5f31 target/arm: When tag memory is not present, set MTE=1
When the cpu supports MTE, but the system does not, reduce cpu
support to user instructions at EL0 instead of completely
disabling MTE.  If we encounter a cpu implementation which does
something else, we can revisit this setting.
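
A small sketch of the intent (helper name is illustrative; the MTE
field of ID_AA64PFR1_EL1 is bits [11:8] per the Arm ARM):

  #include <stdbool.h>
  #include <stdint.h>

  /* Without tag memory, clamp the advertised MTE level to 1
   * (EL0 instructions only) instead of removing it entirely. */
  static uint64_t clamp_mte_level(uint64_t id_aa64pfr1,
                                  bool has_tag_memory)
  {
      unsigned mte = (id_aa64pfr1 >> 8) & 0xf;
      if (!has_tag_memory && mte > 1) {
          id_aa64pfr1 = (id_aa64pfr1 & ~(0xfULL << 8)) | (1ULL << 8);
      }
      return id_aa64pfr1;
  }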

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230811214031.171020-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-08-31 09:45:15 +01:00
Richard Henderson
7134cb07b7 target/arm: Support more GM blocksizes
Support all of the easy GM block sizes.
Use direct memory operations, since the pointers are aligned.

While BS=2 (16 bytes, 1 tag) is a legal setting, that requires
an atomic store of one nibble.  This is not difficult, but there
is also no point in supporting it until required.

Note that cortex-a710 sets GM blocksize to match its cacheline
size of 64 bytes.  I expect many implementations will also
match the cacheline, which makes 16 bytes very unlikely.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230811214031.171020-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-08-31 09:45:14 +01:00
Richard Henderson
851ec6eba5 target/arm: Allow cpu to configure GM blocksize
Previously we hard-coded the blocksize with GMID_EL1_BS.
But the value we choose for -cpu max does not match the
value that cortex-a710 uses.

Mirror the way we handle dcz_blocksize.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230811214031.171020-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-08-31 09:45:14 +01:00
Richard Henderson
ae4acc696f target/arm: Reduce dcz_blocksize to uint8_t
This value is only 4 bits wide.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230811214031.171020-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-08-31 09:45:14 +01:00
Alex Bennée
d0e5fa849d gdbstub: replace global gdb_has_xml with a function
Try to make the self-reported global hack a little less hackish by
providing a query function instead. As gdb_has_xml was always set if
we negotiated XML, we can now use the presence of ->target_xml as the
test instead.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230829161528.2707696-12-alex.bennee@linaro.org>
2023-08-30 14:57:56 +01:00
Richard Henderson
a126425990 target/arm: Use tcg_gen_negsetcond_*
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-24 11:22:42 -07:00
Anton Johansson
d447a624d0 sysemu/hvf: Use vaddr for hvf_arch_[insert|remove]_hw_breakpoint
Changes the signature of the target-defined functions for
inserting/removing hvf hw breakpoints. The address and length arguments
are now of vaddr type, which both matches the type used internally in
accel/hvf/hvf-all.c and makes the API target-agnostic.

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230807155706.9580-5-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-24 11:21:40 -07:00
Anton Johansson
b8a6eb1862 sysemu/kvm: Use vaddr for kvm_arch_[insert|remove]_hw_breakpoint
Changes the signature of the target-defined functions for
inserting/removing kvm hw breakpoints. The address and length arguments
are now of vaddr type, which both matches the type used internally in
accel/kvm/kvm-all.c and makes the API target-agnostic.

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230807155706.9580-4-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-24 11:21:35 -07:00
Richard Henderson
cd1e4db736 target/arm: Fix 64-bit SSRA
A typo applied a byte-wise shift instead of a double-word shift.

Cc: qemu-stable@nongnu.org
Fixes: 631e565450 ("target/arm: Create gen_gvec_[us]sra")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1737
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230821022025.397682-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-08-22 17:31:14 +01:00
Richard Henderson
4b3520fd93 target/arm: Fix SME ST1Q
A typo, noted in the bug report, resulted in an
incorrect write offset.

Cc: qemu-stable@nongnu.org
Fixes: 7390e0e9ab ("target/arm: Implement SME LD1, ST1")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1833
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230818214255.146905-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-08-22 17:31:13 +01:00
Jean-Philippe Brucker
f6fc36deef target/arm/helper: Implement CNTHCTL_EL2.CNT[VP]MASK
When FEAT_RME is implemented, these bits override the value of
CNT[VP]_CTL_EL0.IMASK in Realm and Root state. Move the IRQ state update
into a new gt_update_irq() function and test those bits every time we
recompute the IRQ state.
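
A minimal sketch of the combined decision, assuming the mask bit
forces the interrupt to be masked when set in Realm or Root state
(function and parameter names are illustrative):

  #include <stdbool.h>

  static bool timer_irq_asserted(bool istatus, bool imask,
                                 bool realm_or_root, bool cnthctl_mask)
  {
      if (realm_or_root && cnthctl_mask) {
          return false;             /* masked regardless of IMASK */
      }
      return istatus && !imask;
  }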

Since we're removing the IRQ state from some trace events, add a new
trace event for gt_update_irq().

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Message-id: 20230809123706.1842548-7-jean-philippe@linaro.org
[PMM: only register change hook if not USER_ONLY and if TCG]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
2023-08-22 17:31:13 +01:00
Jean-Philippe Brucker
1acd00ef14 target/arm/helper: Check SCR_EL3.{NSE, NS} encoding for AT instructions
The AT instruction is UNDEFINED if the {NSE,NS} configuration is
invalid. Add a function to check this on all AT instructions that apply
to an EL lower than 3.

Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Message-id: 20230809123706.1842548-6-jean-philippe@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
2023-08-22 17:31:13 +01:00
Jean-Philippe Brucker
e1ee56ec23 target/arm: Pass security space rather than flag for AT instructions
At the moment we only handle Secure and Nonsecure security spaces for
the AT instructions. Add support for Realm and Root.

For AArch64, arm_security_space() gives the desired space. ARM DDI0487J
says (R_NYXTL):

  If EL3 is implemented, then when an address translation instruction
  that applies to an Exception level lower than EL3 is executed, the
  Effective value of SCR_EL3.{NSE, NS} determines the target Security
  state that the instruction applies to.

For AArch32, some instructions can access NonSecure space from Secure,
so we still need to pass the state explicitly to do_ats_write().

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230809123706.1842548-5-jean-philippe@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-08-22 17:31:12 +01:00
Jean-Philippe Brucker
f1269a98aa target/arm: Skip granule protection checks for AT instructions
GPC checks are not performed on the output address for AT instructions,
as stated by ARM DDI 0487J in D8.12.2:

  When populating PAR_EL1 with the result of an address translation
  instruction, granule protection checks are not performed on the final
  output address of a successful translation.

Rename get_phys_addr_with_secure(), since it's only used to handle AT
instructions.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230809123706.1842548-4-jean-philippe@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-08-22 17:31:12 +01:00
Jean-Philippe Brucker
ceaa97465f target/arm/helper: Fix tlbmask and tlbbits for TLBI VAE2*
When HCR_EL2.E2H is enabled, TLB entries are formed using the EL2&0
translation regime, instead of the EL2 translation regime. The TLB VAE2*
instructions invalidate the regime that corresponds to the current value
of HCR_EL2.E2H.

At the moment we only invalidate the EL2 translation regime. This causes
problems with RMM, which issues TLBI VAE2IS instructions with
HCR_EL2.E2H enabled. Update vae2_tlbmask() to take HCR_EL2.E2H into
account.

Add vae2_tlbbits() as well, since the top-byte-ignore configuration is
different between the EL2&0 and EL2 regime.
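
A sketch of the regime selection only (illustrative; HCR_EL2.E2H is
bit 34):

  #include <stdint.h>

  #define HCR_E2H (1ULL << 34)

  typedef enum { REGIME_E2, REGIME_E20 } TlbiRegimeSketch;

  /* TLBI VAE2* targets the EL2&0 regime when E2H is set, else EL2. */
  static TlbiRegimeSketch vae2_regime(uint64_t hcr_el2)
  {
      return (hcr_el2 & HCR_E2H) ? REGIME_E20 : REGIME_E2;
  }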

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230809123706.1842548-3-jean-philippe@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-08-22 17:31:11 +01:00
Jean-Philippe Brucker
da64251e93 target/arm/ptw: Load stage-2 tables from realm physical space
In realm state, stage-2 translation tables are fetched from the realm
physical address space (R_PGRQD).

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230809123706.1842548-2-jean-philippe@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-08-22 17:31:11 +01:00
Peter Maydell
b17d86eb5e target/arm: Adjust PAR_EL1.SH for Device and Normal-NC memory types
The PAR_EL1.SH field documents that for the cases of:
 * Device memory
 * Normal memory with both Inner and Outer Non-Cacheable
the field should be 0b10 rather than whatever was in the
translation table descriptor field. (In the pseudocode this
is handled by PAREncodeShareability().) Perform this
adjustment when assembling a PAR value.
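
As a rough sketch of that adjustment (illustrative, not the QEMU
code):

  #include <stdbool.h>

  /* Device memory, or Normal memory that is both Inner and Outer
   * Non-cacheable, reports SH = 0b10; otherwise keep the descriptor's
   * shareability field. */
  static unsigned par_shareability(bool is_device, bool inner_nc,
                                   bool outer_nc, unsigned desc_sh)
  {
      return (is_device || (inner_nc && outer_nc)) ? 2 : desc_sh;
  }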

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-16-peter.maydell@linaro.org
2023-08-22 17:31:10 +01:00
Peter Maydell
a729d63642 target/arm/ptw: Report stage 2 fault level for stage 2 faults on stage 1 ptw
When we report faults due to stage 2 faults during a stage 1
page table walk, the 'level' parameter should be the level
of the walk in stage 2 that faulted, not the level of the
walk in stage 1. Correct the reporting of these faults.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-15-peter.maydell@linaro.org
2023-08-22 17:31:10 +01:00
Peter Maydell
d53e25075b target/arm/ptw: Check for block descriptors at invalid levels
The architecture doesn't permit block descriptors at any arbitrary
level of the page table walk; it depends on the granule size which
levels are permitted.  We implemented only a partial version of this
check which assumes that block descriptors are valid at all levels
except level 3, which meant that we wouldn't deliver the Translation
fault for all cases of this sort of guest page table error.

Implement the logic corresponding to the pseudocode
AArch64.DecodeDescriptorType() and AArch64.BlockDescSupported().
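
A simplified sketch of the check (base architecture only, ignoring
the 52-bit physical address extensions):

  #include <stdbool.h>

  /* 4KB granule allows block descriptors at levels 1 and 2; 16KB and
   * 64KB granules allow them only at level 2. */
  static bool block_desc_supported(int granule_kb, int level)
  {
      switch (granule_kb) {
      case 4:
          return level == 1 || level == 2;
      case 16:
      case 64:
          return level == 2;
      default:
          return false;
      }
  }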

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-14-peter.maydell@linaro.org
2023-08-22 17:31:10 +01:00
Peter Maydell
3d9ca96221 target/arm/ptw: Set attributes correctly for MMU disabled data accesses
When the MMU is disabled, data accesses should be Device nGnRnE,
Outer Shareable, Untagged.  We handle the other cases from
AArch64.S1DisabledOutput() correctly but missed this one.
Device nGnRnE is memattr == 0, so the only part we were missing
was that shareability should be set to 2 for both insn fetches
and data accesses.
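
A small sketch of the resulting attributes for data accesses (struct
and names are illustrative):

  #include <stdbool.h>
  #include <stdint.h>

  typedef struct {
      uint8_t memattr;        /* 0 = Device-nGnRnE */
      uint8_t shareability;   /* 2 = Outer Shareable */
      bool tagged;
  } S1DisabledAttrsSketch;

  static S1DisabledAttrsSketch s1_disabled_data_attrs(void)
  {
      return (S1DisabledAttrsSketch){ .memattr = 0,
                                      .shareability = 2,
                                      .tagged = false };
  }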

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-13-peter.maydell@linaro.org
2023-08-22 17:31:09 +01:00
Peter Maydell
b02f5e06bc target/arm/ptw: Drop S1Translate::out_secure
We only use S1Translate::out_secure in two places, where we are
setting up MemTxAttrs for a page table load. We can use
arm_space_is_secure(ptw->out_space) instead, which guarantees
that we're setting the MemTxAttrs secure and space fields
consistently, and allows us to drop the out_secure field in
S1Translate entirely.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-12-peter.maydell@linaro.org
2023-08-22 17:31:09 +01:00
Peter Maydell
6279f6dcdb target/arm/ptw: Remove S1Translate::in_secure
We no longer look at the in_secure field of the S1Translate struct
anyway, so we can remove it and all the code which sets it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-11-peter.maydell@linaro.org
2023-08-22 17:31:08 +01:00
Peter Maydell
cdbae5e7e1 target/arm/ptw: Remove last uses of ptw->in_secure
Replace the last uses of ptw->in_secure with appropriate
checks on ptw->in_space.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-10-peter.maydell@linaro.org
2023-08-22 17:31:08 +01:00
Peter Maydell
b9c139dc58 target/arm/ptw: Only fold in NSTable bit effects in Secure state
When we do a translation in Secure state, the NSTable bits in table
descriptors may downgrade us to NonSecure; we update ptw->in_secure
and ptw->in_space accordingly.  We guard that check correctly with a
conditional that means it's only applied for Secure stage 1
translations.  However, later on in get_phys_addr_lpae() we fold the
effects of the NSTable bits into the final descriptor attributes
bits, and there we do it unconditionally regardless of the CPU state.
That means that in Realm state (where in_secure is false) we will set
bit 5 in attrs, and later use it to decide to output to non-secure
space.

We don't in fact need to do this folding in at all any more (since
commit 2f1ff4e7b9): if an NSTable bit was set then we have
already set ptw->in_space to ARMSS_NonSecure, and in that situation
we don't look at attrs bit 5.  The only thing we still need to deal
with is the real NS bit in the final descriptor word, so we can just
drop the code that ORed in the NSTable bit.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-9-peter.maydell@linaro.org
2023-08-22 17:31:08 +01:00
Peter Maydell
4477020d38 target/arm: Pass an ARMSecuritySpace to arm_is_el2_enabled_secstate()
Pass an ARMSecuritySpace instead of a bool secure to
arm_is_el2_enabled_secstate(). This doesn't change behaviour.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-8-peter.maydell@linaro.org
2023-08-22 17:31:07 +01:00
Peter Maydell
2d12bb96bd target/arm/ptw: Pass an ARMSecuritySpace to arm_hcr_el2_eff_secstate()
arm_hcr_el2_eff_secstate() takes a bool secure, which it uses to
determine whether EL2 is enabled in the current security state.
With the advent of FEAT_RME this is no longer sufficient, because
EL2 can be enabled for Secure state but not for Root, and both
of those will pass 'secure == true' in the callsites in ptw.c.

As it happens in all of our callsites in ptw.c we either avoid making
the call or else avoid using the returned value if we're doing a
translation for Root, so this is not a behaviour change even if the
experimental FEAT_RME is enabled.  But it is less confusing in the
ptw.c code if we avoid the use of a bool secure that duplicates some
of the information in the ARMSecuritySpace argument.

Make arm_hcr_el2_eff_secstate() take an ARMSecuritySpace argument
instead. Because we always want to know the HCR_EL2 for the
security state defined by the current effective value of
SCR_EL3.{NSE,NS}, it makes no sense to pass ARMSS_Root here,
and we assert that callers don't do that.

To avoid the assert(), we thus push the call to
arm_hcr_el2_eff_secstate() down into the cases in
regime_translation_disabled() that need it, rather than calling the
function and ignoring the result for the Root space translations.
All other calls to this function in ptw.c are already in places
where we have confirmed that the mmu_idx is a stage 2 translation
or that the regime EL is not 3.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-7-peter.maydell@linaro.org
2023-08-22 17:31:07 +01:00
Peter Maydell
d1289140a0 target/arm/ptw: Pass ARMSecuritySpace to regime_translation_disabled()
Plumb the ARMSecuritySpace through to regime_translation_disabled()
rather than just a bool is_secure.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-6-peter.maydell@linaro.org
2023-08-22 17:31:06 +01:00
Peter Maydell
a5637bec4c target/arm/ptw: Pass ptw into get_phys_addr_pmsa*() and get_phys_addr_disabled()
In commit 6d2654ffac we created the S1Translate struct and
used it to plumb through various arguments that we were previously
passing one-at-a-time to get_phys_addr_v5(), get_phys_addr_v6(), and
get_phys_addr_lpae().  Extend that pattern to get_phys_addr_pmsav5(),
get_phys_addr_pmsav7(), get_phys_addr_pmsav8() and
get_phys_addr_disabled(), so that all the get_phys_addr_* functions
we call from get_phys_addr_nogpc() take the S1Translate struct rather
than the mmu_idx and is_secure bool.

(This refactoring is a prelude to having the called functions look
at ptw->is_space rather than using an is_secure boolean.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-5-peter.maydell@linaro.org
2023-08-22 17:31:06 +01:00
Peter Maydell
4f51edd3cd target/arm/ptw: Set s1ns bit in fault info more consistently
The s1ns bit in ARMMMUFaultInfo is documented as "true if
we faulted on a non-secure IPA while in secure state". Both the
places which look at this bit only do so after having confirmed
that this is a stage 2 fault and we're dealing with Secure EL2,
which leaves the ptw.c code free to set the bit to any random
value in the other cases.

Instead of taking advantage of that freedom, consistently
make the bit be set to false for the "not a stage 2 fault
for Secure EL2" cases. This removes some cases where we
were using an 'is_secure' boolean and leaving the reader
guessing about whether that was the right thing for Realm
and Root cases.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-4-peter.maydell@linaro.org
2023-08-22 17:31:05 +01:00
Peter Maydell
f641566074 target/arm/ptw: Don't report GPC faults on stage 1 ptw as stage2 faults
In S1_ptw_translate() we set up the ARMMMUFaultInfo if the attempt to
translate the page descriptor address into a physical address fails.
This used to only be possible if we are doing a stage 2 ptw for that
descriptor address, and so the code always sets fi->stage2 and
fi->s1ptw to true.  However, with FEAT_RME it is also possible for
the lookup of the page descriptor address to fail because of a
Granule Protection Check fault.  These should not be reported as
stage 2, otherwise arm_deliver_fault() will incorrectly set
HPFAR_EL2.  Similarly the s1ptw bit should only be set for stage 2
faults on stage 1 translation table walks, i.e.  not for GPC faults.

Add a comment to the other place where we might detect a
stage2-fault-on-stage-1-ptw, in arm_casq_ptw(), noting why we know in
that case that it must really be a stage 2 fault and not a GPC fault.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-3-peter.maydell@linaro.org
2023-08-22 17:31:05 +01:00
Peter Maydell
c986d86039 target/arm/ptw: Don't set fi->s1ptw for UnsuppAtomicUpdate fault
For an Unsupported Atomic Update fault where the stage 1 translation
table descriptor update can't be done because it's to an unsupported
memory type, this is a stage 1 abort (per the Arm ARM R_VSXXT).  This
means we should not set fi->s1ptw, because this will cause the code
in the get_phys_addr_lpae() error-exit path to mark it as stage 2.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-2-peter.maydell@linaro.org
2023-08-22 17:31:05 +01:00
Akihiko Odaki
1ab445af8c accel/kvm: Specify default IPA size for arm64
Before this change, the default KVM type, which is used for non-virt
machine models, was 0.

The kernel documentation says:
> On arm64, the physical address size for a VM (IPA Size limit) is
> limited to 40bits by default. The limit can be configured if the host
> supports the extension KVM_CAP_ARM_VM_IPA_SIZE. When supported, use
> KVM_VM_TYPE_ARM_IPA_SIZE(IPA_Bits) to set the size in the machine type
> identifier, where IPA_Bits is the maximum width of any physical
> address used by the VM. The IPA_Bits is encoded in bits[7-0] of the
> machine type identifier.
>
> e.g, to configure a guest to use 48bit physical address size::
>
>     vm_fd = ioctl(dev_fd, KVM_CREATE_VM, KVM_VM_TYPE_ARM_IPA_SIZE(48));
>
> The requested size (IPA_Bits) must be:
>
>  ==   =========================================================
>   0   Implies default size, 40bits (for backward compatibility)
>   N   Implies N bits, where N is a positive integer such that,
>       32 <= N <= Host_IPA_Limit
>  ==   =========================================================

> Host_IPA_Limit is the maximum possible value for IPA_Bits on the host
> and is dependent on the CPU capability and the kernel configuration.
> The limit can be retrieved using KVM_CAP_ARM_VM_IPA_SIZE of the
> KVM_CHECK_EXTENSION ioctl() at run-time.
>
> Creation of the VM will fail if the requested IPA size (whether it is
> implicit or explicit) is unsupported on the host.
https://docs.kernel.org/virt/kvm/api.html#kvm-create-vm

So if Host_IPA_Limit < 40, specifying 0 as the type will fail. This
actually confused libvirt, which uses the "none" machine model to
probe KVM availability, on the M2 MacBook Air.

Fix this by using Host_IPA_Limit as the default type when
KVM_CAP_ARM_VM_IPA_SIZE is available.
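
A minimal sketch of the idea, assuming the linux/kvm.h definitions of
KVM_CAP_ARM_VM_IPA_SIZE and KVM_VM_TYPE_ARM_IPA_SIZE() (not the QEMU
implementation):

  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  /* Query the host IPA limit on the /dev/kvm fd; use it as the VM
   * type if available, else fall back to 0 (the 40-bit default). */
  static int default_arm_vm_type(int kvm_fd)
  {
      int ipa = ioctl(kvm_fd, KVM_CHECK_EXTENSION,
                      KVM_CAP_ARM_VM_IPA_SIZE);
      return ipa > 0 ? KVM_VM_TYPE_ARM_IPA_SIZE(ipa) : 0;
  }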

Cc: qemu-stable@nongnu.org
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-id: 20230727073134.134102-3-akihiko.odaki@daynix.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-08-22 17:31:02 +01:00
Akihiko Odaki
5e0d65909c kvm: Introduce kvm_arch_get_default_type hook
kvm_arch_get_default_type() returns the default KVM type. This hook is
particularly useful to derive a KVM type that is valid for "none"
machine model, which is used by libvirt to probe the availability of
KVM.

For MIPS, the existing mips_kvm_type() is reused. This function ensures
the availability of VZ, which is mandatory for using KVM with the
current QEMU.

Cc: qemu-stable@nongnu.org
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-id: 20230727073134.134102-2-akihiko.odaki@daynix.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: added doc comment for new function]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2023-08-22 17:31:02 +01:00
Peter Maydell
71054f72f1 target/arm/tcg: Don't build AArch64 decodetree files for qemu-system-arm
Currently we list all the Arm decodetree files together and add them
unconditionally to arm_ss.  This means we build them for both
qemu-system-aarch64 and qemu-system-arm.  However, some of them are
AArch64-specific, so there is no need to build them for
qemu-system-arm.  (Meson is smart enough to notice that the generated
.c.inc file is not used by any objects that go into qemu-system-arm,
so we only unnecessarily run decodetree, not anything more
heavyweight like a recompile or relink, but it's still unnecessary
work.)

Split gen into gen_a32 and gen_a64, and only add gen_a64 for
TARGET_AARCH64 compiles.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230718104628.1137734-1-peter.maydell@linaro.org
2023-07-31 11:41:21 +01:00
Peter Maydell
2b0d656ab6 target/arm: Avoid writing to constant TCGv in trans_CSEL()
In commit 0b188ea05a we changed the implementation of
trans_CSEL() to use tcg_constant_i32(). However, this change
was incorrect, because the implementation of the function
sets up the TCGv_i32 rn and rm to be either zero or else
a TCG temp created in load_reg(), and these TCG temps are
then in both cases written to by the emitted TCG ops.
The result is that we hit a TCG assertion:

qemu-system-arm: ../../tcg/tcg.c:4455: tcg_reg_alloc_mov: Assertion `!temp_readonly(ots)' failed.

(or on a non-debug build, just produce a garbage result)

Adjust the code so that rn and rm are always writeable
temporaries whether the instruction is using the special
case "0" or a normal register as input.
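
A sketch of the pattern only (uses the QEMU-internal TCG API, so it
only builds inside the QEMU tree; it is not the actual patch):

  #include "tcg/tcg-op.h"

  /* Return a writeable temp holding zero.  Unlike tcg_constant_i32(0),
   * the result may safely be used as the output of later TCG ops. */
  static TCGv_i32 writeable_zero(void)
  {
      TCGv_i32 t = tcg_temp_new_i32();
      tcg_gen_movi_i32(t, 0);
      return t;
  }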

Cc: qemu-stable@nongnu.org
Fixes: 0b188ea05a ("target/arm: Use tcg_constant in trans_CSEL")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230727103906.2641264-1-peter.maydell@linaro.org
2023-07-31 11:40:24 +01:00
Richard Henderson
638511e992 target/arm: Fix MemOp for STGP
When converting to decodetree, the code to rebuild mop for the pair
only made it into trans_STP and not into trans_STGP.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1790
Fixes: 8c212eb659 ("target/arm: Convert load/store-pair to decodetree")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230726165416.309624-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-07-31 11:17:27 +01:00
Peter Maydell
0b58dc4561 trivial-patches 25-07-2023

Merge tag 'pull-trivial-patches' of https://gitlab.com/mjt0k/qemu into staging

trivial-patches 25-07-2023

# -----BEGIN PGP SIGNATURE-----
#
# iQFDBAABCAAtFiEEe3O61ovnosKJMUsicBtPaxppPlkFAmS/2vgPHG1qdEB0bHMu
# bXNrLnJ1AAoJEHAbT2saaT5ZT6MH/j5L3P9yLV6TqW+DkhCppbmBygqxz2SbQjwl
# dVVfSLpJNbtpvLfEnvpb+ms+ZdaOCGj8IofAVf9w0VaIYJFP1srFphY/1x+RYVnw
# kDjCLzuLNSCAdCV2HPqsrMKzdFctZ/MfK+QzfcGik9IvmCNPYWOhpmevs+xAIEJd
# b0xk152zy2fIIC3vKK+3KcM7MFkqZWJ6z0pzUZAyEBS+aQyuZNPJ/cO8xMXotbP2
# jqv12SNGV2GLH1acvsd8GQwDB9MamstB4r8NWpSpT/AyPwOgmMR+j5B8a/WEBJCs
# OcEW/pEyrumSygqf9z01YoNJQUCSvSpg5aq4+S2cRDslmUgFDmw=
# =wCoQ
# -----END PGP SIGNATURE-----
# gpg: Signature made Tue 25 Jul 2023 15:23:52 BST
# gpg:                using RSA key 7B73BAD68BE7A2C289314B22701B4F6B1A693E59
# gpg:                issuer "mjt@tls.msk.ru"
# gpg: Good signature from "Michael Tokarev <mjt@tls.msk.ru>" [full]
# gpg:                 aka "Michael Tokarev <mjt@corpit.ru>" [full]
# gpg:                 aka "Michael Tokarev <mjt@debian.org>" [full]
# Primary key fingerprint: 6EE1 95D1 886E 8FFB 810D  4324 457C E0A0 8044 65C5
#      Subkey fingerprint: 7B73 BAD6 8BE7 A2C2 8931  4B22 701B 4F6B 1A69 3E59

* tag 'pull-trivial-patches' of https://gitlab.com/mjt0k/qemu:
  qapi: Correct "eg." to "e.g." in documentation
  hw/pci: add comment to explain checking for available function 0 in pci hotplug
  target/tricore: Rename tricore_feature
  hw/9pfs: spelling fixes
  other architectures: spelling fixes
  arm: spelling fixes
  s390x: spelling fixes
  migration: spelling fixes

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-07-25 16:30:52 +01:00
Michael Tokarev
673d821541 arm: spelling fixes
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
2023-07-25 17:13:53 +03:00
Peter Maydell
5d78893f39 target/arm: Special case M-profile in debug_helper.c code
A lot of the code called from helper_exception_bkpt_insn() is written
assuming A-profile, but we will also call this helper on M-profile
CPUs when they execute a BKPT insn.  This used to work by accident,
but recent changes mean that we will hit an assert when some of this
code calls down into lower level functions that end up calling
arm_security_space_below_el3(), arm_el_is_aa64(), and other functions
that now explicitly assert that the guest CPU is not M-profile.

Handle M-profile directly to avoid the assertions:
 * in arm_debug_target_el(), M-profile debug exceptions always
   go to EL1
 * in arm_debug_exception_fsr(), M-profile always uses the short
   format FSR (compare commit d7fe699be5, though in this case
   the code in arm_v7m_cpu_do_interrupt() does not need to
   look at the FSR value at all)

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1775
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230721143239.1753066-1-peter.maydell@linaro.org
2023-07-25 10:56:51 +01:00
Peter Maydell
eeb9578c36 target/arm/ptw.c: Account for FEAT_RME when applying {N}SW, SA bits
In get_phys_addr_twostage() the code that applies the effects of
VSTCR.{SA,SW} and VTCR.{NSA,NSW} only updates result->f.attrs.secure.
Now we also have f.attrs.space for FEAT_RME, we need to keep the two
in sync.

These bits only have an effect for Secure space translations, not
for Root, so use the input in_space field to determine whether to
apply them rather than the input is_secure. This doesn't actually
make a difference because Root translations are never two-stage,
but it's a little clearer.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230710152130.3928330-4-peter.maydell@linaro.org
2023-07-17 11:05:08 +01:00
Peter Maydell
3f74da440d target/arm: Fix S1_ptw_translate() debug path
In commit fe4a5472cc we rearranged the logic in S1_ptw_translate()
so that the debug-access "call get_phys_addr_*" codepath is used both
when S1 is doing ptw reads from stage 2 and when it is doing ptw
reads from physical memory.  However, we didn't update the
calculation of s2ptw->in_space and s2ptw->in_secure to account for
the "ptw reads from physical memory" case.  This meant that debug
accesses when in Secure state broke.

Create a new function S2_security_space() which returns the
correct security space to use for the ptw load, and use it to
determine the correct .in_secure and .in_space fields for the
stage 2 lookup for the ptw load.

Reported-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230710152130.3928330-3-peter.maydell@linaro.org
Fixes: fe4a5472cc ("target/arm: Use get_phys_addr_with_struct in S1_ptw_translate")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-07-17 11:05:08 +01:00
Peter Maydell
34eed55127 target/arm/ptw.c: Add comments to S1Translate struct fields
Add comments to the in_* fields in the S1Translate struct
that explain what they're doing.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230710152130.3928330-2-peter.maydell@linaro.org
2023-07-17 11:05:07 +01:00
Richard Henderson
bdb01515ed target/arm: Use aesdec_IMC
This implements the AESIMC instruction.  We have converted everything
to crypto/aes-round.h; crypto/aes.h is no longer needed.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-07-09 13:47:05 +01:00
Richard Henderson
8b103ed70e target/arm: Use aesenc_MC
This implements the AESMC instruction.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-07-09 13:46:53 +01:00
Richard Henderson
2a8b545ffd target/arm: Use aesdec_ISB_ISR_AK
This implements the AESD instruction.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-07-09 13:46:47 +01:00
Richard Henderson
552d892494 target/arm: Use aesenc_SB_SR_AK
This implements the AESE instruction.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-07-09 13:46:36 +01:00
Richard Henderson
0f23908c5c target/arm: Demultiplex AESE and AESMC
Split these helpers so that we are not passing 'decrypt'
within the simd descriptor.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-07-08 07:30:18 +01:00
Richard Henderson
fb250c59aa target/arm: Move aesmc and aesimc tables to crypto/aes.c
We do not currently have a table in crypto/ for just MixColumns.
Move both tables for consistency.

Acked-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-07-08 07:30:17 +01:00
Peter Maydell
c410772351 target/arm: Avoid over-length shift in arm_cpu_sve_finalize() error case
If you build QEMU with the clang sanitizer enabled, you can see it
fire when running the arm-cpu-features test:

$ QTEST_QEMU_BINARY=./build/arm-clang/qemu-system-aarch64 ./build/arm-clang/tests/qtest/arm-cpu-features
[...]
../../target/arm/cpu64.c:125:19: runtime error: shift exponent 64 is too large for 64-bit type 'unsigned long long'
[...]

This happens because the user can specify some incorrect SVE
properties that result in our calculating a max_vq of 0.  We catch
this and error out, but before we do that we calculate

 vq_mask = MAKE_64BIT_MASK(0, max_vq);

and the MAKE_64BIT_MASK() call is only valid for lengths that are
greater than zero, so we hit the undefined behaviour.

Change the logic so that if max_vq is 0 we specifically set vq_mask
to 0 without going via MAKE_64BIT_MASK().  This lets us drop the
max_vq check from the error-exit logic, because if max_vq is 0 then
vq_map must now be 0.
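
A sketch of the guarded form (MAKE_64BIT_MASK stand-in shown for
self-containment; the real macro lives in QEMU's headers):

  #include <stdint.h>

  /* Valid only for 0 < length <= 64, hence the explicit max_vq check. */
  #define MAKE_64BIT_MASK(shift, length) \
      (((~0ULL) >> (64 - (length))) << (shift))

  static uint64_t vq_mask_for(unsigned max_vq)
  {
      return max_vq ? MAKE_64BIT_MASK(0, max_vq) : 0;
  }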

The UB only happens in the case where the user passed us an incorrect
set of SVE properties, so it's not a big problem in practice.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230704154332.3014896-1-peter.maydell@linaro.org
2023-07-06 13:36:51 +01:00
Peter Maydell
c74138c6c0 target/arm: Define neoverse-v1
Now that we have implemented support for FEAT_LSE2, we can define
a CPU model for the Neoverse-V1, and enable it for the virt and
sbsa-ref boards.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230704130647.2842917-3-peter.maydell@linaro.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2023-07-06 13:30:10 +01:00
Peter Maydell
7d8c283e10 target/arm: Suppress more TCG unimplemented features in ID registers
We already squash the ID register field for FEAT_SPE (the Statistical
Profiling Extension) because TCG does not implement it and if we
advertise it to the guest the guest will crash trying to look at
non-existent system registers.  Do the same for some other features
which a real hardware Neoverse-V1 implements but which TCG doesn't:
 * FEAT_TRF (Self-hosted Trace Extension)
 * Trace Macrocell system register access
 * Memory mapped trace
 * FEAT_AMU (Activity Monitors Extension)
 * FEAT_MPAM (Memory Partitioning and Monitoring Extension)
 * FEAT_NV (Nested Virtualization)

Most of these, like FEAT_SPE, are "introspection/trace" type features
which QEMU is unlikely to ever implement.  The odd-one-out here is
FEAT_NV -- we could implement that and at some point we probably
will.
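
The mechanical shape of the squashing is just clearing ID register
fields before they are exposed to the guest; QEMU uses FIELD_DP64()
for this, but a plain mask illustrates it (field position shown for
ID_AA64PFR0_EL1.AMU, bits [47:44]):

  #include <stdint.h>

  static uint64_t clear_id_field(uint64_t reg, unsigned shift,
                                 unsigned width)
  {
      return reg & ~(((1ULL << width) - 1) << shift);
  }

  /* e.g. hide FEAT_AMU from the guest: */
  /*   id_aa64pfr0 = clear_id_field(id_aa64pfr0, 44, 4); */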

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230704130647.2842917-2-peter.maydell@linaro.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2023-07-06 13:28:08 +01:00
Fabiano Rosas
893ca916c0 target/arm: gdbstub: Guard M-profile code with CONFIG_TCG
This code is only relevant when TCG is present in the build. Building
with --disable-tcg --enable-xen on an x86 host we get:

$ ../configure --target-list=x86_64-softmmu,aarch64-softmmu --disable-tcg --enable-xen
$ make -j$(nproc)
...
libqemu-aarch64-softmmu.fa.p/target_arm_gdbstub.c.o: in function `m_sysreg_ptr':
 ../target/arm/gdbstub.c:358: undefined reference to `arm_v7m_get_sp_ptr'
 ../target/arm/gdbstub.c:361: undefined reference to `arm_v7m_get_sp_ptr'

libqemu-aarch64-softmmu.fa.p/target_arm_gdbstub.c.o: in function `arm_gdb_get_m_systemreg':
../target/arm/gdbstub.c:405: undefined reference to `arm_v7m_mrs_control'

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Message-id: 20230628164821.16771-1-farosas@suse.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-07-06 13:26:43 +01:00
John Högberg
9719f125b8 target/arm: Handle IC IVAU to improve compatibility with JITs
Unlike architectures with precise self-modifying code semantics
(e.g. x86) ARM processors do not maintain coherency for instruction
execution and memory, requiring an instruction synchronization
barrier on every core that will execute the new code, and on many
models also the explicit use of cache management instructions.

While this is required to make JITs work on actual hardware, QEMU
has gotten away with not handling this since it does not emulate
caches, and unconditionally invalidates code whenever the softmmu
or the user-mode page protection logic detects that code has been
modified.

Unfortunately the latter does not work in the face of dual-mapped
code (a common W^X workaround), where one page is executable and
the other is writable: user-mode has no way to connect one with the
other as that is only known to the kernel and the emulated
application.

This commit works around the issue by telling software that
instruction cache invalidation is required by clearing the
CTR_EL0.DIC flag (regardless of whether the emulated processor
needs it), and then invalidating code in IC IVAU instructions.
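
A sketch of the CTR_EL0 adjustment only (the IC IVAU handling itself
is not shown; DIC is bit 29 of CTR_EL0):

  #include <stdint.h>

  #define CTR_DIC (1ULL << 29)

  /* Report CTR_EL0 with DIC clear so guests issue IC IVAU, which can
   * then be used to invalidate translated code for dual-mapped pages. */
  static uint64_t ctr_el0_for_guest(uint64_t ctr)
  {
      return ctr & ~CTR_DIC;
  }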

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1034

Co-authored-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: John Högberg <john.hogberg@ericsson.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 168778890374.24232.3402138851538068785-1@git.sr.ht
[PMM: removed unnecessary AArch64 feature check; moved
 "clear CTR_EL0.DIC" code up a bit so it's not in the middle
 of the vfp/neon related tests]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-07-06 12:58:42 +01:00
Richard Henderson
1f51573f79 target/arm: Fix SME full tile indexing
For the outer product set of insns, which take an entire matrix
tile as output, the argument is not a combined tile+column.
Therefore using get_tile_rowcol was incorrect, as we extracted
the tile number from itself.

The test case relies only on assembler support for SME, since
no release of GCC recognizes -march=armv9-a+sme yet.

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1620
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230622151201.1578522-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: dropped now-unneeded changes to sysregs CFLAGS]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-07-06 12:56:21 +01:00
Richard Henderson
270bea47a2 target/arm: Dump ZA[] when active
Always print each matrix row whole, one per line, so that we
get the entire matrix in the proper shape.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230622151201.1578522-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-07-06 12:56:19 +01:00
Richard Henderson
a9d8407016 target/arm: Avoid splitting Zregs across lines in dump
Allow the line length to extend to 548 columns.  While annoyingly wide,
it's still less confusing than the continuations we print.  Also, the
default VL used by Linux (and max for A64FX) uses only 140 columns.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230622151201.1578522-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-07-06 12:56:15 +01:00
Eric Auger
587f8b333c target/arm: Add raw_writes ops for register whose write induce TLB maintenance
Some registers whose 'cooked' writefns induce TLB maintenance do
not have raw_writefn ops defined. If only the writefn op is set
(i.e. no raw_writefn is provided), the cooked writefn is assumed to
also work as the raw one. For those registers it is not obvious that
the tlb_flush works in KVM mode, so it is better/safer to set the
raw write explicitly.
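
An abstract sketch of the cooked/raw split (plain C stand-ins, not
QEMU's ARMCPRegInfo types): the cooked write performs the TLB
maintenance, while the raw write used for KVM/migration sync only
stores the value.

  #include <stdint.h>

  typedef struct { uint64_t ttbr0; } CpuStateSketch;

  static void tlb_flush_sketch(CpuStateSketch *s) { (void)s; }

  static void raw_write_sketch(CpuStateSketch *s, uint64_t v)
  {
      s->ttbr0 = v;                 /* just store the value */
  }

  static void cooked_write_sketch(CpuStateSketch *s, uint64_t v)
  {
      raw_write_sketch(s, v);
      tlb_flush_sketch(s);          /* maintenance the raw path skips */
  }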

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-07-04 14:08:47 +01:00
Alex Bennée
6d03226b42 plugins: force slow path when plugins instrument memory ops
The lack of SVE memory instrumentation has been an omission in plugin
handling since it was introduced. Fortunately we can utilise the
probe_* functions to force all memory access to follow the slow
path. We do this by checking the access type and presence of plugin
memory callbacks and if set return the TLB_MMIO flag.

We have to jump through a few hoops in user mode to re-use the flag
but it has the desired effect:

 ./qemu-system-aarch64 -display none -serial mon:stdio \
   -M virt -cpu max -semihosting-config enable=on \
   -kernel ./tests/tcg/aarch64-softmmu/memory-sve \
   -plugin ./contrib/plugins/libexeclog.so,ifilter=st1w,afilter=0x40001808 -d plugin

gives (disas doesn't currently understand st1w):

  0, 0x40001808, 0xe54342a0, ".byte 0xa0, 0x42, 0x43, 0xe5", store, 0x40213010, RAM, store, 0x40213014, RAM, store, 0x40213018, RAM

And for user-mode:

  ./qemu-aarch64 \
    -plugin contrib/plugins/libexeclog.so,afilter=0x4007c0 \
    -d plugin \
    ./tests/tcg/aarch64-linux-user/sha512-sve

gives:

  1..10
  ok 1 - do_test(&tests[i])
  0, 0x4007c0, 0xa4004b80, ".byte 0x80, 0x4b, 0x00, 0xa4", load, 0x5500800370, load, 0x5500800371, load, 0x5500800372, load, 0x5500800373, load, 0x5500800374, load, 0x5500800375, load, 0x5500800376, load, 0x5500800377, load, 0x5500800378, load, 0x5500800379, load, 0x550080037a, load, 0x550080037b, load, 0x550080037c, load, 0x550080037d, load, 0x550080037e, load, 0x550080037f, load, 0x5500800380, load, 0x5500800381, load, 0x5500800382, load, 0x5500800383, load, 0x5500800384, load, 0x5500800385, load, 0x5500800386, lo
  ad, 0x5500800387, load, 0x5500800388, load, 0x5500800389, load, 0x550080038a, load, 0x550080038b, load, 0x550080038c, load, 0x550080038d, load, 0x550080038e, load, 0x550080038f, load, 0x5500800390, load, 0x5500800391, load, 0x5500800392, load, 0x5500800393, load, 0x5500800394, load, 0x5500800395, load, 0x5500800396, load, 0x5500800397, load, 0x5500800398, load, 0x5500800399, load, 0x550080039a, load, 0x550080039b, load, 0x550080039c, load, 0x550080039d, load, 0x550080039e, load, 0x550080039f, load, 0x55008003a0, load, 0x55008003a1, load, 0x55008003a2, load, 0x55008003a3, load, 0x55008003a4, load, 0x55008003a5, load, 0x55008003a6, load, 0x55008003a7, load, 0x55008003a8, load, 0x55008003a9, load, 0x55008003aa, load, 0x55008003ab, load, 0x55008003ac, load, 0x55008003ad, load, 0x55008003ae, load, 0x55008003af

(4007c0 is the ld1b in the sha512-sve)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Cc: Robert Henry <robhenry@microsoft.com>
Cc: Aaron Lindsay <aaron@os.amperecomputing.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230630180423.558337-20-alex.bennee@linaro.org>
2023-07-03 12:51:58 +01:00
Alex Bennée
465af4db96 target/arm: make arm_casq_ptw CONFIG_TCG only
The ptw code is accessed by non-TCG code (specifically arm_pamax and
arm_cpu_get_phys_page_attrs_debug) but most of it is really only for
TCG emulation. Seeing as we already assert for a non-TARGET_AARCH64
build, let's extend the test rather than further messing with the ifdef
ladder.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230630180423.558337-19-alex.bennee@linaro.org>
2023-07-03 12:51:58 +01:00
Richard Henderson
34d03ad963 target/arm: Use float64_to_int32_modulo for FJCVTZS
The standard floating point results are provided by the generic routine.
We only need handle the extra Z flag result afterward.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230527141910.1885950-5-richard.henderson@linaro.org>
2023-07-01 08:26:54 +02:00
Isaku Yamahata
14a868c626 exec/memory: Add symbol for the min value of memory listener priority
Add MEMORY_LISTNER_PRIORITY_MIN for the symbolic value for the min value of
the memory listener instead of the hard-coded magic value 0.  Add explicit
initialization.

No functional change intended.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <29f88477fe82eb774bcfcae7f65ea21995f865f2.1687279702.git.isaku.yamahata@intel.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2023-06-28 14:27:59 +02:00
Philippe Mathieu-Daudé
cf43b5b69c target/arm: Restrict KVM-specific fields from ArchCPU
These fields shouldn't be accessed when KVM is not available.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230405160454.97436-8-philmd@linaro.org>
2023-06-28 14:27:59 +02:00
Philippe Mathieu-Daudé
0c40daf038 hw/intc/arm_gic: Un-inline GIC*/ITS class_name() helpers
"kvm_arm.h" contains external and internal prototype declarations.
Files under the hw/ directory should only access the KVM external
API.

In order to avoid machine / device models having to include "kvm_arm.h"
simply to get the QOM GIC/ITS class name, un-inline each class
name getter into the proper device model file.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230405160454.97436-4-philmd@linaro.org>
2023-06-28 14:27:59 +02:00
Philippe Mathieu-Daudé
3b295bcb32 accel: Rename HVF 'struct hvf_vcpu_state' -> AccelCPUState
We want all accelerators to share the same opaque pointer in
CPUState.

Rename the 'hvf_vcpu_state' structure as 'AccelCPUState'.

Use the generic 'accel' field of CPUState instead of 'hvf'.

Replace g_malloc0() by g_new0() for readability.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20230624174121.11508-17-philmd@linaro.org>
2023-06-28 14:14:22 +02:00
Anton Johansson
bb5de52524 target: Widen pc/cs_base in cpu_get_tb_cpu_state
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230621135633.1649-4-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-26 17:32:59 +02:00
Richard Henderson
7c347c7333 target/arm: Fix sve predicate store, 8 <= VQ <= 15
Brown bag time: using a store instead of a load results in an
uninitialized temp.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1704
Reported-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620134659.817559-1-richard.henderson@linaro.org
Fixes: e6dd5e782b ("target/arm: Use tcg_gen_qemu_{ld, st}_i128 in gen_sve_{ld, st}r")
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23 11:37:29 +01:00
Richard Henderson
4315f7c614 target/arm: Restructure has_vfp_d32 test
One cannot test for feature aa32_simd_r32 without first
testing if AArch32 mode is supported at all.  This leads to

qemu-system-aarch64: ARM CPUs must have both VFP-D32 and Neon or neither

for Apple M1 CPUs.

We already have a check for ARMv8-A never setting vfp-d32 true,
so restructure the code so that AArch64 avoids the test entirely.

Reported-by: Mads Ynddal <mads@ynddal.dk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Mads Ynddal <m.ynddal@samsung.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Mads Ynddal <m.ynddal@samsung.com>
Message-id: 20230619140216.402530-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23 11:26:26 +01:00
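
A minimal sketch of the restructured check (the exact field and code path
are assumptions; only the error message is taken from above):

  if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
      /* AArch64 CPUs always have the full set of 32 D registers, and
       * may not report usable AArch32 ID registers at all (Apple M1),
       * so skip the AArch32-only consistency test entirely. */
  } else if (cpu_isar_feature(aa32_simd_r32, cpu) != cpu->has_vfp_d32) {
      error_setg(errp, "ARM CPUs must have both VFP-D32 and Neon or neither");
  }
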
Richard Henderson
a834d5474e target/arm: Add cpu properties for enabling FEAT_RME
Add an x-rme cpu property to enable FEAT_RME.
Add an x-l0gptsz property to set GPCCR_EL3.L0GPTSZ,
for testing various possible configurations.

We're not currently completely sure whether FEAT_RME will
be OK to enable purely as a CPU-level property, or if it will
need board co-operation, so we're making these experimental
x- properties, so that the people developing the system
level software for RME can try to start using this and let
us know how it goes. The command line syntax for enabling
this will change in future, without backwards-compatibility.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-21-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23 11:15:48 +01:00
Richard Henderson
46f38c975f target/arm: Implement the granule protection check
Place the check at the end of get_phys_addr_with_struct,
so that we check all physical results.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23 11:15:48 +01:00
Richard Henderson
11b76fda0a target/arm: Implement GPC exceptions
Handle GPC Fault types in arm_deliver_fault, reporting as
either a GPC exception at EL3, or falling through to insn
or data aborts at various exception levels.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23 11:15:48 +01:00
Richard Henderson
f65a9bc719 target/arm: Add GPC syndrome
The function takes the fields as filled in by
the Arm ARM pseudocode for TakeGPCException.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23 11:15:47 +01:00
Richard Henderson
a5c7765202 target/arm: Use get_phys_addr_with_struct for stage2
This fixes a bug in which we failed to initialize
the result attributes properly after the memset.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23 11:15:47 +01:00
Richard Henderson
7c19b2d6d9 target/arm: Move s1_is_el0 into S1Translate
Instead of passing this to get_phys_addr_lpae, stash it
in the S1Translate structure.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23 11:15:47 +01:00
Richard Henderson
fe4a5472cc target/arm: Use get_phys_addr_with_struct in S1_ptw_translate
Do not provide a fast-path for physical addresses,
as those will need to be validated for GPC.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23 11:15:47 +01:00
Richard Henderson
4a7d7702cd target/arm: Handle no-execute for Realm and Root regimes
While Root and Realm may read and write data from other spaces,
neither may execute from other pa spaces.

This happens for Stage1 EL3, EL2, EL2&0, and Stage2 EL1&0.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23 11:15:46 +01:00
Richard Henderson
2f1ff4e7b9 target/arm: Handle Block and Page bits for security space
With Realm security state, bit 55 of a block or page descriptor during
the stage2 walk becomes the NS bit; during the stage1 walk the NS bit
(bit 5) is RES0.  With Root security state, bit 11 of the block or page
descriptor during the stage1 walk becomes the NSE bit.

Rather than collecting an NS bit and applying it later, compute the
output pa space from the input pa space and unconditionally assign.
This means that we no longer need to adjust the output space earlier
for the NSTable bit.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23 11:15:46 +01:00
Richard Henderson
26d1994594 target/arm: NSTable is RES0 for the RME EL3 regime
Test in_space instead of in_secure so that we don't
switch out of Root space.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23 11:15:46 +01:00
Richard Henderson
90c6629393 target/arm: Pipe ARMSecuritySpace through ptw.c
Add input and output space members to S1Translate.  Set and adjust
them in S1_ptw_translate, and the various points at which we drop
secure state.  Initialize the space in get_phys_addr; for now leave
get_phys_addr_with_secure considering only secure vs non-secure spaces.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23 11:15:45 +01:00
Richard Henderson
86a438b462 target/arm: Remove __attribute__((nonnull)) from ptw.c
This was added in 7e98e21c09 as part of a reorg in which
one of the arguments had been legally NULL, and this caught
actual instances.  Now that the reorg is complete, this
serves little purpose.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23 11:15:45 +01:00
Richard Henderson
bb5cc2c860 target/arm: Introduce ARMMMUIdx_Phys_{Realm,Root}
With FEAT_RME, there are four physical address spaces.
For now, just define the symbols, and mention them in
the same spots as the other Phys indexes in ptw.c.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23 11:15:45 +01:00
Richard Henderson
d38fa9670d target/arm: Adjust the order of Phys and Stage2 ARMMMUIdx
It will be helpful to have ARMMMUIdx_Phys_* to be in the same
relative order as ARMSecuritySpace enumerators. This requires
the adjustment to the nstable check. While there, check for being
in secure state rather than rely on clearing the low bit making
no change to non-secure state.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23 11:15:45 +01:00
Richard Henderson
5d28ac0cf7 target/arm: Introduce ARMSecuritySpace
Introduce both the enumeration and functions to retrieve
the current state, and state outside of EL3.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23 11:15:44 +01:00
Richard Henderson
ef1febe758 target/arm: Add RME cpregs
This includes GPCCR, GPTBR, MFAR, the TLB flush insns PAALL, PAALLOS,
RPALOS, RPAOS, and the cache flush insns CIPAPA and CIGDPAPA.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23 11:15:44 +01:00
Richard Henderson
87bfbfe7e5 target/arm: SCR_EL3.NS may be RES1
With RME, SEL2 must also be present to support secure state.
The NS bit is RES1 if SEL2 is not present.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23 11:15:43 +01:00
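
A sketch of how that rule can be expressed when SCR_EL3 is written (the
shape and variable names are assumed, not the exact scr_write() code):

  if (cpu_isar_feature(aa64_rme, cpu) && !cpu_isar_feature(aa64_sel2, cpu)) {
      /* Without Secure EL2 there is no Secure state, so NS is RES1. */
      value |= SCR_NS;
  }
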
Richard Henderson
aa3cc42c01 target/arm: Update SCR and HCR for RME
Define the missing SCR and HCR bits, allow SCR_NSE and {SCR,HCR}_GPF
to be set, and invalidate TLBs when NSE changes.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23 11:15:43 +01:00
Richard Henderson
b9f335c247 target/arm: Add isar_feature_aa64_rme
Add the missing field for ID_AA64PFR0, and the predicate.
Disable it if EL3 is forced off by the board or command-line.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-23 11:15:43 +01:00
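
The predicate plausibly follows the usual target/arm pattern, roughly as
below (ID_AA64PFR0_EL1.RME sits at bits [55:52]; treat this as a sketch
rather than the exact patch):

  FIELD(ID_AA64PFR0, RME, 52, 4)

  static inline bool isar_feature_aa64_rme(const ARMISARegisters *id)
  {
      return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, RME) != 0;
  }
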
Philippe Mathieu-Daudé
de6cd7599b meson: Replace softmmu_ss -> system_ss
We use the user_ss[] array to hold the user emulation sources,
and the softmmu_ss[] array to hold the system emulation ones.
Hold the latter in the 'system_ss[]' array for parity with user
emulation.

Mechanical change doing:

  $ sed -i -e s/softmmu_ss/system_ss/g $(git grep -l softmmu_ss)

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230613133347.82210-10-philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-20 10:01:30 +02:00
Peter Maydell
946ccfd590 target/arm: Convert load/store tags insns to decodetree
Convert the instructions in the load/store memory tags instruction
group to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-21-peter.maydell@linaro.org
2023-06-19 11:23:38 +01:00
Peter Maydell
3d50721326 target/arm: Convert load/store single structure to decodetree
Convert the ASIMD load/store single structure insns to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230602155223.2040685-20-peter.maydell@linaro.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-19 11:23:15 +01:00
Peter Maydell
e25ba1fa0b target/arm: Convert load/store (multiple structures) to decodetree
Convert the instructions in the ASIMD load/store multiple structures
instruction classes to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-19-peter.maydell@linaro.org
2023-06-19 11:23:15 +01:00
Peter Maydell
2521b6073b target/arm: Convert LDAPR/STLR (imm) to decodetree
Convert the instructions in the LDAPR/STLR (unscaled immediate)
group to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-18-peter.maydell@linaro.org
2023-06-19 11:23:15 +01:00
Peter Maydell
be23a049ec target/arm: Convert load (pointer auth) insns to decodetree
Convert the instructions in the load/store register (pointer
authentication) group to decodetree: LDRAA, LDRAB.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-17-peter.maydell@linaro.org
2023-06-19 11:23:15 +01:00
Peter Maydell
54a9ab74ed target/arm: Convert atomic memory ops to decodetree
Convert the insns in the atomic memory operations group to
decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-16-peter.maydell@linaro.org
2023-06-19 11:23:15 +01:00
Peter Maydell
f36bf0c14a target/arm: Convert LDR/STR reg+reg to decodetree
Convert the LDR and STR instructions which take a register
plus register offset to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-15-peter.maydell@linaro.org
2023-06-19 11:23:14 +01:00
Peter Maydell
61edd8f878 target/arm: Convert LDR/STR with 12-bit immediate to decodetree
Convert the LDR and STR instructions which use a 12-bit immediate
offset to decodetree. We can reuse the existing LDR and STR
trans functions for these.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-14-peter.maydell@linaro.org
2023-06-19 11:23:14 +01:00
Peter Maydell
60cd7ba9c5 target/arm: Convert ld/st reg+imm9 insns to decodetree
Convert the load and store instructions which use a 9-bit
immediate offset to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-13-peter.maydell@linaro.org
2023-06-19 11:23:14 +01:00
Peter Maydell
8c212eb659 target/arm: Convert load/store-pair to decodetree
Convert the load/store register pair insns (LDP, STP,
LDNP, STNP, LDPSW, STGP) to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230602155223.2040685-12-peter.maydell@linaro.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-19 11:22:18 +01:00
Peter Maydell
a752c2f459 target/arm: Convert load reg (literal) group to decodetree
Convert the "Load register (literal)" instruction class to
decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-11-peter.maydell@linaro.org
2023-06-19 11:22:18 +01:00
Peter Maydell
e8a149a359 target/arm: Convert LDXP, STXP, CASP, CAS to decodetree
Convert the load/store exclusive pair (LDXP, STXP, LDAXP, STLXP),
compare-and-swap pair (CASP, CASPA, CASPAL, CASPL), and
compare-and-swap (CAS, CASA, CASAL, CASL) instructions to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-10-peter.maydell@linaro.org
2023-06-19 11:22:18 +01:00
Peter Maydell
84693e67fa target/arm: Convert load/store exclusive and ordered to decodetree
Convert the instructions in the load/store exclusive (STXR,
STLXR, LDXR, LDAXR) and load/store ordered (STLR, STLLR,
LDAR, LDLAR) to decodetree.

Note that for STLR, STLLR, LDAR, LDLAR this fixes an under-decoding
in the legacy decoder where we were not checking that the RES1 bits
in the Rs and Rt2 fields were set.

The new function ldst_iss_sf() is equivalent to the existing
disas_ldst_compute_iss_sf(), but it takes the pre-decoded 'ext' field
rather than taking an undecoded two-bit opc field and extracting
'ext' from it. Once all the loads and stores have been converted
to decodetree disas_ldst_compute_iss_sf() will be unused and
can be deleted.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-9-peter.maydell@linaro.org
2023-06-19 11:22:18 +01:00
Peter Maydell
a97d3c18f6 target/arm: Convert exception generation instructions to decodetree
Convert the exception generation instructions SVC, HVC, SMC, BRK and
HLT to decodetree.

The old decoder decoded the halting-debug insns DCPS1, DCPS2 and
DCPS3 just in order to then make them UNDEF; as with DRPS, we don't
bother to decode them, but document the patterns in a64.decode.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-8-peter.maydell@linaro.org
2023-06-19 11:22:18 +01:00
Peter Maydell
6e3c8049ad target/arm: Convert MSR (reg), MRS, SYS, SYSL to decodetree
Convert MSR (reg), MRS, SYS, SYSL to decodetree.  For QEMU these are
all essentially the same instruction (system register access).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-7-peter.maydell@linaro.org
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2023-06-19 11:21:51 +01:00
Peter Maydell
45d063d163 target/arm: Convert MSR (immediate) to decodetree
Convert the MSR (immediate) insn to decodetree. Our implementation
has basically no commonality between the different destinations,
so we decode the destination register in a64.decode.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-6-peter.maydell@linaro.org
2023-06-19 11:21:50 +01:00
Peter Maydell
d78b662f28 target/arm: Convert CFINV, XAFLAG and AXFLAG to decodetree
Convert the CFINV, XAFLAG and AXFLAG insns to decodetree.
The old decoder handles these in handle_msr_i(), but
the architecture defines them as separate instructions
from MSR (immediate).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-5-peter.maydell@linaro.org
2023-06-19 11:21:49 +01:00
Peter Maydell
afcd5df54c target/arm: Convert barrier insns to decodetree
Convert the insns in the "Barriers" instruction class to
decodetree: CLREX, DSB, DMB, ISB and SB.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-4-peter.maydell@linaro.org
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2023-06-19 11:21:35 +01:00
Peter Maydell
7fefc70661 target/arm: Convert hint instruction space to decodetree
Convert the various instructions in the hint instruction space
to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-3-peter.maydell@linaro.org
2023-06-19 11:21:34 +01:00
Peter Maydell
68496d4172 target/arm: Consistently use finalize_memop_asimd() for ASIMD loads/stores
In the recent refactoring we missed a few places which should be
calling finalize_memop_asimd() for ASIMD loads and stores but
instead are just calling finalize_memop(); fix these.

For the disas_ldst_single_struct() and disas_ldst_multiple_struct()
cases, this is not a behaviour change because there the size
is never MO_128 and the two finalize functions do the same thing.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-19 11:21:17 +01:00
Peter Maydell
99bb43c0ff target/arm: Pass memop to gen_mte_check1_mmuidx() in reg_imm9 decode
In disas_ldst_reg_imm9() we missed one place where a call to
a gen_mte_check* function should now be passed the memop we
have created rather than just being passed the size. Fix this.

Fixes: 0a9091424d ("target/arm: Pass memop to gen_mte_check1*")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2023-06-19 11:20:19 +01:00
Peter Maydell
7e2788471f target/arm: Return correct result for LDG when ATA=0
The LDG instruction loads the tag from a memory address (identified
by [Xn + offset]), and then merges that tag into the destination
register Xt. We implemented this correctly for the case when
allocation tags are enabled, but didn't get it right when ATA=0:
instead of merging the tag bits into Xt, we merged them into the
memory address [Xn + offset] and then set Xt to that.

Merge the tag bits into the old Xt value, as they should be.

Cc: qemu-stable@nongnu.org
Fixes: c15294c1e3 ("target/arm: Implement LDG, STG, ST2G instructions")
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-19 11:20:18 +01:00
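
In TCG terms the intended ATA=0 behaviour is roughly the following, with
tcg_rt standing for the register holding Xt (names assumed):

  /* The tag read back is 0 when allocation tags are disabled, so the
   * result is the old Xt with its tag field (bits [59:56]) cleared;
   * the computed address is not copied into Xt at all. */
  tcg_gen_andi_i64(tcg_rt, tcg_rt, ~MAKE_64BIT_MASK(56, 4));
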
Peter Maydell
243705aa6e target/arm: Fix return value from LDSMIN/LDSMAX 8/16 bit atomics
The atomic memory operations are supposed to return the old memory
data value in the destination register.  This value is not
sign-extended, even if the operation is the signed minimum or
maximum.  (In the pseudocode for the instructions the returned data
value is passed to ZeroExtend() to create the value in the register.)

We got this wrong because we were doing a 32-to-64 zero extend on the
result for 8 and 16 bit data values, rather than the correct amount
of zero extension.

Fix the bug by using ext8u and ext16u for the MO_8 and MO_16 data
sizes rather than ext32u.

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-2-peter.maydell@linaro.org
2023-06-19 11:20:18 +01:00
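
A sketch of the corrected zero-extension (variable names assumed):

  switch (memop & MO_SIZE) {
  case MO_8:
      tcg_gen_ext8u_i64(dst, val);   /* zero-extend from 8 bits */
      break;
  case MO_16:
      tcg_gen_ext16u_i64(dst, val);  /* zero-extend from 16 bits */
      break;
  case MO_32:
      tcg_gen_ext32u_i64(dst, val);  /* unchanged for 32-bit data */
      break;
  default:
      tcg_gen_mov_i64(dst, val);     /* 64-bit data needs no extension */
      break;
  }
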
Cédric Le Goater
42bea956f6 target/arm: Allow users to set the number of VFP registers
Cortex A7 CPUs with an FPU implementing VFPv4 without NEON support
have 16 64-bit FPU registers and not 32 registers. Let users set the
number of VFP registers with a CPU property.

The primary use case of this property is for the Cortex A7 of the
Aspeed AST2600 SoC.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
2023-06-15 18:35:58 +02:00
Richard Henderson
007cd176e5 target/arm: Only include tcg/oversized-guest.h if CONFIG_TCG
Fixes the build for --disable-tcg.

This header is only needed for cross-hosting.  Without CONFIG_TCG,
we know this is an AArch64 host, CONFIG_ATOMIC64 will be set, and
the TCG_OVERSIZED_GUEST block will never be compiled.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-07 08:35:13 -07:00
Richard Henderson
f5e6786de4 target-arm queue:
* Support gdbstub (guest debug) in HVF
  * xlnx-versal: Support CANFD controller
  * bpim2u: New board model: Banana Pi BPI-M2 Ultra
  * Emulate FEAT_LSE2
  * allow DC CVA[D]P in user mode emulation
  * trap DCC access in user mode emulation

Merge tag 'pull-target-arm-20230606' of https://git.linaro.org/people/pmaydell/qemu-arm into staging

target-arm queue:
 * Support gdbstub (guest debug) in HVF
 * xlnx-versal: Support CANFD controller
 * bpim2u: New board model: Banana Pi BPI-M2 Ultra
 * Emulate FEAT_LSE2
 * allow DC CVA[D]P in user mode emulation
 * trap DCC access in user mode emulation

# -----BEGIN PGP SIGNATURE-----
#
# iQJNBAABCAA3FiEE4aXFk81BneKOgxXPPCUl7RQ2DN4FAmR/AKUZHHBldGVyLm1h
# eWRlbGxAbGluYXJvLm9yZwAKCRA8JSXtFDYM3jzIEACNepQGY44yPhrEG+wD4WAB
# fH670KI33HcsFd2rGsC369gcssQbRIW/29reOzNhRMuol+kHI6OFaONpuKSdO0Rz
# TLVIsnT2Uq8KwbYfLtDQt5knj027amPy75d4re8wIK1eZB4dOIHysqAvQrJYeync
# 9obKku8xXGLwZh/mYHoVgHcZU0cPJO9nri39n1tV3JUBsgmqEURjzbZrMcF+yMX7
# bUzOYQvC1Iedmo+aWfx43u82AlNQFz1lsqmnQj7Z5rvv0HT+BRF5WzVMP0qRh5+Z
# njkqmBH9xb9kkgeHmeMvHpWox+J+obeSmVg/4gDNlJpThmpuU0Vr7EXUN3MBQlV9
# lhyy6zrTwC/BToiQqdT2dnpao9FzXy5exfnqi/py5IuqfjAzSO+p61LlPPZ4cJri
# pCK4yq2gzQXYfrlZkUJipvRMH8Xa4IdQx+w7lXrQoJdduF4/+6aJW/GAWSu0e7eC
# zgBwaJjI7ENce8ixJnuEFUxUnaBo8dl72a0PGA1UU8PL+cJNOIpyhPk4goWQprdn
# iFF4ZnjhBRZ2gk/4HGD9u5Vo2lNqP93YS5QhkGkF+HJsBmcOZgidIUpfHhPQvvHO
# Np196T2cAETCWGV1xG4CaTpxN2ndRReq3C0/mzfhIbwhXEACtvAiSlO4KB8t6pJj
# MzinCABXHcovJbGbxZ9j6w==
# =8SdN
# -----END PGP SIGNATURE-----
# gpg: Signature made Tue 06 Jun 2023 02:47:17 AM PDT
# gpg:                using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg:                issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [full]
# gpg:                 aka "Peter Maydell <pmaydell@gmail.com>" [full]
# gpg:                 aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [full]
# gpg:                 aka "Peter Maydell <peter@archaic.org.uk>" [unknown]

* tag 'pull-target-arm-20230606' of https://git.linaro.org/people/pmaydell/qemu-arm: (42 commits)
  target/arm: trap DCC access in user mode emulation
  tests/tcg/aarch64: add DC CVA[D]P tests
  target/arm: allow DC CVA[D]P in user mode emulation
  target/arm: Enable FEAT_LSE2 for -cpu max
  tests/tcg/multiarch: Adjust sigbus.c
  tests/tcg/aarch64: Use stz2g in mte-7.c
  target/arm: Move mte check for store-exclusive
  target/arm: Relax ordered/atomic alignment checks for LSE2
  target/arm: Add SCTLR.nAA to TBFLAG_A64
  target/arm: Check alignment in helper_mte_check
  target/arm: Pass single_memop to gen_mte_checkN
  target/arm: Pass memop to gen_mte_check1*
  target/arm: Hoist finalize_memop out of do_fp_{ld, st}
  target/arm: Hoist finalize_memop out of do_gpr_{ld, st}
  target/arm: Load/store integer pair with one tcg operation
  target/arm: Sink gen_mte_check1 into load/store_exclusive
  target/arm: Use tcg_gen_qemu_{ld, st}_i128 in gen_sve_{ld, st}r
  target/arm: Use tcg_gen_qemu_st_i128 for STZG, STZ2G
  target/arm: Use tcg_gen_qemu_{st, ld}_i128 for do_fp_{st, ld}
  target/arm: Use tcg_gen_qemu_ld_i128 for LDXP
  ...

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-06 12:11:34 -07:00
Zhuojia Shen
f9ac778898 target/arm: trap DCC access in user mode emulation
Access to the EL0-accessible Debug Communication Channel (DCC) registers
is currently enabled in user mode emulation.  However, this does not match
Linux behavior as Linux sets MDSCR_EL1.TDCC on startup to disable EL0
access to DCC (see __cpu_setup() in arch/arm64/mm/proc.S).

This patch fixes access_tdcc() to check MDSCR_EL1.TDCC for EL0 and sets
MDSCR_EL1.TDCC for user mode emulation to match Linux.

Signed-off-by: Zhuojia Shen <chaosdefinition@hotmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: DS7PR12MB630905198DD8E69F6817544CAC4EA@DS7PR12MB6309.namprd12.prod.outlook.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-06 10:19:40 +01:00
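
Roughly, the EL0 part of the access check looks like this (the function
name and exact trap routing are illustrative, not the full access_tdcc()):

  static CPAccessResult access_dcc_el0(CPUARMState *env,
                                       const ARMCPRegInfo *ri, bool isread)
  {
      /* MDSCR_EL1.TDCC is bit 12; when set, EL0 DCC accesses trap. */
      if (arm_current_el(env) == 0 && extract64(env->cp15.mdscr_el1, 12, 1)) {
          return CP_ACCESS_TRAP;
      }
      return CP_ACCESS_OK;
  }
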
Zhuojia Shen
cd4a47f907 target/arm: allow DC CVA[D]P in user mode emulation
DC CVAP and DC CVADP instructions can be executed in EL0 on Linux,
either directly when SCTLR_EL1.UCI == 1 or emulated by the kernel (see
user_cache_maint_handler() in arch/arm64/kernel/traps.c).

This patch enables execution of the two instructions in user mode
emulation.

Signed-off-by: Zhuojia Shen <chaosdefinition@hotmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-06 10:19:40 +01:00
Richard Henderson
59b6b42cd3 target/arm: Enable FEAT_LSE2 for -cpu max
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-06 10:19:39 +01:00
Richard Henderson
5096ec5b32 target/arm: Move mte check for store-exclusive
Push the mte check behind the exclusive_addr check.
Document the several ways that we are still out of spec
with this implementation.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-06 10:19:39 +01:00
Richard Henderson
c1a1f80518 target/arm: Relax ordered/atomic alignment checks for LSE2
FEAT_LSE2 only requires that atomic operations not cross a
16-byte boundary.  Ordered operations may be completely
unaligned if SCTLR.nAA is set.

Because this alignment check is so special, do it by hand.
Make sure not to keep TCG temps live across the branch.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-17-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-06 10:19:38 +01:00
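
The core of the relaxed rule reduces to a 16-byte-granule test, something
like the following self-contained sketch (not the generated TCG code):

  /* FEAT_LSE2: an atomic access is acceptable if it does not cross a
   * 16-byte boundary, even when it is not naturally aligned. */
  static bool lse2_within_16_byte_granule(uint64_t addr, unsigned size)
  {
      return (addr & 15) + size <= 16;
  }
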
Richard Henderson
83f624d9ba target/arm: Add SCTLR.nAA to TBFLAG_A64
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-06 10:19:38 +01:00
Richard Henderson
523da6b963 target/arm: Check alignment in helper_mte_check
Fixes a bug: with SCTLR.A set, we should raise any
alignment fault before raising any MTE check fault.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-06 10:19:38 +01:00
Richard Henderson
3b97520c86 target/arm: Pass single_memop to gen_mte_checkN
Pass the individual memop to gen_mte_checkN.
For the moment, do nothing with it.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-06 10:19:37 +01:00
Richard Henderson
0a9091424d target/arm: Pass memop to gen_mte_check1*
Pass the completed memop to gen_mte_check1_mmuidx.
For the moment, do nothing more than extract the size.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-06 10:19:37 +01:00
Richard Henderson
03176bcd03 target/arm: Hoist finalize_memop out of do_fp_{ld, st}
We are going to need the complete memop beforehand,
so let's not compute it twice.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-06 10:19:37 +01:00
Richard Henderson
a75b66f617 target/arm: Hoist finalize_memop out of do_gpr_{ld, st}
We are going to need the complete memop beforehand,
so let's not compute it twice.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-06 10:19:37 +01:00
Richard Henderson
6f47e7c189 target/arm: Load/store integer pair with one tcg operation
This is required for LSE2, where the pair must be treated atomically if
it does not cross a 16-byte boundary.  But it simplifies the code to do
this always.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-06-06 10:19:36 +01:00
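
The shape of the change is roughly this (temporaries and surrounding
names are assumed, not the exact translate-a64.c code):

  /* Load both registers of the pair with a single 128-bit access,
   * whose atomicity requirements are carried in the MemOp, then
   * split the result into the two destination registers. */
  TCGv_i128 t128 = tcg_temp_new_i128();
  tcg_gen_qemu_ld_i128(t128, clean_addr, get_mem_index(s), mop);
  tcg_gen_extr_i128_i64(tcg_rt, tcg_rt2, t128);
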