Commit Graph

872 Commits

Jinjie Ruan
e4eb290571 target/arm: Handle NMI in arm_cpu_do_interrupt_aarch64()
According to Arm GIC section 4.6.3 Interrupt superpriority, an interrupt
with superpriority is always IRQ, never FIQ, so the NMI exception trap entry
behaves like IRQ. A VINMI (vIRQ with superpriority) can be raised from the
GIC or come from the HCRX_EL2.VINMI bit; a VFNMI (vFIQ with superpriority)
comes from the HCRX_EL2.VFNMI bit.
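
A minimal sketch of the routing, assuming the EXCP_NMI/EXCP_VINMI/EXCP_VFNMI
exception names this series introduces (vector offsets as in the existing
arm_cpu_do_interrupt_aarch64() switch):

    /* sketch: superpriority interrupts share the IRQ/FIQ vector entries */
    case EXCP_IRQ:
    case EXCP_VIRQ:
    case EXCP_NMI:
    case EXCP_VINMI:
        addr += 0x80;           /* IRQ vector */
        break;
    case EXCP_FIQ:
    case EXCP_VFIQ:
    case EXCP_VFNMI:
        addr += 0x100;          /* FIQ vector */
        break;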

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240407081733.3231820-13-ruanjinjie@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-04-25 10:21:05 +01:00
Jinjie Ruan
167f2631df target/arm: Handle PSTATE.ALLINT on taking an exception
Set or clear PSTATE.ALLINT on taking an exception to ELx according to the
SCTLR_ELx.SPINTMASK bit.
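
A minimal sketch of the rule, assuming QEMU-style names (SCTLR_SPINTMASK,
PSTATE_ALLINT and the aa64_nmi feature test are assumptions here, not the
exact patch):

    /* sketch: on taking an exception to new_el when FEAT_NMI is present */
    if (cpu_isar_feature(aa64_nmi, cpu)) {
        if (env->cp15.sctlr_el[new_el] & SCTLR_SPINTMASK) {
            new_mode &= ~PSTATE_ALLINT;     /* SPINTMASK == 1: clear ALLINT */
        } else {
            new_mode |= PSTATE_ALLINT;      /* SPINTMASK == 0: set ALLINT */
        }
    }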

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240407081733.3231820-10-ruanjinjie@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-04-25 10:21:05 +01:00
Jinjie Ruan
2e0be5f6b1 target/arm: Handle IS/FS in ISR_EL1 for NMI, VINMI and VFNMI
Add the IS and FS bits in ISR_EL1 and handle reads of them. With
CPU_INTERRUPT_NMI or CPU_INTERRUPT_VINMI, both CPSR_I and ISR_IS must be
set. With CPU_INTERRUPT_VFNMI, both CPSR_F and ISR_FS must be set.
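
A sketch of the read handling described (the ISR_IS/ISR_FS mask names are
assumptions; CPSR_I/CPSR_F are the existing QEMU masks):

    /* sketch: inside the ISR_EL1 read handler */
    if (cs->interrupt_request & (CPU_INTERRUPT_NMI | CPU_INTERRUPT_VINMI)) {
        ret |= CPSR_I | ISR_IS;
    }
    if (cs->interrupt_request & CPU_INTERRUPT_VFNMI) {
        ret |= CPSR_F | ISR_FS;
    }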

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240407081733.3231820-9-ruanjinjie@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-04-25 10:21:05 +01:00
Jinjie Ruan
963e4e3648 target/arm: Add support for NMI in arm_phys_excp_target_el()
According to Arm GIC section 4.6.3 Interrupt superpriority, an interrupt
with superpriority is always IRQ, never FIQ, so handle NMI the same as IRQ
in arm_phys_excp_target_el().

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240407081733.3231820-8-ruanjinjie@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-04-25 10:21:05 +01:00
Jinjie Ruan
b36a32ead1 target/arm: Add support for Non-maskable Interrupt
This only implements the external delivery method via the GICv3.

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240407081733.3231820-7-ruanjinjie@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-04-25 10:21:04 +01:00
Jinjie Ruan
5c21697461 target/arm: Support MSR access to ALLINT
Support MSR accesses to ALLINT as follows:
	mrs <xt>, ALLINT	// read ALLINT
	msr ALLINT, <xt>	// write ALLINT from a register

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240407081733.3231820-6-ruanjinjie@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-04-25 10:21:04 +01:00
Jinjie Ruan
2b0d2ab895 target/arm: Handle HCR_EL2 accesses for bits introduced with FEAT_NMI
FEAT_NMI defines three new bits in HCRX_EL2: TALLINT, VINMI and VFNMI.
When the feature is enabled, allow these bits to be written in
HCRX_EL2.
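
A sketch of the usual "writable bits" pattern in hcrx_write(); the HCRX_*
C macro names are assumptions by analogy with the existing HCRX bit defines:

    /* sketch: extend the mask of bits a guest may set in HCRX_EL2 */
    uint64_t valid_mask = 0;
    /* ... bits allowed by other features ... */
    if (cpu_isar_feature(aa64_nmi, cpu)) {
        valid_mask |= HCRX_TALLINT | HCRX_VINMI | HCRX_VFNMI;
    }
    env->cp15.hcrx_el2 = value & valid_mask;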

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240407081733.3231820-2-ruanjinjie@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-04-25 10:21:04 +01:00
Peter Maydell
19b254e86a target/arm: Use correct SecuritySpace for AArch64 AT ops at EL3
When we do an AT address translation operation, the page table walk
is supposed to be performed in the context of the EL we're doing the
walk for, so for instance an AT S1E2R walk is done for EL2.  In the
pseudocode an EL is passed to AArch64.AT(), which calls
SecurityStateAtEL() to find the security state that we should be
doing the walk with.

In ats_write64() we get this wrong: we always use the current security
space.  This is fine for AT operations performed from
EL1 and EL2, because there the current security state and the
security state for the lower EL are the same.  But for AT operations
performed from EL3, the current security state is always either
Secure or Root, whereas we want to use the security state defined by
SCR_EL3.{NS,NSE} for the walk. This affects not just guests using
FEAT_RME but also ones where EL3 is Secure state and the EL3 code
is trying to do an AT for a NonSecure EL2 or EL1.

Use arm_security_space_below_el3() to get the SecuritySpace to
pass to do_ats_write() for all AT operations except the
AT S1E3* operations.
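
A sketch of the selection (simplified; the for_s1e3 flag is hypothetical,
while arm_security_space() and arm_security_space_below_el3() are existing
helpers):

    /* sketch: in ats_write64(), choose the SecuritySpace for the walk */
    ARMSecuritySpace ss = for_s1e3                    /* AT S1E3R / S1E3W */
                        ? arm_security_space(env)
                        : arm_security_space_below_el3(env);
    /* ... ss is then passed down to do_ats_write() ... */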

Cc: qemu-stable@nongnu.org
Fixes: e1ee56ec23 ("target/arm: Pass security space rather than flag for AT instructions")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2250
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240405180232.3570066-1-peter.maydell@linaro.org
2024-04-08 15:38:53 +01:00
Pierre-Clément Tosi
9ed866e10f target/arm: Fix CNTPOFF_EL2 trap to missing EL3
EL2 accesses to CNTPOFF_EL2 should only ever trap to EL3 if EL3 is
present, as described by the reference manual (for MRS):

  /* ... */
  elsif PSTATE.EL == EL2 then
      if Halted() && HaveEL(EL3) && /*...*/ then
          UNDEFINED;
      elsif HaveEL(EL3) && SCR_EL3.ECVEn == '0' then
          /* ... */
      else
          X[t, 64] = CNTPOFF_EL2;

However, the existing implementation of gt_cntpoff_access() always
returns CP_ACCESS_TRAP_EL3 for EL2 accesses with SCR_EL3.ECVEn unset. In
pseudo-code terminology, this corresponds to assuming that HaveEL(EL3)
is always true, which is wrong. As a result, QEMU panics in
access_check_cp_reg() when started without EL3 and running EL2 code
accessing the register (e.g. any recent KVM booting a guest).

Therefore, add the HaveEL(EL3) check to gt_cntpoff_access().
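
A sketch of the corrected check (SCR_ECVEN as the mask for SCR_EL3.ECVEn is
an assumption; the other helpers are existing QEMU ones):

    static CPAccessResult gt_cntpoff_access(CPUARMState *env,
                                            const ARMCPRegInfo *ri,
                                            bool isread)
    {
        if (arm_current_el(env) == 2
            && arm_feature(env, ARM_FEATURE_EL3)          /* HaveEL(EL3) */
            && !(env->cp15.scr_el3 & SCR_ECVEN)) {
            return CP_ACCESS_TRAP_EL3;
        }
        return CP_ACCESS_OK;
    }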

Fixes: 2808d3b38a ("target/arm: Implement FEAT_ECV CNTPOFF_EL2 handling")
Signed-off-by: Pierre-Clément Tosi <ptosi@google.com>
Message-id: m3al6amhdkmsiy2f62w72ufth6dzn45xg5cz6xljceyibphnf4@ezmmpwk4tnhl
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-04-05 15:21:56 +01:00
Peter Maydell
2808d3b38a target/arm: Implement FEAT_ECV CNTPOFF_EL2 handling
When ID_AA64MMFR0_EL1.ECV is 0b0010, a new register CNTPOFF_EL2 is
implemented.  This is similar to the existing CNTVOFF_EL2, except
that it controls a hypervisor-adjustable offset made to the physical
counter and timer.

Implement the handling for this register, which includes control/trap
bits in SCR_EL3 and CNTHCTL_EL2.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240301183219.2424889-8-peter.maydell@linaro.org
2024-03-07 12:19:03 +00:00
Peter Maydell
485eb324e3 target/arm: Define CNTPCTSS_EL0 and CNTVCTSS_EL0
For FEAT_ECV, new registers CNTPCTSS_EL0 and CNTVCTSS_EL0 are
defined, which are "self-synchronized" views of the physical and
virtual counts as seen in the CNTPCT_EL0 and CNTVCT_EL0 registers
(meaning that no barriers are needed around accesses to them to
ensure that reads of them do not occur speculatively and out-of-order
with other instructions).

For QEMU, all our system registers are self-synchronized, so we can
simply copy the existing implementation of CNTPCT_EL0 and CNTVCT_EL0
to the new register encodings.

This means we now implement all the functionality required for
ID_AA64MMFR0_EL1.ECV == 0b0001.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240301183219.2424889-7-peter.maydell@linaro.org
2024-03-07 12:19:03 +00:00
Peter Maydell
dcdad2624b target/arm: Implement new FEAT_ECV trap bits
The functionality defined by ID_AA64MMFR0_EL1.ECV == 1 is:
 * four new trap bits for various counter and timer registers
 * the CNTHCTL_EL2.EVNTIS and CNTKCTL_EL1.EVNTIS bits which control
   scaling of the event stream. This is a no-op for us, because we don't
   implement the event stream (our WFE is a NOP): all we need to do is
   allow CNTHCTL_EL2.EVNTIS to be read and written.
 * extensions to PMSCR_EL1.PCT, PMSCR_EL2.PCT, TRFCR_EL1.TS and
   TRFCR_EL2.TS: these are all no-ops for us, because we don't implement
   FEAT_SPE or FEAT_TRF.
 * new registers CNTPCTSS_EL0 and CNTVCTSS_EL0 which are
   "self-synchronizing" views of CNTPCT_EL0 and CNTVCT_EL0, meaning
   that no barriers are needed around their accesses. For us these
   are just the same as the normal views, because all our sysregs are
   inherently self-synchronizing.

In this commit we implement the trap handling and permit the new
CNTHCTL_EL2 bits to be written.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240301183219.2424889-6-peter.maydell@linaro.org
2024-03-07 12:19:02 +00:00
Peter Maydell
a681d66e95 target/arm: Don't allow RES0 CNTHCTL_EL2 bits to be written
Don't allow the guest to write CNTHCTL_EL2 bits which don't exist.
This is not strictly architecturally required, but it is how we've
tended to implement registers more recently.

In particular, bits [19:18] are only present with FEAT_RME,
and bits [17:12] will only be present with FEAT_ECV.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240301183219.2424889-5-peter.maydell@linaro.org
2024-03-07 12:19:02 +00:00
Peter Maydell
c6b0ecb236 target/arm: use FIELD macro for CNTHCTL bit definitions
We prefer the FIELD macro over ad-hoc #defines for register bits;
switch CNTHCTL to that style before we add any more bits.
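
For reference, FIELD() from "hw/registerfields.h" generates _SHIFT/_LENGTH/
_MASK constants; two illustrative CNTHCTL bits in the HCR_EL2.E2H == 1
layout (the exact field list here is an assumption):

    FIELD(CNTHCTL, EL0PCTEN, 0, 1)
    FIELD(CNTHCTL, EL0VCTEN, 1, 1)

    /* the generated names are then used like: */
    if (FIELD_EX64(env->cp15.cnthctl_el2, CNTHCTL, EL0VCTEN)) {
        /* EL0 access to the virtual counter is enabled */
    }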

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240301183219.2424889-4-peter.maydell@linaro.org
2024-03-07 12:19:01 +00:00
Peter Maydell
1e8d14037b target/arm: Timer _EL02 registers UNDEF for E2H == 0
The timer _EL02 registers should UNDEF for invalid accesses from EL2
or EL3 when HCR_EL2.E2H == 0, not take a cp access trap.  We were
delivering the exception to EL2 with the wrong syndrome.
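
A sketch of the distinction (helper shape assumed; CP_ACCESS_TRAP_UNCATEGORIZED
is the "report as UNDEF" result, as opposed to a trap with a specific
syndrome):

    static CPAccessResult e2h_access(CPUARMState *env, const ARMCPRegInfo *ri,
                                     bool isread)
    {
        if (!(arm_hcr_el2_eff(env) & HCR_E2H)) {
            /* the _EL02 aliases don't exist: UNDEF, not a cp access trap */
            return CP_ACCESS_TRAP_UNCATEGORIZED;
        }
        return CP_ACCESS_OK;
    }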

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240301183219.2424889-3-peter.maydell@linaro.org
2024-03-07 12:19:01 +00:00
Peter Maydell
b2f24983db target/arm: Use new CBAR encoding for all v8 CPUs, not all aarch64 CPUs
We support two different encodings for the AArch32 IMPDEF
CBAR register -- older cores like the Cortex-A9, A7 and A15
have this at 4, c15, c0, 0; newer cores like the
Cortex-A35, A53, A57 and A72 have it at 1, c15, c0, 0.

When we implemented this we picked which encoding to
use based on whether the CPU set ARM_FEATURE_AARCH64.
However this isn't right for three cases:
 * the qemu-system-arm 'max' CPU, which is supposed to be
   a variant on a Cortex-A57; it ought to use the same
   encoding the A57 does and which the AArch64 'max'
   exposes to AArch32 guest code
 * the Cortex-R52, which is AArch32-only but has the CBAR
   at the newer encoding (and where we incorrectly are
   not yet setting ARM_FEATURE_CBAR_RO anyway)
 * any possible future support for other v8 AArch32
   only CPUs, or for supporting "boot the CPU into
   AArch32 mode" on our existing cores like the A57 etc

Make the decision of the encoding be based on whether
the CPU implements the ARM_FEATURE_V8 flag instead.

This changes the behaviour only for the qemu-system-arm
'-cpu max'. We don't expect anybody to be relying on the
old behaviour because:
 * it's not what the real hardware Cortex-A57 does
   (and that's what our ID register claims we are)
 * we don't implement the memory-mapped GICv3 support
   which is the only thing that exists at the peripheral
   base address pointed to by the register

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240206132931.38376-2-peter.maydell@linaro.org
2024-02-15 14:32:38 +00:00
Peter Maydell
ac1d88e9e7 target/arm: Don't get MDCR_EL2 in pmu_counter_enabled() before checking ARM_FEATURE_PMU
It doesn't make sense to read the value of MDCR_EL2 on a non-A-profile
CPU, and in fact if you try to do it we will assert:

#6  0x00007ffff4b95e96 in __GI___assert_fail
    (assertion=0x5555565a8c70 "!arm_feature(env, ARM_FEATURE_M)", file=0x5555565a6e5c "../../target/arm/helper.c", line=12600, function=0x5555565a9560 <__PRETTY_FUNCTION__.0> "arm_security_space_below_el3") at ./assert/assert.c:101
#7  0x0000555555ebf412 in arm_security_space_below_el3 (env=0x555557bc8190) at ../../target/arm/helper.c:12600
#8  0x0000555555ea6f89 in arm_is_el2_enabled (env=0x555557bc8190) at ../../target/arm/cpu.h:2595
#9  0x0000555555ea942f in arm_mdcr_el2_eff (env=0x555557bc8190) at ../../target/arm/internals.h:1512

We might call pmu_counter_enabled() on an M-profile CPU (for example
from the migration pre/post hooks in machine.c); this should always
return false because these CPUs don't set ARM_FEATURE_PMU.

Avoid the assertion by not calling arm_mdcr_el2_eff() before we
have done the early return for "PMU not present".

This fixes an assertion failure if you try to do a loadvm or
savevm for an M-profile board.
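
A sketch of the reordering (just the top of the function; the per-counter
checks that follow are unchanged):

    /* sketch: at the start of pmu_counter_enabled() */
    if (!arm_feature(env, ARM_FEATURE_PMU)) {
        return false;           /* bail out before touching MDCR_EL2 */
    }
    uint64_t mdcr_el2 = arm_mdcr_el2_eff(env);  /* now safe: A-profile only */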

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2155
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240208153346.970021-1-peter.maydell@linaro.org
2024-02-15 11:36:42 +00:00
Peter Maydell
10eab96e1a tests/tcg: Fix multiarch/gdbstub/prot-none.py
hw/core: Convert cpu_mmu_index to a CPUClass hook
 tcg/loongarch64: Set vector registers call clobbered
 target/sparc: floating-point cleanup
 linux-user/aarch64: Add padding before __kernel_rt_sigreturn

Merge tag 'pull-tcg-20240202-2' of https://gitlab.com/rth7680/qemu into staging

tests/tcg: Fix multiarch/gdbstub/prot-none.py
hw/core: Convert cpu_mmu_index to a CPUClass hook
tcg/loongarch64: Set vector registers call clobbered
target/sparc: floating-point cleanup
linux-user/aarch64: Add padding before __kernel_rt_sigreturn

# -----BEGIN PGP SIGNATURE-----
#
# iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAmW95WkdHHJpY2hhcmQu
# aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV/p+Qf/eVmh5q0pZqcur7ft
# 8FO0wlIz55OfhaA9MIpH7LEIHRKY37Ybebw2K6SPnx4FmPhLkaj4KXPPjT2nzdXw
# J2nQM+TOyxOd18GG8P80qFQ1a72dj8VSIRVAl9T46KuPXS5B7luArImfBlUk/GwV
# Qr/XkOPwVTp05E/ccMJ8PMlcVZw9osHVLqsaFVbsUv/FylTmstzA9c5Gw7/FTfkG
# T2rk+7go+F4IXs/9uQuuFMOpQOZngXE621hnro+qle7j9oarEUVJloAgVn06o59O
# fUjuoKO0aMCr2iQqNJTH7Dnqp5OIzzxUoXiNTOj0EimwWfAcUKthoFO2LGcy1/ew
# wWNR/Q==
# =e3J3
# -----END PGP SIGNATURE-----
# gpg: Signature made Sat 03 Feb 2024 07:04:09 GMT
# gpg:                using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg:                issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A  05C0 64DF 38E8 AF7E 215F

* tag 'pull-tcg-20240202-2' of https://gitlab.com/rth7680/qemu: (58 commits)
  linux-user/aarch64: Add padding before __kernel_rt_sigreturn
  target/sparc: Remove FSR_FTT_NMASK, FSR_FTT_CEXC_NMASK
  target/sparc: Split fcc out of env->fsr
  target/sparc: Remove cpu_fsr
  target/sparc: Split cexc and ftt from env->fsr
  target/sparc: Merge check_ieee_exceptions with FPop helpers
  target/sparc: Clear cexc and ftt in do_check_ieee_exceptions
  target/sparc: Split ver from env->fsr
  target/sparc: Introduce cpu_get_fsr, cpu_put_fsr
  target/sparc: Remove qt0, qt1 temporaries
  target/sparc: Use i128 for Fdmulq
  target/sparc: Use i128 for FdTOq, FxTOq
  target/sparc: Use i128 for FsTOq, FiTOq
  target/sparc: Use i128 for FCMPq, FCMPEq
  target/sparc: Use i128 for FqTOd, FqTOx
  target/sparc: Use i128 for FqTOs, FqTOi
  target/sparc: Use i128 for FADDq, FSUBq, FMULq, FDIVq
  target/sparc: Use i128 for FSQRTq
  target/sparc: Inline FNEG, FABS
  target/sparc: Introduce gen_{load,store}_fpr_Q
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-02-03 13:31:45 +00:00
Richard Henderson
b7770d72f5 target/arm: Split out arm_env_mmu_index
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-02-03 08:52:25 +10:00
Peter Maydell
9f2e8ac090 target/arm: Add ID_AA64ZFR0_EL1.B16B16 to the exposed-to-userspace set
In kernel commit 5d5b4e8c2d9ec ("arm64/sve: Report FEAT_SVE_B16B16 to
userspace") Linux added ID_AA64ZFR0_EL1.B16B16 to the set of ID
register fields which it exposes to userspace.  Update our
exported_bits mask to include this.

(This doesn't yet change any behaviour for us, because we don't yet
have any CPUs that implement this feature, which is part of SVE2.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240125134304.1470404-1-peter.maydell@linaro.org
2024-02-02 13:51:57 +00:00
Jan Klötzke
f670be1aad target/arm: fix exception syndrome for AArch32 bkpt insn
Debug exceptions that target AArch32 Hyp mode are reported differently
than on AArch64. Internally, QEMU uses the AArch64 syndromes. Therefore
such exceptions need to be converted to either a prefetch abort
(breakpoints, vector catch) or a data abort (watchpoints).

Cc: qemu-stable@nongnu.org
Signed-off-by: Jan Klötzke <jan.kloetzke@kernkonzept.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240127202758.3326381-1-jan.kloetzke@kernkonzept.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-02-02 13:51:57 +00:00
Philippe Mathieu-Daudé
f4f318b41a target/arm: Move GTimer definitions to new 'gtimer.h' header
Move Arm A-class Generic Timer definitions to the new
"target/arm/gtimer.h" header so units in hw/ which don't
need access to ARMCPU internals can use them without
having to include the huge "cpu.h".
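
For orientation, the definitions that move are the GTIMER_* timer indices; a
sketch of the new header (guard name and comments assumed):

    /* target/arm/gtimer.h (sketch) */
    #ifndef TARGET_ARM_GTIMER_H
    #define TARGET_ARM_GTIMER_H

    enum {
        GTIMER_PHYS,        /* EL1 physical timer */
        GTIMER_VIRT,        /* EL1 virtual timer */
        GTIMER_HYP,         /* EL2 physical timer */
        GTIMER_SEC,         /* EL3 physical timer */
        GTIMER_HYPVIRT,     /* EL2 virtual timer */
        NUM_GTIMERS
    };

    #endif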

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240118200643.29037-20-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-01-26 11:30:49 +00:00
Philippe Mathieu-Daudé
32b3a0c900 target/arm: Move e2h_access() helper around
e2h_access() was added in commit bb5972e439 ("target/arm:
Add VHE timer register redirection and aliasing") close to
the generic_timer_cp_reginfo[] array, but isn't used until the
vhe_reginfo[] definition. Move it closer to the other e2h
helpers.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240118200643.29037-19-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-01-26 11:30:49 +00:00
Philippe Mathieu-Daudé
2412813286 target/arm: Ensure icount is enabled when emulating INST_RETIRED
pmu_init() registers its events by checking each pm_event::supported()
handler. For INST_RETIRED, the event is only registered, and its bit
enabled in the PMU Common Event Identification register, when icount
is enabled as ICOUNT_PRECISE.

PMU events are TCG-only; hardware accelerators handle them
directly. Unfortunately we register the events in non-TCG builds too,
leading to linking errors such as:

  ld: Undefined symbols:
    _icount_to_ns, referenced from:
      _instructions_ns_per in target_arm_helper.c.o
  clang: error: linker command failed with exit code 1 (use -v to see invocation)

As a kludge, give a hint to the compiler by asserting that the
pm_event::get_count() and pm_event::ns_per_count() handlers will
only be called under this icount mode.
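
A sketch of the kludge (the exact helper names and assertion used here are
assumptions):

    /* sketch: pm_event::ns_per_count() handler for INST_RETIRED */
    static int64_t instructions_ns_per(uint64_t icount)
    {
        /* Only reachable with precise icount; this lets non-TCG builds
         * discard the icount_to_ns() reference at compile time. */
        assert(icount_enabled() == ICOUNT_PRECISE);
        return icount_to_ns((int64_t)icount);
    }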

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20231208113529.74067-5-philmd@linaro.org>
2024-01-19 12:28:59 +01:00
Philippe Mathieu-Daudé
8e98c27daa system/cpu-timers: Introduce ICountMode enumerator
Rather than having to look up what the icount values 0, 1, 2, ...
mean, use an enum definition.
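
A sketch of the enumerator (names assumed; the point is replacing the magic
0/1/2 values with self-describing names):

    typedef enum {
        ICOUNT_DISABLED = 0,    /* no instruction counting */
        ICOUNT_PRECISE,         /* deterministic, fixed insns-per-tick */
        ICOUNT_ADAPTATIVE,      /* rate adjusted to track wall-clock time */
    } ICountMode;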

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20231208113529.74067-4-philmd@linaro.org>
2024-01-19 12:28:59 +01:00
Peter Maydell
3b32140e70 target/arm: Enhance CPU_LOG_INT to show SPSR on AArch64 exception-entry
We already print various lines of information when we take an
exception, including the ELR and (if relevant) the FAR. Now
that FEAT_NV means that we might report something other than
the old PSTATE to the guest as the SPSR, it's worth logging
this as well.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:44:45 +00:00
Peter Maydell
f5bd261a61 target/arm: Mark up VNCR offsets (offsets >= 0x200, except GIC)
Mark up the cpreginfo structs to indicate offsets for system
registers from VNCR_EL2, as defined in table D8-66 in rule R_CSRPQ in
the Arm ARM.  This covers all the remaining offsets at 0x200 and
above, except for the GIC ICH_* registers.

(Note that because we don't implement FEAT_SPE, FEAT_TRF,
FEAT_MPAM, FEAT_BRBE or FEAT_AMUv1p1 we don't implement any
of the registers that use offsets at 0x800 and above.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:44:45 +00:00
Peter Maydell
46932cf26e target/arm: Mark up VNCR offsets (offsets 0x168..0x1f8)
Mark up the cpreginfo structs to indicate offsets for system
registers from VNCR_EL2, as defined in table D8-66 in rule R_CSRPQ in
the Arm ARM.  This commit covers offsets 0x168 to 0x1f8.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:44:45 +00:00
Peter Maydell
bb7b95b070 target/arm: Mark up VNCR offsets (offsets 0x100..0x160)
Mark up the cpreginfo structs to indicate offsets for system
registers from VNCR_EL2, as defined in table D8-66 in rule R_CSRPQ in
the Arm ARM.  This commit covers offsets 0x100 to 0x160.

Many (but not all) of the registers in this range have _EL12 aliases,
and the slot in memory is shared between the _EL12 version of the
register and the _EL1 version.  Where we programmatically generate
the regdef for the _EL12 register, arrange that its
nv2_redirect_offset is set up correctly to do this.
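
As an illustration of the markup (the offset value and flag name here are
placeholders for this sketch, not values taken from table D8-66):

    /* sketch: a cpreginfo entry gains its VNCR_EL2 memory offset */
    { .name = "SCTLR_EL1", .state = ARM_CP_STATE_BOTH,
      /* ... opcode fields, access, fieldoffset as before ... */
      .nv2_redirect_offset = 0x110 | NV2_REDIR_NV1,   /* value illustrative */
    },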

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:44:45 +00:00
Peter Maydell
dfe8a9ee6a target/arm: Mark up VNCR offsets (offsets 0x0..0xff)
Mark up the cpreginfo structs to indicate offsets for system
registers from VNCR_EL2, as defined in table D8-66 in rule R_CSRPQ in
the Arm ARM. This commit covers offsets below 0x100; all of these
registers are redirected to memory regardless of the value of
HCR_EL2.NV1.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:44:45 +00:00
Peter Maydell
c35da11df4 target/arm: Handle FEAT_NV2 redirection of SPSR_EL2, ELR_EL2, ESR_EL2, FAR_EL2
Under FEAT_NV2, when HCR_EL2.{NV,NV2} == 0b11 at EL1, accesses to the
registers SPSR_EL2, ELR_EL2, ESR_EL2, FAR_EL2 and TFSR_EL2 (which
would UNDEF without FEAT_NV or FEAT_NV2) should instead access the
equivalent EL1 registers SPSR_EL1, ELR_EL1, ESR_EL1, FAR_EL1 and
TFSR_EL1.

Because there are only five registers involved and the encoding for
the EL1 register is identical to that of the EL2 register except
that opc1 is 0, we handle this by finding the EL1 register in the
hash table and using it instead.

Note that traps that apply to direct accesses to the EL1 register,
such as active fine-grained traps or other trap bits, do not trigger
when it is accessed via the EL2 encoding in this way.  However, some
traps that are defined by the EL2 register may apply.  We therefore
call the EL2 register's accessfn first.  The only one of the five
which has such traps is TFSR_EL2: make sure its accessfn correctly
handles both FEAT_NV (where we trap to EL2 without checking ATA bits)
and FEAT_NV2 (where we check ATA bits and then redirect to TFSR_EL1).

(We don't need the NV1 tbflag bit until the next patch, but we
introduce it here to avoid putting the NV, NV1, NV2 bits in an
odd order.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:53 +00:00
Peter Maydell
ef8a4a8816 target/arm: Handle FEAT_NV2 changes to when SPSR_EL1.M reports EL2
With FEAT_NV2, the condition for when SPSR_EL1.M should report that
an exception was taken from EL2 changes.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:52 +00:00
Peter Maydell
b5ba6c99a8 target/arm: Implement VNCR_EL2 register
For FEAT_NV2, a new system register VNCR_EL2 holds the base
address of the memory which nested-guest system register
accesses are redirected to. Implement this register.
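
A sketch of such a definition (the encoding is the architected one for
VNCR_EL2; the struct field and state names follow QEMU conventions but are
assumptions here):

    static const ARMCPRegInfo vncr_reginfo[] = {
        { .name = "VNCR_EL2", .state = ARM_CP_STATE_AA64,
          .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 2, .opc2 = 0,
          .access = PL2_RW,
          .fieldoffset = offsetof(CPUARMState, cp15.vncr_el2) },
    };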

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:52 +00:00
Peter Maydell
a13cd25d9b target/arm: Handle HCR_EL2 accesses for FEAT_NV2 bits
FEAT_NV2 defines another new bit in HCR_EL2: NV2. When the
feature is enabled, allow this bit to be written in HCR_EL2.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:52 +00:00
Peter Maydell
f11440b426 target/arm: Don't honour PSTATE.PAN when HCR_EL2.{NV, NV1} == {1, 1}
For FEAT_NV, when HCR_EL2.{NV,NV1} is {1,1} PAN is always disabled
even when the PSTATE.PAN bit is set. Implement this by having
arm_pan_enabled() return false in this situation.
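
A sketch of the resulting helper (HCR_NV/HCR_NV1 are the existing HCR_EL2
bit masks; the overall shape is an assumption, not the exact patch):

    static bool arm_pan_enabled(CPUARMState *env)
    {
        if (is_a64(env)) {
            if ((arm_hcr_el2_eff(env) & (HCR_NV | HCR_NV1))
                == (HCR_NV | HCR_NV1)) {
                return false;               /* FEAT_NV: {NV,NV1} == {1,1} */
            }
            return env->pstate & PSTATE_PAN;
        } else {
            return env->uncached_cpsr & CPSR_PAN;
        }
    }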

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:50 +00:00
Peter Maydell
7fda076357 target/arm: Always use arm_pan_enabled() when checking if PAN is enabled
Currently the code in target/arm/helper.c mostly checks the PAN bits
in env->pstate or env->uncached_cpsr directly when it wants to know
if PAN is enabled, because in most callsites we know whether we are
in AArch64 or AArch32. We do have an arm_pan_enabled() function, but
we only use it in a few places where the code might run in either an
AArch32 or AArch64 context.

For FEAT_NV, when HCR_EL2.{NV,NV1} is {1,1} PAN is always disabled
even when the PSTATE.PAN bit is set, so the "is PAN enabled" test
becomes more complicated. Make all places that check for PAN use
arm_pan_enabled(), so we have one place to put the FEAT_NV test.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:50 +00:00
Peter Maydell
ad4e2d4db1 target/arm: Trap registers when HCR_EL2.{NV, NV1} == {1, 1}
When HCR_EL2.{NV,NV1} is {1,1} we must trap five extra registers to
EL2: VBAR_EL1, ELR_EL1, SPSR_EL1, SCXTNUM_EL1 and TFSR_EL1.
Implement these traps.

This trap does not apply when FEAT_NV2 is implemented and enabled;
include the check that HCR_EL2.NV2 is 0 here, to save us having
to come back and add it later.
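
A sketch of the trap condition (the accessfn name is hypothetical):

    static CPAccessResult access_nv1_trap(CPUARMState *env,
                                          const ARMCPRegInfo *ri, bool isread)
    {
        if (arm_current_el(env) == 1) {
            uint64_t hcr = arm_hcr_el2_eff(env);
            /* trap only for {NV,NV1} == {1,1} with NV2 == 0 */
            if ((hcr & (HCR_NV | HCR_NV1 | HCR_NV2))
                == (HCR_NV | HCR_NV1)) {
                return CP_ACCESS_TRAP_EL2;
            }
        }
        return CP_ACCESS_OK;
    }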

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:49 +00:00
Peter Maydell
29eda9cd19 target/arm: Set SPSR_EL1.M correctly when nested virt is enabled
FEAT_NV requires that when HCR_EL2.{NV,NV1} == {1,0} and an exception
is taken from EL1 to EL1 then the reported EL in SPSR_EL1.M should be
EL2, not EL1.  Implement this behaviour.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:49 +00:00
Peter Maydell
83aea11db0 target/arm: Make EL2 cpreg accessfns safe for FEAT_NV EL1 accesses
FEAT_NV and FEAT_NV2 will allow EL1 to attempt to access cpregs that
only exist at EL2. This means we're going to want to run their
accessfns when the CPU is at EL1. In almost all cases, the behaviour
we want is "the accessfn returns OK if at EL1".

Mostly the accessfn already does the right thing; in a few cases we
need to explicitly check that the EL is not 1 before applying various
trap controls, or split out an accessfn used both for an _EL1 and an
_EL2 register into two so we can handle the FEAT_NV case correctly
for the _EL2 register.

There are two registers where we want the accessfn to trap for
a FEAT_NV EL1 access: VSTTBR_EL2 and VSTCR_EL2 should UNDEF
an access from NonSecure EL1, not trap to EL2 under FEAT_NV.
The way we have written sel2_access() already results in this
behaviour.

We can identify the registers we care about here because they
all have opc1 == 4 or 5.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:47 +00:00
Peter Maydell
e730287cef target/arm: *_EL12 registers should UNDEF when HCR_EL2.E2H is 0
The alias registers like SCTLR_EL12 only exist when HCR_EL2.E2H
is 1; they should UNDEF otherwise. We weren't implementing this.
Add an intercept of the accessfn for these aliases, and implement
the UNDEF check.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:47 +00:00
Peter Maydell
6f53b1267b target/arm: Record correct opcode fields in cpreg for E2H aliases
For FEAT_VHE, we define a set of register aliases, so that for instance:
 * the SCTLR_EL1 encoding either accesses the real SCTLR_EL1, or (if E2H
   is 1) SCTLR_EL2
 * a new SCTLR_EL12 register accesses SCTLR_EL1 if E2H is 1

However when we create the 'new_reg' cpreg struct for the SCTLR_EL12
register, we duplicate the information in the SCTLR_EL1 cpreg, which
means the opcode fields are those of SCTLR_EL1, not SCTLR_EL12.  This
is a problem for code which looks at the cpreg opcode fields to
determine behaviour (e.g.  in access_check_cp_reg()). In practice
the current checks we do there don't intersect with the *_EL12
registers, but for FEAT_NV this will become a problem.

Write the correct values from the encoding into the new_reg struct.
This restores the invariant that the cpreg that you get back
from the hashtable has opcode fields that match the key you used
to retrieve it.

When we call the readfn or writefn for the target register, we
pass it the cpreg struct for that target register, not the one
for the alias, in case the readfn/writefn want to look at the
opcode fields to determine behaviour. This means we need to
interpose custom read/writefns for the e12 aliases.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:46 +00:00
Peter Maydell
5725977915 target/arm: Implement HCR_EL2.AT handling
The FEAT_NV HCR_EL2.AT bit enables trapping of some address
translation instructions from EL1 to EL2.  Implement this behaviour.
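
A sketch of the kind of accessfn this needs (name hypothetical; HCR_AT is
the bit added by this series):

    static CPAccessResult at_s1e01_access(CPUARMState *env,
                                          const ARMCPRegInfo *ri, bool isread)
    {
        /* HCR_EL2.AT traps the EL1 AT S1E0x and S1E1x operations to EL2 */
        if (arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_AT)) {
            return CP_ACCESS_TRAP_EL2;
        }
        return CP_ACCESS_OK;
    }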

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:45 +00:00
Peter Maydell
67e55c73c3 target/arm: Handle HCR_EL2 accesses for bits introduced with FEAT_NV
FEAT_NV defines three new bits in HCR_EL2: NV, NV1 and AT.  When the
feature is enabled, allow these bits to be written, and flush the
TLBs for the bits which affect page table interpretation.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:44 +00:00
Stefan Hajnoczi
a4a411fbaf Replace "iothread lock" with "BQL" in comments
The term "iothread lock" is obsolete. The APIs use Big QEMU Lock (BQL)
in their names. Update the code comments to use "BQL" instead of
"iothread lock".

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Paul Durrant <paul@xen.org>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Message-id: 20240102153529.486531-5-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2024-01-08 10:45:43 -05:00
Stefan Hajnoczi
195801d700 system/cpus: rename qemu_mutex_lock_iothread() to bql_lock()
The Big QEMU Lock (BQL) has many names and they are confusing. The
actual QemuMutex variable is called qemu_global_mutex but it's commonly
referred to as the BQL in discussions and some code comments. The
locking APIs, however, are called qemu_mutex_lock_iothread() and
qemu_mutex_unlock_iothread().

The "iothread" name is historic and comes from when the main thread was
split into KVM vcpu threads and the "iothread" (now called the main
loop thread). I have contributed to the confusion myself by introducing
a separate --object iothread, a separate concept unrelated to the BQL.

The "iothread" name is no longer appropriate for the BQL. Rename the
locking APIs to:
- void bql_lock(void)
- void bql_unlock(void)
- bool bql_locked(void)

There are more APIs with "iothread" in their names. Subsequent patches
will rename them. There are also comments and documentation that will be
updated in later patches.
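
For example, a call site changes mechanically:

    /* before */
    qemu_mutex_lock_iothread();
    /* ... code that needs the BQL ... */
    qemu_mutex_unlock_iothread();

    /* after */
    bql_lock();
    /* ... code that needs the BQL ... */
    bql_unlock();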

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Acked-by: Fabiano Rosas <farosas@suse.de>
Acked-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Acked-by: Peter Xu <peterx@redhat.com>
Acked-by: Eric Farman <farman@linux.ibm.com>
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Acked-by: Hyman Huang <yong.huang@smartx.com>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-id: 20240102153529.486531-2-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2024-01-08 10:45:43 -05:00
Gavin Shan
b5154a2d61 target/arm: Use generic cpu_list()
The output of the following command is unchanged before and after
this patch is applied.

[gshan@gshan q]$ ./build/qemu-system-aarch64 -cpu ?
Available CPUs:
  a64fx
  arm1026
  arm1136
  arm1136-r2
  arm1176
  arm11mpcore
  arm926
  arm946
  cortex-a15
  cortex-a35
  cortex-a53
  cortex-a55
  cortex-a57
  cortex-a7
  cortex-a710
  cortex-a72
  cortex-a76
  cortex-a8
  cortex-a9
  cortex-m0
  cortex-m3
  cortex-m33
  cortex-m4
  cortex-m55
  cortex-m7
  cortex-r5
  cortex-r52
  cortex-r5f
  max
  neoverse-n1
  neoverse-n2
  neoverse-v1
  pxa250
  pxa255
  pxa260
  pxa261
  pxa262
  pxa270-a0
  pxa270-a1
  pxa270
  pxa270-b0
  pxa270-b1
  pxa270-c0
  pxa270-c5
  sa1100
  sa1110
  ti925t

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20231114235628.534334-9-gshan@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-01-05 16:20:14 +01:00
Jean-Philippe Brucker
6980c31dec target/arm/helper: Propagate MDCR_EL2.HPMN into PMCR_EL0.N
MDCR_EL2.HPMN allows a hypervisor to limit the number of PMU counters
available to EL1 and EL0 (to keep the others to itself). QEMU already
implements this split correctly, except for PMCR_EL0.N reads: the number
of counters read by EL1 or EL0 should be the one configured in
MDCR_EL2.HPMN.
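
A sketch of the adjustment in the PMCR_EL0 read path (HPMN is MDCR_EL2[4:0];
the PMCRN_* constants and the surrounding function are simplified
assumptions):

    /* sketch: PMCR_EL0.N as seen from EL0/EL1 reflects MDCR_EL2.HPMN */
    uint64_t pmcr = env->cp15.c9_pmcr;
    if (arm_current_el(env) <= 1 && arm_is_el2_enabled(env)) {
        unsigned hpmn = extract64(arm_mdcr_el2_eff(env), 0, 5);
        pmcr = (pmcr & ~(uint64_t)PMCRN_MASK)
             | ((uint64_t)hpmn << PMCRN_SHIFT);
    }
    return pmcr;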

Cc: qemu-stable@nongnu.org
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Message-id: 20231215144652.4193815-2-jean-philippe@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-12-19 18:03:32 +00:00
Philippe Mathieu-Daudé
7a3014a9a2 target/arm: Restrict DC CVAP & DC CVADP instructions to TCG accel
Hardware accelerators handle that in *hardware*.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231130142519.28417-3-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-12-19 17:57:49 +00:00
Philippe Mathieu-Daudé
d1d119bbd7 target/arm: Restrict TCG specific helpers
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231130142519.28417-2-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2023-12-19 17:57:48 +00:00
Peter Maydell
c36a0d577b target/arm: Don't implement *32_EL2 registers when EL1 is AArch64 only
The system registers DBGVCR32_EL2, FPEXC32_EL2, DACR32_EL2 and
IFSR32_EL2 are present only to allow an AArch64 EL2 or EL3 to read
and write the contents of an AArch32-only system register.  The
architecture requires that they are present only when EL1 can be
AArch32, but we implement them unconditionally.  This was OK when all
our CPUs supported AArch32 EL1, but we have quite a lot of CPU models
now which only support AArch64 at EL1:
 a64fx
 cortex-a76
 cortex-a710
 neoverse-n1
 neoverse-n2
 neoverse-v1

Only define these registers for CPUs which allow AArch32 EL1.
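
A sketch of the registration condition (the reginfo array name is
illustrative; cpu_isar_feature(aa64_aa32_el1, ...) is QEMU's "EL1 can be
AArch32" test):

    /* sketch: register DBGVCR32_EL2 and friends only when EL1 can be
     * AArch32 */
    if (cpu_isar_feature(aa64_aa32_el1, cpu)) {
        define_arm_cp_regs(cpu, el2_aa32_bridge_reginfo);
    }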

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231121144605.3980419-1-peter.maydell@linaro.org
2023-12-19 17:57:48 +00:00