Commit Graph

3087 Commits

Author SHA1 Message Date
Peter Maydell
e2ce5fcde4 target/arm: Implement HCR_EL2.TID4 traps
For FEAT_EVT, the HCR_EL2.TID4 trap allows trapping of the cache ID
registers CCSIDR_EL1, CCSIDR2_EL1, CLIDR_EL1 and CSSELR_EL1 (and
their AArch32 equivalents).  This is a subset of the registers
trapped by HCR_EL2.TID2, which includes all of these and also the
CTR_EL0 register.

Our implementation already uses a separate access function for
CTR_EL0 (ctr_el0_access()), so all of the registers currently using
access_aa64_tid2() should also be checking TID4.  Make that function
check both TID2 and TID4, and rename it appropriately.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2022-12-15 11:18:20 +00:00
Peter Maydell
2d3ce4c6f3 target/arm: Implement HCR_EL2.TICAB,TOCU traps
For FEAT_EVT, the HCR_EL2.TICAB bit allows trapping of the AArch32
ICIALLUIS and AArch64 IC IALLUIS cache maintenance instructions.

The HCR_EL2.TOCU bit traps all the other cache maintenance
instructions that operate to the point of unification:
 AArch64 IC IVAU, IC IALLU, DC CVAU
 AArch32 ICIMVAU, ICIALLU, DCCMVAU

The two trap bits between them cover all of the cache maintenance
instructions which must also check the HCR_TPU flag.  Turn the old
aa64_cacheop_pou_access() function into a helper function which takes
the set of HCR_EL2 flags to check as an argument, and call it from
new access_ticab() and access_tocu() functions as appropriate for
each cache op.
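
In outline, the refactoring described here looks something like the
following self-contained sketch; the type names and flag values are
simplified stand-ins rather than QEMU's actual definitions:

    #include <stdbool.h>
    #include <stdint.h>

    /* Simplified stand-ins for QEMU's CPAccessResult and HCR_EL2 flags. */
    typedef enum { CP_ACCESS_OK, CP_ACCESS_TRAP_EL2 } CPAccessResult;
    enum { HCR_TPU = 1u << 0, HCR_TICAB = 1u << 1, HCR_TOCU = 1u << 2 };

    /* Shared helper: trap EL1 cache maintenance to EL2 if any of the
     * given HCR_EL2 bits is set in the effective HCR value. */
    static CPAccessResult do_cacheop_pou_access(uint64_t hcr_eff,
                                                int current_el,
                                                uint64_t hcr_flags)
    {
        if (current_el == 1 && (hcr_eff & hcr_flags)) {
            return CP_ACCESS_TRAP_EL2;
        }
        return CP_ACCESS_OK;
    }

    /* IC IALLUIS / ICIALLUIS: trapped by either TPU or TICAB. */
    static CPAccessResult access_ticab(uint64_t hcr_eff, int el)
    {
        return do_cacheop_pou_access(hcr_eff, el, HCR_TPU | HCR_TICAB);
    }

    /* The other PoU cache ops: trapped by either TPU or TOCU. */
    static CPAccessResult access_tocu(uint64_t hcr_eff, int el)
    {
        return do_cacheop_pou_access(hcr_eff, el, HCR_TPU | HCR_TOCU);
    }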

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2022-12-15 11:18:20 +00:00
Peter Maydell
fe3ca86c46 target/arm: Implement HCR_EL2.TTLBOS traps
For FEAT_EVT, the HCR_EL2.TTLBOS bit allows trapping on EL1
use of TLB maintenance instructions that operate on the
outer shareable domain:

TLBI VMALLE1OS, TLBI VAE1OS, TLBI ASIDE1OS, TLBI VAAE1OS,
TLBI VALE1OS, TLBI VAALE1OS, TLBI RVAE1OS, TLBI RVAAE1OS,
TLBI RVALE1OS, and TLBI RVAALE1OS.

(There are no AArch32 outer-shareable TLB maintenance ops.)

Implement the trapping.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2022-12-15 11:18:20 +00:00
Peter Maydell
0f66d223e3 target/arm: Implement HCR_EL2.TTLBIS traps
For FEAT_EVT, the HCR_EL2.TTLBIS bit allows trapping on EL1 use of
TLB maintenance instructions that operate on the inner shareable
domain:

AArch64:
 TLBI VMALLE1IS, TLBI VAE1IS, TLBI ASIDE1IS, TLBI VAAE1IS,
 TLBI VALE1IS, TLBI VAALE1IS, TLBI RVAE1IS, TLBI RVAAE1IS,
 TLBI RVALE1IS, and TLBI RVAALE1IS.

AArch32:
 TLBIALLIS, TLBIMVAIS, TLBIASIDIS, TLBIMVAAIS, TLBIMVALIS,
 and TLBIMVAALIS.

Add the trapping support.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2022-12-15 11:18:20 +00:00
Peter Maydell
d2fd931362 target/arm: Allow relevant HCR bits to be written for FEAT_EVT
FEAT_EVT adds five new bits to the HCR_EL2 register: TTLBIS, TTLBOS,
TICAB, TOCU and TID4.  These allow the guest to enable trapping of
various EL1 instructions to EL2.  In this commit, add the necessary
code to allow the guest to set these bits if the feature is present;
because the bits are always zero when the feature isn't present, we
won't need to use explicit feature checks in the "trap on condition"
tests in the following commits.

Note that although full implementation of the feature (mandatory from
Armv8.5 onward) requires all five trap bits, the ID registers permit
a value indicating that only TICAB, TOCU and TID4 are implemented,
which might be the case for CPUs between Armv8.2 and Armv8.5.
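
A minimal standalone illustration of the "unimplemented bits read as
zero, so no later feature check is needed" idea; the bit positions and
helper name are stand-ins, not the real HCR_EL2 layout:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in bit positions for the five FEAT_EVT trap bits. */
    #define HCR_TTLBIS (1ULL << 0)
    #define HCR_TTLBOS (1ULL << 1)
    #define HCR_TICAB  (1ULL << 2)
    #define HCR_TOCU   (1ULL << 3)
    #define HCR_TID4   (1ULL << 4)

    static uint64_t hcr_write(uint64_t value, bool have_evt)
    {
        uint64_t valid_mask = 0; /* ...plus all the other writable bits */

        if (have_evt) {
            valid_mask |= HCR_TTLBIS | HCR_TTLBOS | HCR_TICAB |
                          HCR_TOCU | HCR_TID4;
        }
        /* Bits outside valid_mask are RES0: they always read back as 0,
         * so later trap checks need no explicit feature test. */
        return value & valid_mask;
    }

    int main(void)
    {
        printf("without EVT: %#llx\n",
               (unsigned long long)hcr_write(HCR_TID4, false)); /* 0 */
        printf("with EVT:    %#llx\n",
               (unsigned long long)hcr_write(HCR_TID4, true));  /* 0x10 */
        return 0;
    }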

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2022-12-15 11:18:19 +00:00
Timofey Kutergin
94bc3b067e target/arm: Add Cortex-A55 CPU
The Cortex-A55 is one of the newer armv8.2+ CPUs; in particular
it supports the Privileged Access Never (PAN) feature. Add
a model of this CPU, so you can use a CPU type on the virt
board that models a specific real hardware CPU, rather than
having to use the QEMU-specific "max" CPU type.

Signed-off-by: Timofey Kutergin <tkutergin@gmail.com>
Message-id: 20221121150819.2782817-1-tkutergin@gmail.com
[PMM: tweaked commit message]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-12-15 11:18:19 +00:00
Markus Armbruster
fe8ac1fa49 qapi machine: Elide redundant has_FOO in generated C
The has_FOO for pointer-valued FOO are redundant, except for arrays.
They are also a nuisance to work with.  Recent commit "qapi: Start to
elide redundant has_FOO in generated C" provided the means to elide
them step by step.  This is the step for qapi/machine*.json.

Said commit explains the transformation in more detail.  The invariant
violations mentioned there do not occur here.

Cc: Eduardo Habkost <eduardo@habkost.net>
Cc: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
Cc: Yanan Wang <wangyanan55@huawei.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20221104160712.3005652-16-armbru@redhat.com>
2022-12-14 20:04:47 +01:00
Evgeny Ermakov
475e56b630 target/arm: Set TCGCPUOps.restore_state_to_opc for v7m
This setting got missed, breaking v7m.

Fixes: 56c6c98df8 ("target/arm: Convert to tcg_ops restore_state_to_opc")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1347
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Evgeny Ermakov <evgeny.v.ermakov@gmail.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20221129204146.550394-1-richard.henderson@linaro.org>
2022-11-29 18:15:26 -05:00
Ard Biesheuvel
15f8f4671a target/arm: Use signed quantity to represent VMSAv8-64 translation level
The LPA2 extension implements 52-bit virtual addressing for 4k and 16k
translation granules, and for the former, this means an additional level
of translation is needed. This means we start counting at -1 instead of
0 when doing a walk, and so 'level' is now a signed quantity, and should
be typed as such. So turn it from uint32_t into int32_t.

This avoids a level of -1 getting misinterpreted as being >= 3, and
terminating a page table walk prematurely with a bogus output address.
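
A tiny standalone example of why the signedness matters; an unsigned
level of -1 wraps to a huge value, so a "past the last level" comparison
misfires:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t ulevel = -1;   /* wraps to 4294967295 */
        int32_t  slevel = -1;   /* stays -1 */

        /* With LPA2 the walk can legitimately start at level -1. */
        printf("unsigned: level >= 3 is %d\n", ulevel >= 3); /* 1: bogus */
        printf("signed:   level >= 3 is %d\n", slevel >= 3); /* 0: correct */
        return 0;
    }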

Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
Cc: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-11-22 16:10:25 +00:00
Peter Maydell
26ba00cf58 target/arm: Don't do two-stage lookup if stage 2 is disabled
In get_phys_addr_with_struct(), we call get_phys_addr_twostage() if
the CPU supports EL2.  However, we don't check here that stage 2 is
actually enabled.  Instead we only check that inside
get_phys_addr_twostage() to skip stage 2 translation.  This means
that even if stage 2 is disabled we still tell the stage 1 lookup to
do its page table walks via stage 2.

This works by luck for normal CPU accesses, but it breaks for debug
accesses, which are used by the disassembler and also by semihosting
file reads and writes, because the debug case takes a different code
path inside S1_ptw_translate().

This means that setups that use semihosting for file loads are broken
(a regression since 7.1, introduced in recent ptw refactoring), and
that sometimes disassembly in debug logs reports "unable to read
memory" rather than showing the guest insns.

Fix the bug by hoisting the "is stage 2 enabled?" check up to
get_phys_addr_with_struct(), so that we handle S2 disabled the same
way we do the "no EL2" case, with a simple single stage lookup.

Reported-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221121212404.1450382-1-peter.maydell@linaro.org
2022-11-22 13:18:22 +00:00
Ard Biesheuvel
312b71abce target/arm: Limit LPA2 effective output address when TCR.DS == 0
With LPA2, the effective output address size is at most 48 bits when
TCR.DS == 0. This case is currently unhandled in the page table walker,
where we happily assume LVA/64k granule when outputsize > 48 and
param.ds == 0, resulting in the wrong conversion to be used from a
page table descriptor to a physical address.

    if (outputsize > 48) {
        if (param.ds) {
            descaddr |= extract64(descriptor, 8, 2) << 50;
        } else {
            descaddr |= extract64(descriptor, 12, 4) << 48;
        }
    }

So cap the outputsize to 48 when TCR.DS is cleared, as per the
architecture.
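
A self-contained sketch of the capping fix; extract64() is
re-implemented locally and only the two output-address fields quoted
above are modelled:

    #include <stdbool.h>
    #include <stdint.h>

    /* Local stand-in for QEMU's extract64() (length must be 1..63 here). */
    static uint64_t extract64(uint64_t value, int start, int length)
    {
        return (value >> start) & (~0ULL >> (64 - length));
    }

    static uint64_t descriptor_to_pa(uint64_t descriptor, uint64_t descaddr,
                                     int outputsize, bool ds)
    {
        /* The fix: with TCR.DS == 0 the effective OA size is at most 48,
         * so the LVA/64k conversion below can no longer be taken. */
        if (!ds && outputsize > 48) {
            outputsize = 48;
        }
        if (outputsize > 48) {
            if (ds) {
                descaddr |= extract64(descriptor, 8, 2) << 50;
            } else {
                descaddr |= extract64(descriptor, 12, 4) << 48;
            }
        }
        return descaddr;
    }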

Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
Cc: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221116170316.259695-1-ardb@kernel.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-11-21 11:46:46 +00:00
Richard Henderson
cead7fa4c0 target/arm: Two fixes for secure ptw
Reversed the sense of non-secure in get_phys_addr_lpae,
and failed to initialize attrs.secure for ARMMMUIdx_Phys_S.

Fixes: 48da29e4 ("target/arm: Add ptw_idx to S1Translate")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1293
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-11-04 10:58:58 +00:00
Ake Koomsin
638d5dbd78 target/arm: Honor HCR_E2H and HCR_TGE in ats_write64()
We need to check HCR_E2H and HCR_TGE to select the right MMU index for
the correct translation regime.

To check for EL2&0 translation regime:
- For S1E0*, S1E1* and S12E* ops, check both HCR_E2H and HCR_TGE
- For S1E2* ops, check only HCR_E2H
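
Sketched as standalone decision functions; the index names mirror
QEMU's EL1&0/EL2&0/EL2 regimes but are defined locally and simplified:

    #include <stdbool.h>

    typedef enum {
        MMUIdx_E10_0, MMUIdx_E10_1,   /* EL1&0 regime */
        MMUIdx_E20_0, MMUIdx_E20_2,   /* EL2&0 regime */
        MMUIdx_E2,                    /* EL2 regime   */
    } MMUIdx;

    /* S1E0x / S1E1x / S12Ex style ops: the EL2&0 regime applies only
     * when both HCR_E2H and HCR_TGE are set. */
    static MMUIdx mmu_idx_for_el01_at(bool e2h, bool tge, bool el0)
    {
        if (e2h && tge) {
            return el0 ? MMUIdx_E20_0 : MMUIdx_E20_2;
        }
        return el0 ? MMUIdx_E10_0 : MMUIdx_E10_1;
    }

    /* S1E2x style ops: EL2&0 whenever HCR_E2H is set, regardless of TGE. */
    static MMUIdx mmu_idx_for_el2_at(bool e2h)
    {
        return e2h ? MMUIdx_E20_2 : MMUIdx_E2;
    }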

Signed-off-by: Ake Koomsin <ake@igel.co.jp>
Message-id: 20221101064250.12444-1-ake@igel.co.jp
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-11-04 10:58:58 +00:00
Richard Henderson
302ad91209 target/arm: Copy the entire vector in DO_ZIP
With odd_ofs set, we weren't copying enough data.

Fixes: 09eb6d7025 ("target/arm: Move sve zip high_ofs into simd_data")
Reported-by: Idan Horowitz <idan.horowitz@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20221031054144.3574-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-11-04 10:58:58 +00:00
Timofey Kutergin
6f2d9d7441 target/arm: Fix Privileged Access Never (PAN) for aarch32
When we implemented the PAN support we theoretically wanted
to support it for both AArch32 and AArch64, but in practice
several bugs made it essentially unusable with an AArch32
guest. Fix all those problems:

    - Use CPSR.PAN to check for PAN state in aarch32 mode
    - Throw a permission fault during address translation when PAN is
      enabled and the kernel tries to access a user-accessible page
    - Ignore the SCTLR_XP bit for armv7 and armv8 (it conflicts with SCTLR_SPAN).

Signed-off-by: Timofey Kutergin <tkutergin@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221027112619.2205229-1-tkutergin@gmail.com
[PMM: tweak commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-11-04 10:58:58 +00:00
Peter Maydell
4870f38b0b target/arm: Make TLBIOS and TLBIRANGE ops trap on HCR_EL2.TTLB
The HCR_EL2.TTLB bit is supposed to trap all EL1 execution of TLB
maintenance instructions.  However we have added new TLB insns for
FEAT_TLBIOS and FEAT_TLBIRANGE, and forgot to set their accessfn to
access_ttlb.  Add the missing accessfns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2022-11-04 10:58:58 +00:00
Richard Henderson
3d419a4dd2 accel/tcg: Remove will_exit argument from cpu_restore_state
The value passed is always true, and if the target's
synchronize_from_tb hook is non-trivial, not exiting
may be erroneous.

Reviewed-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-11-01 08:31:41 +11:00
Richard Henderson
c8d6c286ab target/arm: Use the max page size in a 2-stage ptw
We had only been reporting the stage2 page size.  This causes
problems if stage1 is using a larger page size (16k, 2M, etc),
but stage2 is using a smaller page size, because cputlb does
not set large_page_{addr,mask} properly.

Fix by using the max of the two page sizes.
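
In outline (the helper and field names are illustrative; QEMU stores
the log2 of the page size):

    #include <stdint.h>

    #define MAX(a, b) ((a) > (b) ? (a) : (b))

    /* Report the larger of the stage 1 and stage 2 page sizes, so that
     * cputlb tracks the whole region covered by the bigger mapping. */
    static uint8_t combined_lg_page_size(uint8_t s1_lg, uint8_t s2_lg)
    {
        return MAX(s1_lg, s2_lg);
    }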

Reported-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221024051851.3074715-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27 11:34:31 +01:00
Richard Henderson
65c123fdf5 target/arm: Implement FEAT_HAFDBS, dirty bit portion
Perform the atomic update for hardware management of the dirty bit.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221024051851.3074715-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27 11:34:31 +01:00
Richard Henderson
71943a1e90 target/arm: Implement FEAT_HAFDBS, access flag portion
Perform the atomic update for hardware management of the access flag.
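
Illustrated with plain C11 atomics on a local descriptor word; QEMU
performs the equivalent compare-and-swap against guest memory, and AF
is bit 10 of a VMSAv8-64 descriptor:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define DESC_AF (1ULL << 10)   /* Access Flag in a VMSAv8-64 descriptor */

    /* Set AF atomically; fail only if another agent changed the
     * descriptor under us, in which case the walk is restarted. */
    static bool set_access_flag(_Atomic uint64_t *descp, uint64_t old_desc)
    {
        uint64_t new_desc = old_desc | DESC_AF;
        return atomic_compare_exchange_strong(descp, &old_desc, new_desc);
    }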

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221024051851.3074715-13-richard.henderson@linaro.org
[PMM: Fix accidental PROT_WRITE to PAGE_WRITE; add missing
 main-loop.h include]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27 10:27:24 +01:00
Richard Henderson
34a57faeab target/arm: Tidy merging of attributes from descriptor and table
Replace some gotos with some nested if statements.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20221024051851.3074715-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27 10:27:24 +01:00
Richard Henderson
0e8df0fe24 target/arm: Consider GP an attribute in get_phys_addr_lpae
Both GP and DBM are in the upper attribute block.
Extend the computation of attrs to include them,
then simplify the setting of guarded.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20221024051851.3074715-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27 10:27:24 +01:00
Richard Henderson
4566609176 target/arm: Don't shift attrs in get_phys_addr_lpae
Leave the upper and lower attributes in the place they originate
from in the descriptor.  Shifting them around is confusing, since
one cannot read the bit numbers out of the manual.  Also, new
attributes have been added which would alter the shifts.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20221024051851.3074715-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27 10:27:24 +01:00
Richard Henderson
27c1b81d61 target/arm: Fix fault reporting in get_phys_addr_lpae
Always overriding fi->type was incorrect, as we would not properly
propagate the fault type from S1_ptw_translate, or arm_ldq_ptw.
Simplify things by providing a new label for a translation fault.
For other faults, store into fi directly.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20221024051851.3074715-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27 10:27:24 +01:00
Richard Henderson
fe4ddc151b target/arm: Remove loop from get_phys_addr_lpae
The unconditional loop was used both to iterate over levels
and to control parsing of attributes.  Use an explicit goto
in both cases.

While this appears less clean for iterating over levels, we
will need to jump back into the middle of this loop for
atomic updates, which is even uglier.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221024051851.3074715-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27 10:27:23 +01:00
Richard Henderson
f0a398a249 target/arm: Add ARMFault_UnsuppAtomicUpdate
This fault type is to be used with FEAT_HAFDBS when
the guest enables hw updates, but places the tables
in memory where atomic updates are unsupported.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20221024051851.3074715-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27 10:27:23 +01:00
Richard Henderson
93e5b3a6f9 target/arm: Move S1_ptw_translate outside arm_ld[lq]_ptw
Separate S1 translation from the actual lookup.
Will enable lpae hardware updates.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221024051851.3074715-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27 10:27:23 +01:00
Richard Henderson
8973922783 target/arm: Extract HA and HD in aa64_va_parameters
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20221024051851.3074715-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27 10:27:23 +01:00
Richard Henderson
980a68925c target/arm: Add isar predicates for FEAT_HAFDBS
The MMFR1 field may indicate support for hardware update of
access flag alone, or access flag and dirty bit.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221024051851.3074715-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27 10:27:23 +01:00
Richard Henderson
48da29e485 target/arm: Add ptw_idx to S1Translate
Hoist the computation of the mmu_idx for the ptw up to
get_phys_addr_with_struct and get_phys_addr_twostage.
This removes the duplicate check for stage2 disabled
from the middle of the walk, performing it only once.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20221024051851.3074715-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27 10:27:23 +01:00
Richard Henderson
edc05dd43a target/arm: Introduce regime_is_stage2
Reduce the amount of typing required for this check.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221024051851.3074715-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27 10:27:23 +01:00
Ake Koomsin
c939a7c7b9 target/arm: honor HCR_E2H and HCR_TGE in arm_excp_unmasked()
An exception targeting EL2 from lower EL is actually maskable when
HCR_E2H and HCR_TGE are both set. This applies to both secure and
non-secure Security state.

We can remove the conditions that try to suppress masking of
interrupts when we are Secure and the exception targets EL2 and
Secure EL2 is disabled.  This is OK because in that situation
arm_phys_excp_target_el() will never return 2 as the target EL.  The
'not if secure' check in this function was originally written before
arm_hcr_el2_eff(), and back then the target EL returned by
arm_phys_excp_target_el() could be 2 even if we were in Secure
EL0/EL1; but it is no longer needed.

Signed-off-by: Ake Koomsin <ake@igel.co.jp>
Message-id: 20221017092432.546881-1-ake@igel.co.jp
[PMM: Add commit message paragraph explaining why it's OK to
 remove the checks on secure and SCR_EEL2]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-27 10:27:23 +01:00
Peter Maydell
e4c93e44ab target/arm: Implement FEAT_E0PD
FEAT_E0PD adds new bits E0PD0 and E0PD1 to TCR_EL1, which allow the
OS to forbid EL0 access to half of the address space.  Since this is
an EL0-specific variation on the existing TCR_ELx.{EPD0,EPD1}, we can
implement it entirely in aa64_va_parameters().

This requires moving the existing regime_is_user() to internals.h
so that the code in helper.c can get at it.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221021160131.3531787-1-peter.maydell@linaro.org
2022-10-27 10:27:23 +01:00
Richard Henderson
56c6c98df8 target/arm: Convert to tcg_ops restore_state_to_opc
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-10-26 11:11:28 +10:00
Richard Henderson
8269c01417 accel/tcg: Simplify page_get/alloc_target_data
Since the only user, Arm MTE, always requires allocation,
merge the get and alloc functions to always produce a
non-null result.  Also assume that the user has already
checked page validity.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-10-26 11:11:28 +10:00
Richard Henderson
50d4c8c1d4 accel/tcg: Make page_alloc_target_data allocation constant
Use a constant target data allocation size for all pages.
This will be necessary to reduce overhead of page tracking.
Since TARGET_PAGE_DATA_SIZE is now required, we can use this
to omit data tracking for targets that don't require it.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-10-26 11:11:28 +10:00
Richard Henderson
abb80995d7 target/arm: Enable TARGET_TB_PCREL
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20 11:28:29 +01:00
Richard Henderson
35dbeb8177 target/arm: Introduce gen_pc_plus_diff for aarch32
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20 11:27:52 +01:00
Richard Henderson
19f6b76baa target/arm: Introduce gen_pc_plus_diff for aarch64
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20 11:27:52 +01:00
Richard Henderson
bb0356170a target/arm: Change gen_jmp* to work on displacements
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20 11:27:52 +01:00
Richard Henderson
b4f8d987f6 target/arm: Remove gen_exception_internal_insn pc argument
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.
Since we always pass dc->pc_curr, fold the arithmetic to zero displacement.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20 11:27:52 +01:00
Richard Henderson
55086e628f target/arm: Change gen_exception_insn* to work on displacements
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20 11:27:52 +01:00
Richard Henderson
c44c8b8b99 target/arm: Change gen_*set_pc_im to gen_*update_pc
In preparation for TARGET_TB_PCREL, reduce reliance on
absolute values by passing in pc difference.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20 11:27:52 +01:00
Richard Henderson
168122419e target/arm: Change gen_goto_tb to work on displacements
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20 11:27:52 +01:00
Richard Henderson
8df8727973 target/arm: Introduce curr_insn_len
A simple helper to retrieve the length of the current insn.
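
Presumably something along these lines, shown here with a pared-down
stand-in for DisasContext:

    #include <stdint.h>

    /* Pared-down stand-in for the translator's DisasContext. */
    typedef struct {
        uint64_t pc_curr;   /* address of the insn being translated */
        uint64_t pc_next;   /* address of the following insn */
    } DisasContextSketch;

    /* Length of the current insn: 2 for a 16-bit Thumb insn, 4 otherwise. */
    static int curr_insn_len(const DisasContextSketch *s)
    {
        return (int)(s->pc_next - s->pc_curr);
    }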

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20 11:27:52 +01:00
Richard Henderson
6b72c5424a target/arm: Use bool consistently for get_phys_addr subroutines
The return type of the functions is already bool, but in a few
instances we used an integer type with the return statement.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20 11:27:49 +01:00
Richard Henderson
3f5a74c543 target/arm: Split out get_phys_addr_twostage
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20 11:27:49 +01:00
Richard Henderson
f3639a64f6 target/arm: Use softmmu tlbs for page table walking
So far, limit the change to S1_ptw_translate, arm_ldl_ptw, and
arm_ldq_ptw.  Use probe_access_full to find the host address,
and if one is found, use a host load.  If the probe fails, we've got our
fault info already.  On the off chance that page tables are not
in RAM, continue to use the address_space_ld* functions.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20 11:27:49 +01:00
Richard Henderson
4e7a2c9860 target/arm: Move be test for regime into S1TranslateResult
Hoist this test out of arm_ld[lq]_ptw into S1_ptw_translate.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20 11:27:49 +01:00
Richard Henderson
4a35855682 target/arm: Plumb debug into S1Translate
Before using softmmu page tables for the ptw, plumb down
a debug parameter so that we can query page table entries
from gdbstub without modifying cpu state.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20 11:27:49 +01:00
Richard Henderson
6d2654ffac target/arm: Split out S1Translate type
Consolidate most of the inputs and outputs of S1_ptw_translate
into a single structure.  Plumb this through arm_ld*_ptw from
the controlling get_phys_addr_* routine.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221011031911.2408754-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20 11:27:49 +01:00
Richard Henderson
00b20ee42e target/arm: Restrict tlb flush from vttbr_write to vmid change
Compare only the VMID field when considering whether we need to flush.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221011031911.2408754-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20 11:27:49 +01:00
Richard Henderson
575a94af3c target/arm: Move ARMMMUIdx_Stage2 to a real tlb mmu_idx
We had been marking this ARM_MMU_IDX_NOTLB; move it to a real TLB.
Flush the tlb when invalidating stage 1+2 translations.  Re-use
alle1_tlbmask() for other instances of EL1&0 + Stage2.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221011031911.2408754-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20 11:27:49 +01:00
Richard Henderson
a1ce3084c5 target/arm: Add ARMMMUIdx_Phys_{S,NS}
Not yet used, but add mmu indexes for 1-1 mapping
to physical addresses.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20 11:27:49 +01:00
Richard Henderson
937f224559 target/arm: Use probe_access_full for BTI
Add a field to TARGET_PAGE_ENTRY_EXTRA to hold the guarded bit.
In is_guarded_page, use probe_access_full instead of just guessing
that the tlb entry is still present.  Also handles the FIXME about
executing from device memory.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20 11:27:49 +01:00
Richard Henderson
b8967ddf39 target/arm: Use probe_access_full for MTE
The CPUTLBEntryFull structure now stores the original pte attributes, as
well as the physical address.  Therefore, we no longer need a separate
bit in MemTxAttrs, nor do we need to walk the tree of memory regions.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20 11:27:49 +01:00
Richard Henderson
24d18d5d7e target/arm: Enable TARGET_PAGE_ENTRY_EXTRA
Copy attrs and shareability into the TLB.  This will eventually
be used by S1_ptw_translate to report stage1 translation failures,
and by do_ats_write to fill in PAR_EL1.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20 11:27:49 +01:00
Alex Bennée
947692e708 target/arm: update the cortex-a15 MIDR to latest rev
QEMU doesn't model micro-architectural details, which include most
chip errata. The ARM_ERRATA_798181 workaround in the Linux
kernel (see erratum_a15_798181_init) currently detects QEMU's
cortex-a15 as broken and triggers additional expensive TLB flushes as
a result.

Change the MIDR to report what the latest silicon would (r4p0). We
explicitly set the IMPDEF revidr bits to 0 because we don't need to
set anything other than the silicon revision to indicate these flushes
are not needed. This cuts about 5s from my Debian kernel boot with the
latest 6.0rc1 kernel (29s->24s).
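
For reference, MIDR_EL1 packs implementer[31:24], variant[23:20],
architecture[19:16], part number[15:4] and revision[3:0]; a small
standalone helper shows how an r4p0 Cortex-A15 value is composed (the
exact value used is in the patch itself):

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t midr(uint32_t impl, uint32_t variant, uint32_t arch,
                         uint32_t partnum, uint32_t rev)
    {
        return (impl << 24) | (variant << 20) | (arch << 16) |
               (partnum << 4) | rev;
    }

    int main(void)
    {
        /* ARM (0x41), rNpM -> variant N, revision M, Cortex-A15 part 0xC0F */
        printf("Cortex-A15 r4p0 MIDR = 0x%08x\n",
               midr(0x41, 4, 0xf, 0xc0f, 0)); /* 0x414fc0f0 */
        return 0;
    }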

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Anders Roxell <anders.roxell@linaro.org>
Message-id: 20221010153225.506394-1-alex.bennee@linaro.org
Cc: Arnd Bergmann <arnd@linaro.org>
Cc: Anders Roxell <anders.roxell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Anders Roxell <anders.roxell@linaro.org>
Message-Id: <20220906172257.2776521-1-alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20 11:27:49 +01:00
Stefan Hajnoczi
bb76f8e275 * scsi-disk: support setting CD-ROM block size via device options
* target/i386: Implement MSR_CORE_THREAD_COUNT MSR
 * target/i386: notify VM exit support
 * target/i386: PC-relative translation block support
 * target/i386: support for XSAVE state in signal frames (linux-user)

Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging

* scsi-disk: support setting CD-ROM block size via device options
* target/i386: Implement MSR_CORE_THREAD_COUNT MSR
* target/i386: notify VM exit support
* target/i386: PC-relative translation block support
* target/i386: support for XSAVE state in signal frames (linux-user)

# -----BEGIN PGP SIGNATURE-----
#
# iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmNFKP4UHHBib256aW5p
# QHJlZGhhdC5jb20ACgkQv/vSX3jHroNJnwgAgCcOOxmY4Qem0Gd1L+SJKpEtGMOd
# 4LY7443vT36pMpvqFNSfp5GBjDT1MgTD8BIY28miLMq959LT89LyM9g/H7IKOT82
# uyCsW3jW+6F19EZVkNvzTt+3USn/kaHn50zA4Ss9kvdNZr31b2LYqtglVCznfZwH
# oI1rDhvsXubq8oWvwkqH7IwduK8mw+EB5Yz7AjYQ6eiYjenTrQBObpwQNbb4rlUf
# oRm8dk/YJ2gfI2HQkoznGEbgpngy2tIU1vHNEpIk5NpwXxrulOyui3+sWaG4pH8f
# oAOrSDC23M5A6jBJJAzDJ1q6M677U/kwJypyGQ7IyvyhECXE3tR+lHX1eA==
# =tqeJ
# -----END PGP SIGNATURE-----
# gpg: Signature made Tue 11 Oct 2022 04:27:42 EDT
# gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg:                issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83

* tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (37 commits)
  linux-user: i386/signal: support XSAVE/XRSTOR for signal frame fpstate
  linux-user: i386/signal: support FXSAVE fpstate on 32-bit emulation
  linux-user: i386/signal: move fpstate at the end of the 32-bit frames
  KVM: x86: Implement MSR_CORE_THREAD_COUNT MSR
  i386: kvm: Add support for MSR filtering
  x86: Implement MSR_CORE_THREAD_COUNT MSR
  target/i386: Enable TARGET_TB_PCREL
  target/i386: Inline gen_jmp_im
  target/i386: Add cpu_eip
  target/i386: Create eip_cur_tl
  target/i386: Merge gen_jmp_tb and gen_goto_tb into gen_jmp_rel
  target/i386: Remove MemOp argument to gen_op_j*_ecx
  target/i386: Use gen_jmp_rel for DISAS_TOO_MANY
  target/i386: Use gen_jmp_rel for gen_jcc
  target/i386: Use gen_jmp_rel for loop, repz, jecxz insns
  target/i386: Create gen_jmp_rel
  target/i386: Use DISAS_TOO_MANY to exit after gen_io_start
  target/i386: Create eip_next_*
  target/i386: Truncate values for lcall_real to i32
  target/i386: Introduce DISAS_JUMP
  ...

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2022-10-13 13:55:03 -04:00
Stefan Hajnoczi
cdda364e1d target-arm queue:
* Retry KVM_CREATE_VM call if it fails EINTR
  * allow setting SCR_EL3.EnTP2 when FEAT_SME is implemented
  * docs/nuvoton: Update URL for images
  * refactoring of page table walk code
  * hw/arm/boot: set CPTR_EL3.ESM and SCR_EL3.EnTP2 when booting Linux with EL3
  * Don't allow guest to use unimplemented granule sizes
  * Report FEAT_GTG support

Merge tag 'pull-target-arm-20221010' of https://git.linaro.org/people/pmaydell/qemu-arm into staging

target-arm queue:
 * Retry KVM_CREATE_VM call if it fails EINTR
 * allow setting SCR_EL3.EnTP2 when FEAT_SME is implemented
 * docs/nuvoton: Update URL for images
 * refactoring of page table walk code
 * hw/arm/boot: set CPTR_EL3.ESM and SCR_EL3.EnTP2 when booting Linux with EL3
 * Don't allow guest to use unimplemented granule sizes
 * Report FEAT_GTG support

# -----BEGIN PGP SIGNATURE-----
#
# iQJNBAABCAA3FiEE4aXFk81BneKOgxXPPCUl7RQ2DN4FAmNEK54ZHHBldGVyLm1h
# eWRlbGxAbGluYXJvLm9yZwAKCRA8JSXtFDYM3kzHD/9StYmulAf0iwe1ZNp6NavK
# CioOgZi6XyZl4rS2DrCf6/IO5XRFJP68byZd4Po554r2jcPc149yTuQAn4wb7d5e
# kejMZRQeWsXdxschhoVzDp9fgfzyZBn9X+gbdEZFFPWzOHMyWuu4cTok0dAKQvQY
# tZDLGmKeTv4MRUFJCri0310Sq0T0v/nAX/AyFtpvIr2SBx7DVCWYY02s5R4Yy5+M
# ntDWb0j12r78/bPwI1ll+g19JXUV5Tfh9AsbcYjKv45kdftz/Xc8fBiSiEpxyMrF
# mnVrr3kesZHOYAnOr2K1MnwsF0vU41kRg7kMRqSnu7pZXlI/8tmRyXoPR3c2aDbW
# Q5HWtsA48j2h0CJ0ESzl5SQnl3TSPa94m/HmpRSBFrYkU727QgnWDhUmBb4n54xs
# 9iBJDhcKGZLq68CB2+j6ENdRNTndolr14OwwEns0lbkoiCKUOQY3AigtZJQGRBGM
# J5r3ED7jfTWpvP6vpp5X484fK6KVprSMxsRFDkmiwhbb3J+WtKLxbSlgsWIrkZ7s
# +JgTGfGB8sD9hJVuFZYyPQb/XWP8Bb8jfgsLsTu1vW9Xs1ASrLimFYdRO3hhwSg3
# c5yubz6Vu9GB/JYh7hGprlMD5Yv48AA3if70hOu2d4P8A4OitavT7o+4Thwqjhds
# cSV1RsBJ8ha6L3CziZaKrQ==
# =s+1f
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 10 Oct 2022 10:26:38 EDT
# gpg:                using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg:                issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [full]
# gpg:                 aka "Peter Maydell <pmaydell@gmail.com>" [full]
# gpg:                 aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [full]
# gpg:                 aka "Peter Maydell <peter@archaic.org.uk>" [unknown]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83  15CF 3C25 25ED 1436 0CDE

* tag 'pull-target-arm-20221010' of https://git.linaro.org/people/pmaydell/qemu-arm: (28 commits)
  docs/system/arm/emulation.rst: Report FEAT_GTG support
  target/arm: Use ARMGranuleSize in ARMVAParameters
  target/arm: Don't allow guest to use unimplemented granule sizes
  hw/arm/boot: set CPTR_EL3.ESM and SCR_EL3.EnTP2 when booting Linux with EL3
  target/arm: Use tlb_set_page_full
  target/arm: Fix cacheattr in get_phys_addr_disabled
  target/arm: Split out get_phys_addr_disabled
  target/arm: Fix ATS12NSO* from S PL1
  target/arm: Pass HCR to attribute subroutines.
  target/arm: Remove env argument from combined_attrs_fwb
  target/arm: Hoist read of *is_secure in S1_ptw_translate
  target/arm: Introduce arm_hcr_el2_eff_secstate
  target/arm: Drop secure check for HCR.TGE vs SCTLR_EL1.M
  target/arm: Reorg regime_translation_disabled
  target/arm: Fold secure and non-secure a-profile mmu indexes
  target/arm: Add is_secure parameter to do_ats_write
  target/arm: Merge regime_is_secure into get_phys_addr
  target/arm: Add TBFLAG_M32.SECURE
  target/arm: Add is_secure parameter to v7m_read_half_insn
  target/arm: Split out get_phys_addr_with_secure
  ...

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2022-10-12 15:59:10 -04:00
Peter Maydell
3c003f7029 target/arm: Use ARMGranuleSize in ARMVAParameters
Now that we have an enum for the granule size, use it in the
ARMVAParameters struct instead of the using16k/using64k bools.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221003162315.2833797-3-peter.maydell@linaro.org
2022-10-10 14:52:25 +01:00
Peter Maydell
104f703d93 target/arm: Don't allow guest to use unimplemented granule sizes
Arm CPUs support some subset of the granule (page) sizes 4K, 16K and
64K.  The guest selects the one it wants using bits in the TCR_ELx
registers.  If it tries to program these registers with a value that
is either reserved or which requests a size that the CPU does not
implement, the architecture requires that the CPU behaves as if the
field was programmed to some size that has been implemented.
Currently we don't implement this, and instead let the guest use any
granule size, even if the CPU ID register fields say it isn't
present.

Make aa64_va_parameters() check against the supported granule size
and force use of a different one if it is not implemented.

(A subsequent commit will make ARMVAParameters use the new enum
rather than the current pair of using16k/using64k bools.)
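
A standalone sketch of the sanitizing step; the granule enum and the
"is it implemented" booleans stand in for the real ID-register checks,
and the fallback order is illustrative:

    #include <stdbool.h>

    typedef enum { Gran4K, Gran16K, Gran64K } Granule;

    /* If the TCR-selected granule isn't implemented, behave as if some
     * implemented granule had been selected instead. */
    static Granule sanitize_granule(Granule requested,
                                    bool have4k, bool have16k, bool have64k)
    {
        switch (requested) {
        case Gran4K:
            if (have4k) return requested;
            break;
        case Gran16K:
            if (have16k) return requested;
            break;
        case Gran64K:
            if (have64k) return requested;
            break;
        }
        /* Fall back to any granule the CPU does implement. */
        if (have4k)  return Gran4K;
        if (have16k) return Gran16K;
        return Gran64K;
    }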

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221003162315.2833797-2-peter.maydell@linaro.org
2022-10-10 14:52:25 +01:00
Richard Henderson
7fa7ea8f48 target/arm: Use tlb_set_page_full
Adjust GetPhysAddrResult to fill in CPUTLBEntryFull,
so that it may be passed directly to tlb_set_page_full.

The change is large, but mostly mechanical.  The major
non-mechanical change is page_size -> lg_page_size.
Most of the time this is obvious, and is related to
TARGET_PAGE_BITS.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10 14:52:25 +01:00
Richard Henderson
5b74f9b4ed target/arm: Fix cacheattr in get_phys_addr_disabled
Do not apply memattr or shareability for Stage2 translations.
Make sure to apply HCR_{DC,DCT} only to Regime_EL10, per the
pseudocode in AArch64.S1DisabledOutput.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10 14:52:25 +01:00
Richard Henderson
448e42fdc1 target/arm: Split out get_phys_addr_disabled
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10 14:52:25 +01:00
Richard Henderson
2189c79858 target/arm: Fix ATS12NSO* from S PL1
Use arm_hcr_el2_eff_secstate instead of arm_hcr_el2_eff, so
that we use is_secure instead of the current security state.
These AT* operations have been broken since arm_hcr_el2_eff
gained a check for "el2 enabled" for Secure EL2.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10 14:52:25 +01:00
Richard Henderson
ac76c2e508 target/arm: Pass HCR to attribute subroutines.
These subroutines did not need ENV for anything except
retrieving the effective value of HCR anyway.

We have computed the effective value of HCR in the callers,
and this will be especially important for interpreting HCR
in a non-current security state.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10 14:52:25 +01:00
Richard Henderson
72cef09caa target/arm: Remove env argument from combined_attrs_fwb
This value is unused.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10 14:52:25 +01:00
Richard Henderson
ab1f78859d target/arm: Hoist read of *is_secure in S1_ptw_translate
Rename the argument to is_secure_ptr, and introduce a
local variable is_secure with the value.  We only write
back to the pointer toward the end of the function.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10 14:52:25 +01:00
Richard Henderson
b74c04431d target/arm: Introduce arm_hcr_el2_eff_secstate
For page walking, we may require HCR for a security state
that is not "current".

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10 14:52:25 +01:00
Richard Henderson
fdf1293339 target/arm: Drop secure check for HCR.TGE vs SCTLR_EL1.M
The effect of TGE does not only apply to non-secure state,
now that Secure EL2 exists.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10 14:52:24 +01:00
Richard Henderson
3b2af99313 target/arm: Reorg regime_translation_disabled
Use a switch on mmu_idx for the a-profile indexes, instead of
three different if's vs regime_el and arm_mmu_idx_is_stage1_of_2.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10 14:52:24 +01:00
Richard Henderson
d902ae7558 target/arm: Fold secure and non-secure a-profile mmu indexes
For a-profile aarch64, which does not bank system registers, it takes
quite a lot of code to switch between security states.  In the process,
registers such as TCR_EL{1,2} must be swapped, which in itself requires
the flushing of softmmu tlbs.  Therefore it doesn't buy us anything to
separate tlbs by security state.

Retain the distinction between Stage2 and Stage2_S.

This will be important as we implement FEAT_RME, and do not wish to
add a third set of mmu indexes for Realm state.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10 14:52:24 +01:00
Richard Henderson
7aee3cb956 target/arm: Add is_secure parameter to do_ats_write
Use get_phys_addr_with_secure directly.  For a-profile, this is the
one place where the value of is_secure may not equal arm_is_secure(env).

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10 14:52:24 +01:00
Richard Henderson
03bea66e7f target/arm: Merge regime_is_secure into get_phys_addr
This is the last use of regime_is_secure; remove it
entirely before changing the layout of ARMMMUIdx.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10 14:52:24 +01:00
Richard Henderson
a393dee019 target/arm: Add TBFLAG_M32.SECURE
Remove the use of regime_is_secure from arm_tr_init_disas_context.
Instead, provide the value of v8m_secure directly from tb_flags.
Rather than use regime_is_secure, use the env->v7m.secure directly,
as per arm_mmu_idx_el.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10 14:52:24 +01:00
Richard Henderson
9cd5f1710e target/arm: Add is_secure parameter to v7m_read_half_insn
Remove the use of regime_is_secure from v7m_read_half_insn, using
the new parameter instead.

As it happens, both callers pass true, propagated from the argument
to arm_v7m_mmu_idx_for_secstate which created the mmu_idx argument,
but that is a detail of v7m_handle_execute_nsc we need not expose
to the callee.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10 14:52:24 +01:00
Richard Henderson
def8aa5b80 target/arm: Split out get_phys_addr_with_secure
Retain the existing get_phys_addr interface using the security
state derived from mmu_idx.  Move the kerneldoc comments to the
header file where they belong.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10 14:52:24 +01:00
Richard Henderson
7e80c0a4ff target/arm: Add is_secure parameter to regime_translation_disabled
Remove the use of regime_is_secure from regime_translation_disabled,
using the new parameter instead.

This fixes a bug in S1_ptw_translate and get_phys_addr where we had
passed ARMMMUIdx_Stage2 and not ARMMMUIdx_Stage2_S to determine if
Stage2 is disabled, affecting FEAT_SEL2.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10 14:52:24 +01:00
Richard Henderson
bf25b7b079 target/arm: Fix S2 disabled check in S1_ptw_translate
Pass the correct stage2 mmu_idx to regime_translation_disabled,
which we computed afterward.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10 14:52:24 +01:00
Richard Henderson
c23f08a56c target/arm: Add is_secure parameter to get_phys_addr_lpae
Remove the use of regime_is_secure from get_phys_addr_lpae,
using the new parameter instead.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10 14:52:24 +01:00
Richard Henderson
9b5ba97ac7 target/arm: Make the final stage1+2 write to secure be unconditional
While the stage2 call to get_phys_addr_lpae should never set
attrs.secure when given a non-secure input, it's just as easy
to make the final update to attrs.secure be unconditional and
false in the case of non-secure input.

Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221007152159.1414065-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10 14:52:24 +01:00
Richard Henderson
c7637be307 target/arm: Split s2walk_secure from ipa_secure in get_phys_addr
The starting security state comes with the translation regime,
not the current state of arm_is_secure_below_el3().

Create a new local variable, s2walk_secure, which does not need
to be written back to result->attrs.secure -- we compute that
value later, after the S2 walk is complete.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10 14:52:24 +01:00
Jerome Forissier
06f2adccfa target/arm: allow setting SCR_EL3.EnTP2 when FEAT_SME is implemented
Updates write_scr() to allow setting SCR_EL3.EnTP2 when FEAT_SME is
implemented. SCR_EL3 being a 64-bit register, valid_mask is changed
to uint64_t and the SCR_* constants in target/arm/cpu.h are extended
to 64-bit so that masking and bitwise not (~) behave as expected.
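
A short standalone demonstration of why the constant width matters when
masking a 64-bit register; the EnTP2 bit position is taken as bit 41
here purely for illustration:

    #include <stdint.h>
    #include <stdio.h>

    #define SCR_ENTP2 (1ULL << 41)   /* FEAT_SME's EnTP2 control bit */

    int main(void)
    {
        uint64_t value = SCR_ENTP2;

        uint32_t mask32 = (uint32_t)SCR_ENTP2;   /* truncates to 0 */
        uint64_t mask64 = SCR_ENTP2;

        /* With a 32-bit valid_mask the bit is silently discarded. */
        printf("32-bit mask: %#llx\n", (unsigned long long)(value & mask32));
        /* With a 64-bit valid_mask (and 64-bit constants) it survives. */
        printf("64-bit mask: %#llx\n", (unsigned long long)(value & mask64));
        return 0;
    }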

This enables booting Linux with Trusted Firmware-A at EL3 with
"-M virt,secure=on -cpu max".

Cc: qemu-stable@nongnu.org
Fixes: 78cb977666 ("target/arm: Enable SME for -cpu max")
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221004072354.27037-1-jerome.forissier@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-10 14:52:23 +01:00
Peter Maydell
bbde13cd14 target/arm/kvm: Retry KVM_CREATE_VM call if it fails EINTR
Occasionally the KVM_CREATE_VM ioctl can return EINTR, even though
there is no pending signal to be taken. In commit 94ccff1338
we added a retry-on-EINTR loop to the KVM_CREATE_VM call in the
generic KVM code. Adopt the same approach for the use of the
ioctl in the Arm-specific KVM code (where we use it to create a
scratch VM for probing for various things).
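
The retry idiom, written here against the plain ioctl() interface rather
than QEMU's KVM wrappers:

    #include <errno.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Create a scratch VM, retrying if the ioctl is interrupted by a
     * signal (KVM can return EINTR here even with no signal pending). */
    static int create_scratch_vm(int kvm_fd)
    {
        int vm_fd;

        do {
            vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, 0);
        } while (vm_fd < 0 && errno == EINTR);

        return vm_fd;   /* a VM fd on success, -1 with errno set on failure */
    }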

For more information, see the mailing list thread:
https://lore.kernel.org/qemu-devel/8735e0s1zw.wl-maz@kernel.org/

Reported-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Vitaly Chikunov <vt@altlinux.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Message-id: 20220930113824.1933293-1-peter.maydell@linaro.org
2022-10-10 14:52:23 +01:00
Paolo Bonzini
3dba0a335c kvm: allow target-specific accelerator properties
Several hypervisor capabilities in KVM are target-specific.  When exposed
to QEMU users as accelerator properties (i.e. -accel kvm,prop=value), they
should not be available for all targets.

Add a hook for targets to add their own properties to -accel kvm; for
now no such property is defined.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20220929072014.20705-3-chenyi.qiang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-10 09:23:16 +02:00
Janosch Frank
1af0006ab9 dump: Replace opaque DumpState pointer with a typed one
It's always better to convey the type of a pointer if at all
possible. So let's add the DumpState typedef to typedefs.h and move
the dump note functions from the opaque pointers to DumpState
pointers.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
CC: Peter Maydell <peter.maydell@linaro.org>
CC: Cédric Le Goater <clg@kaod.org>
CC: Daniel Henrique Barboza <danielhb413@gmail.com>
CC: David Gibson <david@gibson.dropbear.id.au>
CC: Greg Kurz <groug@kaod.org>
CC: Palmer Dabbelt <palmer@dabbelt.com>
CC: Alistair Francis <alistair.francis@wdc.com>
CC: Bin Meng <bin.meng@windriver.com>
CC: Cornelia Huck <cohuck@redhat.com>
CC: Thomas Huth <thuth@redhat.com>
CC: Richard Henderson <richard.henderson@linaro.org>
CC: David Hildenbrand <david@redhat.com>
Acked-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20220811121111.9878-2-frankja@linux.ibm.com>
2022-10-06 19:30:43 +04:00
Richard Henderson
fbf59aad17 accel/tcg: Introduce tb_pc and log_pc
The availability of tb->pc will shortly be conditional.
Introduce accessor functions to minimize ifdefs.

Pass around a known pc to places like tcg_gen_code,
where the caller must already have the value.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-10-04 12:13:12 -07:00
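
A small sketch of the accessor idea (types simplified; this is not the
actual QEMU declaration): callers go through tb_pc() rather than
reading tb->pc directly, so making the field conditional later only
needs an #ifdef in one place.

    #include <stdint.h>

    typedef uint64_t target_ulong;            /* simplification */

    typedef struct TranslationBlock {
        target_ulong pc;                      /* may become conditional */
        /* ... */
    } TranslationBlock;

    static inline target_ulong tb_pc(const TranslationBlock *tb)
    {
        return tb->pc;
    }
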
Richard Henderson
e4fdf9df5b hw/core: Add CPUClass.get_pc
Populate this new method for all targets.  Always match
the result that would be given by cpu_get_tb_cpu_state,
as we will want these values to correspond in the logs.

Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> (target/sparc)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
Cc: Eduardo Habkost <eduardo@habkost.net> (supporter:Machine core)
Cc: Marcel Apfelbaum <marcel.apfelbaum@gmail.com> (supporter:Machine core)
Cc: "Philippe Mathieu-Daudé" <f4bug@amsat.org> (reviewer:Machine core)
Cc: Yanan Wang <wangyanan55@huawei.com> (reviewer:Machine core)
Cc: Michael Rolnik <mrolnik@gmail.com> (maintainer:AVR TCG CPUs)
Cc: "Edgar E. Iglesias" <edgar.iglesias@gmail.com> (maintainer:CRIS TCG CPUs)
Cc: Taylor Simpson <tsimpson@quicinc.com> (supporter:Hexagon TCG CPUs)
Cc: Song Gao <gaosong@loongson.cn> (maintainer:LoongArch TCG CPUs)
Cc: Xiaojuan Yang <yangxiaojuan@loongson.cn> (maintainer:LoongArch TCG CPUs)
Cc: Laurent Vivier <laurent@vivier.eu> (maintainer:M68K TCG CPUs)
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> (reviewer:MIPS TCG CPUs)
Cc: Aleksandar Rikalo <aleksandar.rikalo@syrmia.com> (reviewer:MIPS TCG CPUs)
Cc: Chris Wulff <crwulff@gmail.com> (maintainer:NiosII TCG CPUs)
Cc: Marek Vasut <marex@denx.de> (maintainer:NiosII TCG CPUs)
Cc: Stafford Horne <shorne@gmail.com> (odd fixer:OpenRISC TCG CPUs)
Cc: Yoshinori Sato <ysato@users.sourceforge.jp> (reviewer:RENESAS RX CPUs)
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> (maintainer:SPARC TCG CPUs)
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de> (maintainer:TriCore TCG CPUs)
Cc: Max Filippov <jcmvbkbc@gmail.com> (maintainer:Xtensa TCG CPUs)
Cc: qemu-arm@nongnu.org (open list:ARM TCG CPUs)
Cc: qemu-ppc@nongnu.org (open list:PowerPC TCG CPUs)
Cc: qemu-riscv@nongnu.org (open list:RISC-V TCG CPUs)
Cc: qemu-s390x@nongnu.org (open list:S390 TCG CPUs)
2022-10-04 12:13:12 -07:00
Richard Henderson
25d3ec5831 accel/tcg: Rename CPUIOTLBEntry to CPUTLBEntryFull
This structure will shortly contain more than just
data for accessing MMIO.  Rename the 'addr' member
to 'xlat_section' to more clearly indicate its purpose.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-10-03 20:53:30 -07:00
Jerome Forissier
beeec926d2 target/arm: mark SP_EL1 with ARM_CP_EL3_NO_EL2_KEEP
SP_EL1 must be kept when EL3 is present but EL2 is not. Therefore mark
it with ARM_CP_EL3_NO_EL2_KEEP.

Cc: qemu-stable@nongnu.org
Fixes: 696ba37718 ("target/arm: Handle cpreg registration for missing EL")
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220927120058.670901-1-jerome.forissier@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-09-29 18:01:09 +01:00
Peter Maydell
042e85d14c target/arm: Rearrange cpu64.c so all the CPU initfns are together
cpu64.c has ended up in a slightly odd order -- it starts with the
initfns for most of the models-real-hardware CPUs; after that comes a
bunch of support code for SVE, SME, pauth and LPA2 properties.  Then
come the initfns for the 'host' and 'max' CPU types, and then after
that one more models-real-hardware CPU initfn, for a64fx.  (This
ordering is partly historical and partly required because a64fx needs
the SVE properties.)

Reorder the file into:
 * CPU property support functions
 * initfns for real hardware CPUs
 * initfns for host and max
 * class boilerplate

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-29 17:35:05 +01:00
Peter Maydell
f190bd1da1 target/arm: Update SDCR_VALID_MASK to include SCCD
Our SDCR_VALID_MASK doesn't include all of the bits which are defined
by the current architecture.  In particular in commit 0b42f4fab9 we
forgot to add SCCD, which meant that an AArch32 guest couldn't
actually use the SCCD bit to disable counting in Secure state.

Add all the currently defined bits; we don't implement all of them,
but this makes them read-as-written, which is architecturally
valid and matches how we currently handle most of the others in the
mask.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220923123412.1214041-4-peter.maydell@linaro.org
2022-09-29 17:31:52 +01:00
Peter Maydell
80d2b43b2f target/arm: Make writes to MDCR_EL3 use PMU start/finish calls
In commit 01765386a8 we fixed a bug where we weren't correctly
bracketing changes to some registers with pmu_op_start() and
pmu_op_finish() calls for changes which affect whether the PMU
counters might be enabled.  However, we missed the case of writes to
the AArch64 MDCR_EL3 register, because (unlike its AArch32
counterpart) they are currently done directly to the CPU state struct
without going through the sdcr_write() function.

Give MDCR_EL3 a writefn which handles the PMU start/finish calls.
The SDCR writefn then simplifies to "call the MDCR_EL3 writefn after
masking off the bits which don't exist in the AArch32 register".

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220923123412.1214041-3-peter.maydell@linaro.org
2022-09-29 17:31:52 +01:00
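
A hedged sketch of the shape described above, with stub types and a
placeholder SDCR_VALID_MASK (not the actual QEMU functions):

    #include <stdint.h>

    typedef struct CPUARMState { uint64_t mdcr_el3; } CPUARMState;

    static void pmu_op_start(CPUARMState *env)  { /* snapshot counters */ }
    static void pmu_op_finish(CPUARMState *env) { /* rearm and update  */ }

    #define SDCR_VALID_MASK 0x0000ffffu        /* placeholder value */

    static void mdcr_el3_write(CPUARMState *env, uint64_t value)
    {
        /* Bracket the change: it may enable or disable PMU counters. */
        pmu_op_start(env);
        env->mdcr_el3 = value;
        pmu_op_finish(env);
    }

    static void sdcr_write(CPUARMState *env, uint64_t value)
    {
        /* AArch32 SDCR: drop the bits that don't exist there. */
        mdcr_el3_write(env, value & SDCR_VALID_MASK);
    }
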
Peter Maydell
7f4fbfb5dc target/arm: Mark registers which call pmu_op_start() as ARM_CP_IO
In commit 01765386a8 we made some system register write functions
call pmu_op_start()/pmu_op_finish(). This means that they now touch
timers, so for icount to work these registers must have the ARM_CP_IO
flag set.

This fixes a bug where when icount is enabled a guest that touches
MDCR_EL3, MDCR_EL2, PMCNTENSET_EL0 or PMCNTENCLR_EL0 would cause
QEMU to print an error message and exit, for example:

[    2.495971] TCP: Hash tables configured (established 1024 bind 1024)
[    2.496213] UDP hash table entries: 256 (order: 1, 8192 bytes)
[    2.496386] UDP-Lite hash table entries: 256 (order: 1, 8192 bytes)
[    2.496917] NET: Registered protocol family 1
qemu-system-aarch64: Bad icount read

Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220923123412.1214041-2-peter.maydell@linaro.org
2022-09-29 17:31:52 +01:00
Richard Henderson
a5b5092f66 target/arm: Add is_secure parameter to get_phys_addr_pmsav5
Remove the use of regime_is_secure from get_phys_addr_pmsav5.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-21-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-09-22 16:38:28 +01:00
Richard Henderson
957a0bb751 target/arm: Add secure parameter to get_phys_addr_pmsav7
Remove the use of regime_is_secure from get_phys_addr_pmsav7,
using the new parameter instead.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-19-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-09-22 16:38:28 +01:00
Richard Henderson
1a469cf78d target/arm: Add is_secure parameter to pmsav7_use_background_region
Remove the use of regime_is_secure from pmsav7_use_background_region,
using the new parameter instead.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-17-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-09-22 16:38:28 +01:00
Richard Henderson
be0ca9485d target/arm: Add secure parameter to get_phys_addr_pmsav8
Remove the use of regime_is_secure from get_phys_addr_pmsav8.
Since we already had a local variable named secure, use that.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-16-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-09-22 16:38:27 +01:00
Richard Henderson
71e73beb51 target/arm: Add is_secure parameter to get_phys_addr_v6
Remove the use of regime_is_secure from get_phys_addr_v6,
passing the new parameter to the lookup instead.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-15-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-09-22 16:38:27 +01:00
Richard Henderson
b29c85d50c target/arm: Add is_secure parameter to get_phys_addr_v5
Remove the use of regime_is_secure from get_phys_addr_v5,
passing the new parameter to the lookup instead.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
[PMM: Folded in definition of local is_secure in get_phys_addr(),
 since I dropped the earlier patch that would have provided it]
Message-id: 20220822152741.1617527-14-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-09-22 16:38:27 +01:00
Richard Henderson
e9fb709041 target/arm: Add secure parameter to pmsav8_mpu_lookup
Remove the use of regime_is_secure from pmsav8_mpu_lookup,
passing the new parameter to the lookup instead.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-13-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-09-22 16:38:27 +01:00
Richard Henderson
dbf2a71ad6 target/arm: Add is_secure parameter to v8m_security_lookup
Remove the use of regime_is_secure from v8m_security_lookup,
passing the new parameter to the lookup instead.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-12-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-09-22 16:38:27 +01:00
Richard Henderson
652c750ee5 target/arm: Remove is_subpage argument to pmsav8_mpu_lookup
This can be made redundant with result->page_size, by moving the basic
set of page_size from get_phys_addr_pmsav8.  We still need to overwrite
page_size when v8m_security_lookup signals a subpage.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-11-richard.henderson@linaro.org
[PMM: Update a comment that used to refer to is_subpage]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-09-22 16:38:27 +01:00
Richard Henderson
d2c92e5856 target/arm: Use GetPhysAddrResult in pmsav8_mpu_lookup
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-09-22 16:38:27 +01:00
Richard Henderson
5272b23e2e target/arm: Use GetPhysAddrResult in get_phys_addr_pmsav8
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-9-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-09-22 16:38:27 +01:00
Richard Henderson
e59367e2ef target/arm: Use GetPhysAddrResult in get_phys_addr_pmsav7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-09-22 16:38:27 +01:00
Richard Henderson
b7b9b579cf target/arm: Use GetPhysAddrResult in get_phys_addr_pmsav5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-09-22 16:38:27 +01:00
Richard Henderson
51d98ce2cb target/arm: Use GetPhysAddrResult in get_phys_addr_v5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-09-22 16:38:27 +01:00
Richard Henderson
60a6a180b1 target/arm: Use GetPhysAddrResult in get_phys_addr_v6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-09-22 16:38:27 +01:00
Richard Henderson
03ee9bbe70 target/arm: Use GetPhysAddrResult in get_phys_addr_lpae
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-09-22 16:38:27 +01:00
Richard Henderson
de05a709ec target/arm: Create GetPhysAddrResult
Combine 5 output pointer arguments from get_phys_addr
into a single struct.  Adjust all callers.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-09-22 16:38:27 +01:00
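
A sketch of the refactor's shape -- several out-parameters folded into
one result struct (field names here are approximations, not the exact
QEMU definition):

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint64_t hwaddr;

    typedef struct GetPhysAddrResult {
        hwaddr   phys_addr;
        int      prot;
        uint64_t page_size;
        bool     secure;
        int      fault_info;      /* stand-in for the attrs/fault data */
    } GetPhysAddrResult;

    /*
     * Before: get_phys_addr(env, addr, ..., &phys, &prot,
     *                       &page_size, &attrs, &fi);
     * After:  get_phys_addr(env, addr, ..., &result);
     *         then read result.phys_addr, result.prot, ...
     */
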
Clément Chigot
3a661024cc target/arm: Fix alignment for VLD4.32
When requested, the alignment for VLD4.32 is 8 and not 16.

See ARM documentation about VLD4 encoding:
    ebytes = 1 << UInt(size);
    if size == '10' then
        alignment = if a == '0' then 1 else 8;
    else
        alignment = if a == '0' then 1 else 4*ebytes;

Signed-off-by: Clément Chigot <chigot@adacore.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220914105058.2787404-1-chigot@adacore.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-09-22 16:38:27 +01:00
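
A worked C rendering of the quoted pseudocode, illustrative only,
showing that for 32-bit elements (size == '10') the requested
alignment is 8 bytes rather than 16:

    #include <stdio.h>

    static int vld4_alignment(unsigned size, unsigned a)
    {
        unsigned ebytes = 1u << size;

        if (size == 2) {                 /* size == '10': 32-bit elements */
            return a == 0 ? 1 : 8;
        }
        return a == 0 ? 1 : 4 * ebytes;  /* 4 or 8 for 8/16-bit elements */
    }

    int main(void)
    {
        printf("%d\n", vld4_alignment(2, 1));   /* prints 8, not 16 */
        return 0;
    }
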
Stefan Hajnoczi
123a32f9b3 Convert m68k to semihosting/syscalls.h.
Convert nios2 to semihosting/syscalls.h.
 Allow optional use of semihosting from userspace.
 -----BEGIN PGP SIGNATURE-----
 
 iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAmMh1W8dHHJpY2hhcmQu
 aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV8ptggAimuNN6IiD19Huu5F
 PMjzDqFPvWFOf82O16WTBM1xN0lwVH8+02PYRL3AhOIw9ZTgxezOo9/KXZpr8a8Z
 gocr4Ge/J7zHzHahYuqcyOqqkur2dM4lFiK9rfDD6vdNBMbi0kQZVuaNlQK6rV6Z
 2LHEwKKh64MXJVfwGzK7OLMv4pu0wpWcuCTH2/6U4E1325SOKmEos1VzIePxY1bw
 +AMNnairGEdBX1b3JlzZfrLSaOapJcgl0HZdrg6Mflm6ttTuuykGGtjkWBfcu3Nw
 utNI1zmUYfD/iJbnbsCNpZSLv6LVOQ2l5S6dOWV+JJ1HukVTZu3DoyfTr8t95kwK
 UuUoqA==
 =W7Yh
 -----END PGP SIGNATURE-----

Merge tag 'pull-semi-20220914' of https://gitlab.com/rth7680/qemu into staging

Convert m68k to semihosting/syscalls.h.
Convert nios2 to semihosting/syscalls.h.
Allow optional use of semihosting from userspace.

# -----BEGIN PGP SIGNATURE-----
#
# iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAmMh1W8dHHJpY2hhcmQu
# aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV8ptggAimuNN6IiD19Huu5F
# PMjzDqFPvWFOf82O16WTBM1xN0lwVH8+02PYRL3AhOIw9ZTgxezOo9/KXZpr8a8Z
# gocr4Ge/J7zHzHahYuqcyOqqkur2dM4lFiK9rfDD6vdNBMbi0kQZVuaNlQK6rV6Z
# 2LHEwKKh64MXJVfwGzK7OLMv4pu0wpWcuCTH2/6U4E1325SOKmEos1VzIePxY1bw
# +AMNnairGEdBX1b3JlzZfrLSaOapJcgl0HZdrg6Mflm6ttTuuykGGtjkWBfcu3Nw
# utNI1zmUYfD/iJbnbsCNpZSLv6LVOQ2l5S6dOWV+JJ1HukVTZu3DoyfTr8t95kwK
# UuUoqA==
# =W7Yh
# -----END PGP SIGNATURE-----
# gpg: Signature made Wed 14 Sep 2022 09:21:51 EDT
# gpg:                using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg:                issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A  05C0 64DF 38E8 AF7E 215F

* tag 'pull-semi-20220914' of https://gitlab.com/rth7680/qemu:
  target/riscv: Honour -semihosting-config userspace=on and enable=on
  target/xtensa: Honour -semihosting-config userspace=on
  target/nios2: Honour -semihosting-config userspace=on
  target/mips: Honour -semihosting-config userspace=on
  target/m68k: Honour -semihosting-config userspace=on
  target/arm: Honour -semihosting-config userspace=on
  semihosting: Allow optional use of semihosting from userspace
  target/m68k: Convert semihosting errno to gdb remote errno
  target/m68k: Use semihosting/syscalls.h
  target/nios2: Convert semihosting errno to gdb remote errno
  target/nios2: Use semihosting/syscalls.h

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2022-09-17 10:30:34 -04:00
Peter Maydell
e31e0f5661 target/arm: Report FEAT_PMUv3p5 for TCG '-cpu max'
Update the ID registers for TCG's '-cpu max' to report a FEAT_PMUv3p5
compliant PMU.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-11-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
Peter Maydell
47b385dae8 target/arm: Support 64-bit event counters for FEAT_PMUv3p5
With FEAT_PMUv3p5, the event counters are now 64 bit, rather than 32
bit.  (Previously, only the cycle counter could be 64 bit, and other
event counters were always 32 bits).  For any given event counter,
whether the overflow event is noted for overflow from bit 31 or from
bit 63 is controlled by a combination of PMCR.LP, MDCR_EL2.HLP and
MDCR_EL2.HPMN.

Implement the 64-bit event counter handling.  We choose to make our
counters always 64 bits, and mask out the top 32 bits on read or
write of PMXEVCNTR for CPUs which don't have FEAT_PMUv3p5.

(Note that the changes to pmenvcntr_op_start() and
pmenvcntr_op_finish() bring their logic closer into line with that of
pmccntr_op_start() and pmccntr_op_finish(), which already had to cope
with the overflow being either at 32 or 64 bits.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-10-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
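
A hedged sketch of the masking described above (names and layout are
illustrative, not the QEMU implementation): the counter is always
stored as 64 bits, and the top half is hidden at the register
interface when FEAT_PMUv3p5 is absent.

    #include <stdbool.h>
    #include <stdint.h>

    static uint64_t evcntr;              /* backing store, always 64-bit */

    static uint64_t pmevcntr_read(bool have_pmuv3p5)
    {
        return have_pmuv3p5 ? evcntr : (evcntr & 0xffffffffull);
    }

    static void pmevcntr_write(bool have_pmuv3p5, uint64_t value)
    {
        /* Without FEAT_PMUv3p5 only the low 32 bits are visible;
         * this sketch simply drops the top half on write. */
        evcntr = have_pmuv3p5 ? value : (value & 0xffffffffull);
    }
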
Peter Maydell
0b42f4fab9 target/arm: Implement FEAT_PMUv3p5 cycle counter disable bits
FEAT_PMUv3p5 introduces new bits which disable the cycle
counter from counting:
 * MDCR_EL2.HCCD disables the counter when in EL2
 * MDCR_EL3.SCCD disables the counter when Secure

Add the code to support these bits.

(Note that there is a third documented counter-disable
bit, MDCR_EL3.MCCD, which disables the counter when in
EL3. This is not present until FEAT_PMUv3p7, so is
out of scope for now.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-9-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
Peter Maydell
a793bcd027 target/arm: Rename pmu_8_n feature test functions
Our feature test functions that check the PMU version are named
isar_feature_{aa32,aa64,any}_pmu_8_{1,4}.  This doesn't match the
current Arm ARM official feature names, which are FEAT_PMUv3p1 and
FEAT_PMUv3p4.  Rename these functions to _pmuv3p1 and _pmuv3p4.

This commit was created with:
  sed -i -e 's/pmu_8_/pmuv3p/g' target/arm/*.[ch]

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-8-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
Peter Maydell
f1dd2506ee target/arm: Detect overflow when calculating next PMU interrupt
In pmccntr_op_finish() and pmevcntr_op_finish() we calculate the next
point at which we will get an overflow and need to fire the PMU
interrupt or set the overflow flag.  We do this by calculating the
number of nanoseconds to the overflow event and then adding it to
qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL).  However, we don't check
whether that signed addition overflows, which can happen if the next
PMU interrupt would happen massively far in the future (250 years or
more).

Since QEMU assumes that "when the QEMU_CLOCK_VIRTUAL rolls over" is
"never", the sensible behaviour in this situation is simply to not
try to set the timer if it would be beyond that point.  Detect the
overflow, and skip setting the timer in that case.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-7-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
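
A hedged sketch of the overflow check (helper name and shape are
illustrative, not the actual QEMU code): assuming a non-negative
delta, only arm the timer when the deadline fits in the signed 64-bit
nanosecond clock.

    #include <stdbool.h>
    #include <stdint.h>

    /* Returns false if now + delta would overflow int64_t. */
    static bool compute_deadline(int64_t now_ns, int64_t delta_ns,
                                 int64_t *deadline_ns)
    {
        if (delta_ns > INT64_MAX - now_ns) {
            return false;            /* "never": skip arming the timer */
        }
        *deadline_ns = now_ns + delta_ns;
        return true;
    }
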
Peter Maydell
872d20343d target/arm: Honour MDCR_EL2.HPMD in Secure EL2
The logic in pmu_counter_enabled() for handling the 'prohibit event
counting' bits MDCR_EL2.HPMD and MDCR_EL3.SPME is written in a way
that assumes that EL2 is never Secure.  This used to be true, but the
architecture now permits Secure EL2, and QEMU can emulate this.

Refactor the prohibit logic so that we effectively OR together
the various prohibit bits when they apply, rather than trying to
construct an if-else ladder where any particular state of the CPU
ends up in exactly one branch of the ladder.

This fixes the Secure EL2 case and also is a better structure for
adding the FEAT_PMUv3p5 bits MDCR_EL2.HCCD and MDCR_EL3.SCCD.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-6-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
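
A hedged sketch of the restructured check (names are illustrative and
only two of the controls are shown): each prohibit condition is OR-ed
in independently instead of being placed on one branch of an if/else
ladder.

    #include <stdbool.h>

    static bool counting_prohibited(int current_el, bool is_secure,
                                    bool mdcr_el2_hpmd, bool mdcr_el3_spme)
    {
        bool prohibited = false;

        prohibited |= (current_el == 2) && mdcr_el2_hpmd;
        prohibited |= is_secure && !mdcr_el3_spme;

        return prohibited;
    }
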
Peter Maydell
b57aa7bdc3 target/arm: Ignore PMCR.D when PMCR.LC is set
The architecture requires that if PMCR.LC is set (for a 64-bit cycle
counter) then PMCR.D (which enables the clock divider so the counter
ticks every 64 cycles rather than every cycle) should be ignored.  We
were always honouring PMCR.D; fix the bug so we correctly ignore it
in this situation.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-5-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
Peter Maydell
01765386a8 target/arm: Don't mishandle count when enabling or disabling PMU counters
The PMU cycle and event counter infrastructure design requires that
operations on the PMU register fields are wrapped in pmu_op_start()
and pmu_op_finish() calls (or their more specific pmmcntr and
pmevcntr equivalents).  This includes any changes to registers which
affect whether the counter should be enabled or disabled, but we
forgot to do this.

The effect of this bug is that in sequences like:
 * disable the cycle counter (PMCCNTR) using the PMCNTEN register
 * write a value such as 0xfffff000 to the PMCCNTR
 * restart the counter by writing to PMCNTEN
the value written to the cycle counter is corrupted, and it starts
counting from the wrong place. (Essentially, we fail to record that
the QEMU_CLOCK_VIRTUAL timestamp when the counter should be considered
to have started counting is the point when PMCNTEN is written to enable
the counter.)

Add the necessary bracketing calls, so that updates to the various
registers which affect whether the PMU is counting are handled
correctly.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-4-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
Peter Maydell
c117c0649c target/arm: Correct value returned by pmu_counter_mask()
pmu_counter_mask() accidentally returns a value with bits [63:32]
set, because the expression it returns is evaluated as a signed value
that gets sign-extended to 64 bits.  Force the whole expression to be
evaluated with 64-bit arithmetic with ULL suffixes.

The main effect of this bug was that a guest could write to the bits
in the high half of registers like PMCNTENSET_EL0 that are supposed
to be RES0.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-3-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
Peter Maydell
76e25d41d4 target/arm: Don't corrupt high half of PMOVSR when cycle counter overflows
When the cycle counter overflows, we are intended to set bit 31 in PMOVSR
to indicate this. However a missing ULL suffix means that we end up
setting all of bits 63-31. Fix the bug.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-2-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
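
Both of the preceding two fixes are instances of the same C pitfall,
shown here in a minimal, illustrative form: a 32-bit signed
intermediate with bit 31 set gets sign-extended when it is widened to
64 bits.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int counters = 4;

        /* The buggy pattern: (1 << 31) is a signed 32-bit value
         * (and formally overflows int), so widening sign-extends
         * into bits 63:32.                                        */
        uint64_t bad  = (1 << 31) | ((1 << counters) - 1);

        /* ULL suffixes force 64-bit arithmetic throughout. */
        uint64_t good = (1ull << 31) | ((1ull << counters) - 1);

        printf("bad=%#llx good=%#llx\n",
               (unsigned long long)bad, (unsigned long long)good);
        return 0;
    }
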
Peter Maydell
bb7d902154 target/arm: Add missing space in comment
Fix a missing space before a comment terminator.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-7-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
Peter Maydell
3fe72e213e target/arm: Advertise FEAT_ETS for '-cpu max'
The architectural feature FEAT_ETS (Enhanced Translation
Synchronization) is a set of tightened guarantees about memory
ordering involving translation table walks:

 * if memory access RW1 is ordered-before memory access RW2 then it
   is also ordered-before any translation table walk generated by RW2
   that generates a translation fault, address size fault or access
   fault

 * TLB maintenance on non-exec-permission translations is guaranteed
   complete after a DSB (ie it does not need the context
   synchronization event that you have to have if you don’t have
   FEAT_ETS)

For QEMU’s implementation we don’t reorder translation table walk
accesses, and we guarantee to finish the TLB maintenance as soon as
the TLB op is done (the tlb_flush functions will complete at the end
of the TB, and TLB ops always end the TB because they're sysreg
writes).

So we’re already compliant and all we need to do is say so in the ID
registers for the 'max' CPU.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-6-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
Peter Maydell
d22c564958 target/arm: Implement ID_DFR1
In Armv8.6, a new AArch32 ID register ID_DFR1 is defined; implement
it. We don't have any CPUs with features that they need to advertise
here yet, but plumbing in the ID register gives it the right name
when debugging and will help in future when we do add a CPU that
has non-zero ID_DFR1 fields.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-5-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
Peter Maydell
32957aad8c target/arm: Implement ID_MMFR5
In Armv8.6 a new AArch32 ID register ID_MMFR5 is defined.
Implement this; we want to be able to use it to report to
the guest that we implement FEAT_ETS.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-4-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
Peter Maydell
62b6f5e2f0 target/arm: Sort KVM reads of AArch32 ID registers into encoding order
The code that reads the AArch32 ID registers from KVM in
kvm_arm_get_host_cpu_features() does so almost but not quite in
encoding order.  Move the read of ID_PFR2 down so it's really in
encoding order.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-3-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:40 +01:00
Peter Maydell
dde4d028dc target/arm: Make cpregs 0, c0, c{3-15}, {0-7} correctly RAZ in v8
In the AArch32 ID register scheme, coprocessor registers with
encoding cp15, 0, c0, c{0-7}, {0-7} are all in the space covered by
what in v6 and v7 was called the "CPUID scheme", and are supposed to
RAZ if they're not allocated to a specific ID register.  For our
pre-v8 CPUs we get this right, because the regdefs in
id_pre_v8_midr_cp_reginfo[] cover these RAZ requirements.  However
for v8 we failed to put in the necessary patterns to cover this, so
we end up UNDEFing on everything we didn't have an ID register for.
This is a problem because in Armv8 some encodings in 0, c0, c3, {0-7}
are now being used for new ID registers, and guests might thus start
trying to read them.  (We already have one of these: ID_PFR2.)

For v8 CPUs, we already have regdefs for 0, c0, c{0-2}, {0-7} (that
is, the space is completely allocated with no reserved spaces).  Add
entries to v8_idregs[] covering 0, c0, c3, {0-7}:
 * c3, {0-2} is the reserved AArch32 space corresponding to the
   AArch64 MVFR[012]_EL1
 * c3, {3,5,6,7} are reserved RAZ for both AArch32 and AArch64
   (in fact some of these are given defined meanings in Armv8.6,
   but we don't implement them yet)
 * c3, 4 is ID_PFR2 (already defined)

We then programmatically add RAZ patterns for AArch32 for
0, c0, c{4..15}, {0-7}:
 * c4-c7 are unused, and not shared with AArch64 (these
   are the encodings corresponding to where the AArch64
   specific ID registers live in the system register space)
 * c8-c15 weren't required to RAZ in v6/v7, but v8 extends
   the AArch32 reserved-should-RAZ space to cover these;
   the equivalent area of the AArch64 sysreg space is not
   defined as must-RAZ

Note that the architecture allows some registers in this space
to return an UNKNOWN value; we always return 0.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-2-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:39 +01:00
Hao Wu
3b16766b5a target/arm: Add cortex-a35
Add cortex A35 core and enable it for virt board.

Signed-off-by: Hao Wu <wuhaotsh@google.com>
Reviewed-by: Joe Komlodi <komlodi@google.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220819002015.1663247-1-wuhaotsh@google.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-14 11:19:39 +01:00
Peter Maydell
19b26317e9 target/arm: Honour -semihosting-config userspace=on
Honour the command-line -semihosting-config userspace=on option in
system emulation mode, instead of never permitting userspace
semihosting calls: pass the correct value to the is_userspace
argument of semihosting_enabled(), rather than manually checking and
always forbidding semihosting if the guest is in userspace and this
isn't the linux-user build.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822141230.3658237-3-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-13 17:18:21 +01:00
Peter Maydell
5202861b20 semihosting: Allow optional use of semihosting from userspace
Currently our semihosting implementations generally prohibit use of
semihosting calls in system emulation from the guest userspace.  This
is a very long standing behaviour justified originally "to provide
some semblance of security" (since code with access to the
semihosting ABI can do things like read and write arbitrary files on
the host system).  However, it is sometimes useful to be able to run
trusted guest code which performs semihosting calls from guest
userspace, notably for test code.  Add a command line suboption to
the existing semihosting-config option group so that you can
explicitly opt in to semihosting from guest userspace with
 -semihosting-config userspace=on

(There is no equivalent option for the user-mode emulator, because
there by definition all code runs in userspace and has access to
semihosting already.)

This commit adds the infrastructure for the command line option and
adds a bool 'is_user' parameter to the function
semihosting_userspace_enabled() that target code can use to check
whether it should be permitting the semihosting call for userspace.
It mechanically makes all the callsites pass 'false', so they
continue checking "is semihosting enabled in general".  Subsequent
commits will make each target that implements semihosting honour the
userspace=on option by passing the correct value and removing
whatever "don't do this for userspace" checking they were doing by
hand.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822141230.3658237-2-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-13 17:18:21 +01:00
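
A hedged sketch of the check this introduces (the shapes below are
approximations, not the exact QEMU declarations): the core combines
"semihosting enabled at all" with the userspace=on setting, and target
call sites pass the guest's real privilege level instead of
hard-coding "not userspace".

    #include <stdbool.h>

    /* Sketch of the core-side state; real QEMU fills these from the
     * -semihosting-config option group. */
    static bool semihosting_on = true;
    static bool userspace_allowed = false;    /* userspace=on sets this */

    static bool semihosting_enabled(bool is_user)
    {
        return semihosting_on && (!is_user || userspace_allowed);
    }

    /* Target call site: pass the guest's actual privilege level. */
    static bool should_handle_trap(bool guest_in_userspace)
    {
        return semihosting_enabled(guest_in_userspace);
    }
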
Richard Henderson
306c872103 accel/tcg: Add pc and host_pc params to gen_intermediate_code
Pass these along to translator_loop -- pc may be used instead
of tb->pc, and host_pc is currently unused.  Adjust all targets
at one time.

Acked-by: Alistair Francis <alistair.francis@wdc.com>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Tested-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-06 08:04:26 +01:00
Peter Maydell
2daf518dd1 target/arm: Don't report Statistical Profiling Extension in ID registers
The newly added neoverse-n1 CPU has ID register values which indicate
the presence of the Statistical Profiling Extension, because the real
hardware has this feature.  QEMU's TCG emulation does not yet
implement SPE, though (not even as a minimal stub implementation), so
guests will crash if they try to use it because the SPE system
registers don't exist.

Force ID_AA64DFR0_EL1.PMSVer to 0 in CPU realize for TCG, so that
we don't advertise to the guest a feature that doesn't exist.

(We could alternatively do this by editing the value that
aarch64_neoverse_n1_initfn() sets for this ID register, but
suppressing the field in realize means we won't re-introduce this bug
when we add other CPUs that have SPE in hardware, such as the
Neoverse-V1.)

An example of a non-booting guest is current mainline Linux (5.19),
when booting in EL2 on the virt board (ie with -machine
virtualization=on).

Reported-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Message-id: 20220811131127.947334-1-peter.maydell@linaro.org
2022-08-12 11:17:35 +01:00
Daniel P. Berrangé
977c33ba5d target/arm: display deprecation status in '-cpu help'
When the user queries CPU models via QMP there is a 'deprecated' flag
present, however, this is not done for the CLI '-cpu help' command.

Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
2022-08-05 16:18:15 +01:00
Richard Henderson
0e0c2cf6de target-arm queue:
* Fix KVM SVE ID register probe code
 -----BEGIN PGP SIGNATURE-----
 
 iQJNBAABCAA3FiEE4aXFk81BneKOgxXPPCUl7RQ2DN4FAmLn8rwZHHBldGVyLm1h
 eWRlbGxAbGluYXJvLm9yZwAKCRA8JSXtFDYM3seqD/sE4YU3qpovlyPhJJWsEFyH
 JRheAwddoj8P/ufOeJVPh85PqGH8zR6MSLSJqzz32ADrN56CFA56c0TRAoL7F6Ru
 iTibwP7hFloDxBCJIYVMZdbSw959LYADYHhdIN7UBkSryCoOC74AraUCwuYqzr9l
 jgh3lnvaH2kj5460XQQYPX4Pkf1jZIV83nhs9kh6GohhuHWtyz9UucDe8VcgMyl2
 9jn7aobLWXI1LJyWTNYJHxQacGn4HK4HbVHczDRgf9PzmjliiTltGvol+T1XGyha
 TGHXMNnMTRbWFz7LCENfEYhup5ScuZbBr5fWh4sBveodczgOActNwmFuy1sempWo
 Cnzy/rwcNREj6EXoKvUkpATKuls9rtH9U4927mesxrk9S3bRJXU4C/EgpAn3qIBZ
 1iFTgSq7eqX+BaYmG1/dtEK+vFX6mhpmKCMhQyRtSFHHibovvlANaNhOHgnPnS0m
 +Bb1pioolo31LLLxBpByOX/MxnXbG+GBnn2kmqX9MLkqamrYQq4g+ITUZcrLReId
 HmvBtYENoiXfReuvT/zRH1nBax97dKrluOgejco2bJrhiYaDgJ94jDMegdoR9mSl
 W/G3QHq18PJ5YOkrjmTn6IFjYNozLEvVqn5VwGXr6QZFxBuivAUoxOELrGULSlba
 OPTBWo2kAbJ8FvKOr3RzhQ==
 =hkV8
 -----END PGP SIGNATURE-----

Merge tag 'pull-target-arm-20220801' of https://git.linaro.org/people/pmaydell/qemu-arm into staging

target-arm queue:
 * Fix KVM SVE ID register probe code

# -----BEGIN PGP SIGNATURE-----
#
# iQJNBAABCAA3FiEE4aXFk81BneKOgxXPPCUl7RQ2DN4FAmLn8rwZHHBldGVyLm1h
# eWRlbGxAbGluYXJvLm9yZwAKCRA8JSXtFDYM3seqD/sE4YU3qpovlyPhJJWsEFyH
# JRheAwddoj8P/ufOeJVPh85PqGH8zR6MSLSJqzz32ADrN56CFA56c0TRAoL7F6Ru
# iTibwP7hFloDxBCJIYVMZdbSw959LYADYHhdIN7UBkSryCoOC74AraUCwuYqzr9l
# jgh3lnvaH2kj5460XQQYPX4Pkf1jZIV83nhs9kh6GohhuHWtyz9UucDe8VcgMyl2
# 9jn7aobLWXI1LJyWTNYJHxQacGn4HK4HbVHczDRgf9PzmjliiTltGvol+T1XGyha
# TGHXMNnMTRbWFz7LCENfEYhup5ScuZbBr5fWh4sBveodczgOActNwmFuy1sempWo
# Cnzy/rwcNREj6EXoKvUkpATKuls9rtH9U4927mesxrk9S3bRJXU4C/EgpAn3qIBZ
# 1iFTgSq7eqX+BaYmG1/dtEK+vFX6mhpmKCMhQyRtSFHHibovvlANaNhOHgnPnS0m
# +Bb1pioolo31LLLxBpByOX/MxnXbG+GBnn2kmqX9MLkqamrYQq4g+ITUZcrLReId
# HmvBtYENoiXfReuvT/zRH1nBax97dKrluOgejco2bJrhiYaDgJ94jDMegdoR9mSl
# W/G3QHq18PJ5YOkrjmTn6IFjYNozLEvVqn5VwGXr6QZFxBuivAUoxOELrGULSlba
# OPTBWo2kAbJ8FvKOr3RzhQ==
# =hkV8
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 01 Aug 2022 08:35:24 AM PDT
# gpg:                using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg:                issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [full]
# gpg:                 aka "Peter Maydell <pmaydell@gmail.com>" [full]
# gpg:                 aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [full]

* tag 'pull-target-arm-20220801' of https://git.linaro.org/people/pmaydell/qemu-arm:
  target/arm: Move sve probe inside kvm >= 4.15 branch
  target/arm: Set KVM_ARM_VCPU_SVE while probing the host
  target/arm: Use kvm_arm_sve_supported in kvm_arm_get_host_cpu_features

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-08-01 12:00:08 -07:00
Richard Henderson
5265d24c98 target/arm: Move sve probe inside kvm >= 4.15 branch
The test in the IF block indicates that no ID registers are exposed at
all, much less host support for SVE.  Move the SVE probe into the ELSE
block.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220726045828.53697-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-08-01 16:21:18 +01:00
Richard Henderson
b9e8d68a39 target/arm: Set KVM_ARM_VCPU_SVE while probing the host
Because we weren't setting this flag, our probe of ID_AA64ZFR0
was always returning zero.  This also obviates the adjustment
of ID_AA64PFR0, which had sanitized the SVE field.

The effects of the bug are not visible, because the only thing that
ID_AA64ZFR0 is used for within qemu at present is tcg translation.
The other tests for SVE within KVM are via ID_AA64PFR0.SVE.

Reported-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220726045828.53697-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-08-01 16:21:18 +01:00
Richard Henderson
0dd14e9555 target/arm: Use kvm_arm_sve_supported in kvm_arm_get_host_cpu_features
Indication for support for SVE will not depend on whether we
perform the query on the main kvm_state or the temp vcpu.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220726045828.53697-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-08-01 16:21:17 +01:00
Thomas Huth
a07d9df0fd trivial: Fix duplicated words
Some files wrongly contain the same word twice in a row.
One of them should be removed or replaced.

Message-Id: <20220722145859.1952732-1-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2022-08-01 11:58:02 +02:00
Daniel P. Berrangé
7a21bee2aa misc: fix commonly doubled up words
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20220707163720.1421716-5-berrange@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2022-08-01 11:58:02 +02:00
Cornelia Huck
47c182fe8b kvm: don't use perror() without useful errno
perror() is designed to append the decoded errno value to a
string. This, however, only makes sense if we called something that
actually sets errno prior to that.

For the callers that check for split irqchip support that is not the
case, and we end up with confusing error messages that end in
"success". Use error_report() instead.

Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <20220728142446.438177-1-cohuck@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-07-29 00:15:02 +02:00
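
A minimal illustration of the difference (fprintf stands in for QEMU's
error_report() here; the message text is made up):

    #include <errno.h>
    #include <stdio.h>

    static void report_missing_split_irqchip(void)
    {
        /* perror() appends strerror(errno); if nothing set errno the
         * output can end in a confusing ": Success".               */
        /* perror("kvm: split irqchip not supported"); */

        /* State the problem without decoding an unrelated errno. */
        fprintf(stderr, "kvm: split irqchip not supported\n");
    }
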
Peter Maydell
fca75f60ab target/arm: Add MO_128 entry to pred_esz_masks[]
In commit 7390e0e9ab, we added support for SME loads and stores.
Unlike SVE loads and stores, these include handling of 128-bit
elements.  The SME load/store functions call down into the existing
sve_cont_ldst_elements() function, which uses the element size MO_*
value as an index into the pred_esz_masks[] array.  Because this code
path now has to handle MO_128, we need to add an extra element to the
array.

This bug was spotted by Coverity because it meant we were reading off
the end of the array.

Resolves: Coverity CID 1490539, 1490541, 1490543, 1490544, 1490545,
 1490546, 1490548, 1490549, 1490550, 1490551, 1490555, 1490557,
 1490558, 1490560, 1490561, 1490563
Fixes: 7390e0e9ab ("target/arm: Implement SME LD1, ST1")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220718100144.3248052-1-peter.maydell@linaro.org
2022-07-26 13:38:23 +01:00
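
A hedged sketch of the indexing pattern involved (enum values and mask
contents are illustrative): a table indexed by the MO_* size code must
have an entry for every size callers can pass, so handling 128-bit
elements means adding an MO_128 slot.

    #include <stdint.h>

    enum { MO_8, MO_16, MO_32, MO_64, MO_128 };

    /* One predicate bit per element, following the SVE pattern. */
    static const uint64_t pred_esz_masks_sketch[5] = {
        0xffffffffffffffffull,   /* MO_8   */
        0x5555555555555555ull,   /* MO_16  */
        0x1111111111111111ull,   /* MO_32  */
        0x0101010101010101ull,   /* MO_64  */
        0x0001000100010001ull,   /* MO_128: the extra entry needed */
    };

    static uint64_t pred_mask(int esz)
    {
        return pred_esz_masks_sketch[esz];   /* esz may be MO_128 */
    }
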
Peter Maydell
53ae2fdef1 target/arm: Don't set syndrome ISS for loads and stores with writeback
The architecture requires that for faults on loads and stores which
do writeback, the syndrome information does not have the ISS
instruction syndrome information (i.e. ISV is 0).  We got this wrong
for the load and store instructions covered by disas_ldst_reg_imm9().
Calculate iss_valid correctly so that if the insn is a writeback one
it is false.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1057
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220715123323.1550983-1-peter.maydell@linaro.org
2022-07-18 13:20:14 +01:00
Peter Maydell
f04383e749 target/arm: Honour VTCR_EL2 bits in Secure EL2
In regime_tcr() we return the appropriate TCR register for the
translation regime.  For Secure EL2, we return the VSTCR_EL2 value,
but in this translation regime some fields that control behaviour are
in VTCR_EL2.  When this code was originally written (as the comment
notes), QEMU didn't care about any of those fields, but we have since
added support for features such as LPA2 which do need the values from
those fields.

Synthesize a TCR value by merging in the relevant VTCR_EL2 fields to
the VSTCR_EL2 value.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1103
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-8-peter.maydell@linaro.org
2022-07-18 13:20:14 +01:00
Peter Maydell
cb4a0a3444 target/arm: Store TCR_EL* registers as uint64_t
Change the representation of the TCR_EL* registers in the CPU state
struct from struct TCR to uint64_t.  This allows us to drop the
custom vmsa_ttbcr_raw_write() function, moving the "enforce RES0"
checks to their more usual location in the writefn
vmsa_ttbcr_write().  We also don't need the resetfn any more.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-7-peter.maydell@linaro.org
2022-07-18 13:20:13 +01:00
Peter Maydell
988cc1909f target/arm: Store VTCR_EL2, VSTCR_EL2 registers as uint64_t
Change the representation of the VSTCR_EL2 and VTCR_EL2 registers in
the CPU state struct from struct TCR to uint64_t.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-6-peter.maydell@linaro.org
2022-07-18 13:20:13 +01:00
Peter Maydell
afbb181c2d target/arm: Fix big-endian host handling of VTCR
We have a bug in our handling of accesses to the AArch32 VTCR
register on big-endian hosts: we were not adjusting the part of the
uint64_t field within TCR that the generated code would access.  That
can be done with offsetoflow32(), by using an ARM_CP_STATE_BOTH cpreg
struct, or by defining a full set of read/write/reset functions --
the various other TCR cpreg structs used one or another of those
strategies, but for VTCR we did not, so on a big-endian host VTCR
accesses would touch the wrong half of the register.

Use offsetoflow32() in the VTCR register struct.  This works even
though the field in the CPU struct is currently a struct TCR, because
the first field in that struct is the uint64_t raw_tcr.

None of the other TCR registers have this bug -- either they are
AArch64 only, or else they define resetfn, writefn, etc, and
expect to be passed the full struct pointer.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-5-peter.maydell@linaro.org
2022-07-18 13:20:13 +01:00
Peter Maydell
c1547bba7e target/arm: Fold regime_tcr() and regime_tcr_value() together
The only caller of regime_tcr() is now regime_tcr_value(); fold the
two together, and use the shorter and more natural 'regime_tcr'
name for the new function.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-4-peter.maydell@linaro.org
2022-07-18 13:20:13 +01:00