The calculation of the length of TLB range invalidate operations
in tlbi_aa64_range_get_length() is incorrect in two ways:
* the NUM field is 5 bits, but we read only 4 bits
* we miscalculate the page_shift value, because of an
off-by-one error:
TG 0b00 is invalid
TG 0b01 is 4K granule size == 4096 == 2^12
TG 0b10 is 16K granule size == 16384 == 2^14
TG 0b11 is 64K granule size == 65536 == 2^16
so page_shift should be (TG - 1) * 2 + 12
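As a rough sketch (not the exact QEMU code; field positions follow the Arm ARM
TLBI range payload and extract64() comes from qemu/bitops.h), the corrected
calculation looks like this:

    #include "qemu/bitops.h"    /* extract64() */

    /* Illustrative sketch of the corrected length calculation for a
     * TLBI range operation; names are not the exact QEMU function.
     */
    static uint64_t tlbi_range_length(uint64_t value)
    {
        uint64_t num = extract64(value, 39, 5);   /* NUM is 5 bits, not 4 */
        uint64_t scale = extract64(value, 44, 2);
        uint64_t tg = extract64(value, 46, 2);
        uint64_t page_shift, exponent;

        if (tg == 0) {
            return 0;                             /* TG 0b00 is invalid */
        }
        page_shift = (tg - 1) * 2 + 12;           /* 4K->12, 16K->14, 64K->16 */
        exponent = (5 * scale) + 1;
        return (num + 1) << (exponent + page_shift);
    }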
Thanks to the bug report submitter Cha HyunSoo for identifying
both these errors.
Fixes: 84940ed825 ("target/arm: Add support for FEAT_TLBIRANGE")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/734
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20211130173257.1274194-1-peter.maydell@linaro.org
Both single-step and pc alignment faults have priority over
breakpoint exceptions.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Misaligned thumb PC is architecturally impossible.
An assert is better than proceeding, in case we've missed
something somewhere.
Expand a comment about aligning the pc in gdbstub.
Fail an incoming migrate if a thumb pc is misaligned.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For A64, any input to an indirect branch can cause this.
For A32, many indirect branch paths force the branch to be aligned,
but BXWritePC does not. This includes the BX instruction but also
other interworking changes to PC. Prior to v8, this case is UNDEFINED.
With v8, this is CONSTRAINED UNPREDICTABLE and may either raise an
exception or force align the PC.
We choose to raise an exception because we have the infrastructure,
it makes the generated code for gen_bx simpler, and it has the
possibility of catching more guest bugs.
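A rough sketch of how the fault can then be taken when the misaligned PC is
reached (the helper name and the exact placement in the translator loop are
assumptions here):

    /* Sketch: at the start of translating the next instruction, a misaligned
     * A32 PC (which can only come from an earlier interworking branch) raises
     * a pc alignment fault rather than being silently forced into alignment.
     * gen_helper_exception_pc_alignment is an assumed helper name.
     */
    if (s->base.pc_next & 3) {
        gen_helper_exception_pc_alignment(cpu_env,
                                          tcg_constant_tl(s->base.pc_next));
        s->base.is_jmp = DISAS_NORETURN;
        return;
    }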
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We will reuse this section of arm_deliver_fault for
raising pc alignment faults.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The size of the code covered by a TranslationBlock cannot be 0;
this is checked via assert in tb_gen_code.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Create arm_check_ss_active and arm_check_kernelpage.
Reverse the order of the tests. While it doesn't matter in practice
(only user-only has a kernel page, and user-only never sets ss_active),
ss_active has priority over execution exceptions, so it is best to keep
the checks in the proper order.
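A simplified sketch of the two helpers and the order they are now checked in
(the real functions also deal with the surrounding DisasContext state):

    static bool arm_check_kernelpage(DisasContext *dc)
    {
    #ifdef CONFIG_USER_ONLY
        /* Intercept jumps to the magic kernel page. */
        if (dc->base.pc_next >= 0xffff0000) {
            gen_exception_internal(EXCP_KERNEL_TRAP);
            dc->base.is_jmp = DISAS_NORETURN;
            return true;
        }
    #endif
        return false;
    }

    static bool arm_check_ss_active(DisasContext *dc)
    {
        /* ss_active is checked first because single-step has priority. */
        if (dc->ss_active) {
            /* Singlestep exception instead of executing the instruction. */
            gen_swstep_exception(dc, 0, 0);
            dc->base.is_jmp = DISAS_NORETURN;
            return true;
        }
        return false;
    }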
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This reverts commit 9fcd15b919.
This change turns out to cause regressions, for instance on the
imx6ul boards as described here:
https://lore.kernel.org/qemu-devel/c8b89685-7490-328b-51a3-48711c140a84@tribudubois.net/
The primary cause of that regression is that the guest code running
at EL3 expects SMCs (not related to PSCI) to do what they would if
our PSCI emulation was not present at all, but after this change
they instead set a value in R0/X0 and continue.
We could fix that by a refactoring that allowed us to only turn on
the PSCI emulation if we weren't booting the guest at EL3, but there
is a more tangled problem with the highbank board, which:
(1) wants to enable PSCI emulation
(2) has a bit of guest code that it wants to run at EL3 and
to perform SMC calls that trap to the monitor vector table:
this is the boot stub code that is written to memory by
arm_write_secure_board_setup_dummy_smc() and which the
highbank board enables by setting bootinfo->secure_board_setup
We can't satisfy both of those and also have the PSCI emulation
handle all SMC instruction executions regardless of function
identifier value.
This is too tricky to try to sort out before 6.2 is released;
revert this commit so we can take the time to get it right in
the 7.0 release.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20211119163419.557623-1-peter.maydell@linaro.org
Merge remote-tracking branch 'remotes/rth/tags/pull-arm-20211102-2' into staging
Add nuvoton sd module for NPCM7XX
Add gdb-xml for MVE
More uses of tcg_constant_* in target/arm
Fix parameter naming for default-bus-bypass-iommu
Ignore cache operations to mmio in HVF
# gpg: Signature made Tue 02 Nov 2021 02:23:53 PM EDT
# gpg: using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg: issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [ultimate]
* remotes/rth/tags/pull-arm-20211102-2:
hvf: arm: Ignore cache operations on MMIO
hw/arm/virt: Rename default_bus_bypass_iommu
target/arm: Use tcg_constant_i32() in gen_rev16()
target/arm: Use tcg_constant_i64() in do_sat_addsub_64()
target/arm: Use the constant variant of store_cpu_field() when possible
target/arm: Introduce store_cpu_field_constant() helper
target/arm: Use tcg_constant_i32() in op_smlad()
target/arm: Advertise MVE to gdb when present
tests/qtest/libqos: add SDHCI commands
hw/arm: Attach MMC to quanta-gbs-bmc
hw/arm: Add Nuvoton SD module to board
hw/sd: add nuvoton MMC
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Apple's Hypervisor.Framework forwards cache operations as MMIO traps
into user space. For MMIO, however, these have no meaning: there is no
cache attached to them.
So let's just treat cache data exits as nops.
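A minimal sketch of the resulting handling in the data-abort exit path (the
name of the ISS cache-maintenance bit is an assumption for illustration):

    /* Sketch: inside the MMIO data-abort exit handling, a cache maintenance
     * operation has nothing to act on, so just step over it.  ISS_CM_BIT is
     * an assumed name for the ISS cache-maintenance bit.
     */
    if (iss & ISS_CM_BIT) {
        /* We don't cache MMIO regions: treat the operation as a nop. */
        advance_pc = true;
        break;
    }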
This fixes OpenBSD booting as guest.
Reported-by: AJ Barris <AwlsomeAlex@github.com>
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Mark Kettenis <kettenis@openbsd.org>
Reference: https://github.com/utmapp/UTM/issues/3197
Message-Id: <20211026071241.74889-1-agraf@csgraf.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Since the mask is a constant value, use tcg_constant_i32()
instead of a TCG temporary.
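Roughly, the post-change shape of gen_rev16() (a sketch; the mask is the
byte-swap-within-halfword mask the function already uses):

    /* Sketch of gen_rev16() after the change: the mask is a TCG constant,
     * so there is no temporary to free for it at the end.
     */
    static void gen_rev16(TCGv_i32 dest, TCGv_i32 var)
    {
        TCGv_i32 tmp = tcg_temp_new_i32();
        TCGv_i32 mask = tcg_constant_i32(0x00ff00ff);

        tcg_gen_shri_i32(tmp, var, 8);
        tcg_gen_and_i32(tmp, tmp, mask);
        tcg_gen_and_i32(var, var, mask);
        tcg_gen_shli_i32(var, var, 8);
        tcg_gen_or_i32(dest, var, tmp);
        tcg_temp_free_i32(tmp);
    }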
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211029231834.2476117-6-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The immediate value used for comparison is constant and
read-only. Move it to the constant pool. This frees a
TCG temporary for unsigned saturation opcodes.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211029231834.2476117-5-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
When storing a constant value, we can replace the store_cpu_field()
call with store_cpu_field_constant(), which avoids using TCG temporaries.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211029231834.2476117-4-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Similarly to the store_cpu_field() helper, which takes a TCG
temporary and stores its value to the CPUState, introduce the
store_cpu_field_constant() helper, which stores a constant to the
CPUState (without using any TCG temporary).
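A minimal sketch of such a helper, in macro form alongside store_cpu_field():

    /* Sketch: store a known-constant value into a CPUARMState field without
     * allocating or freeing a TCG temporary; tcg_constant_i32() values must
     * never be freed.
     */
    #define store_cpu_field_constant(val, name) \
        tcg_gen_st_i32(tcg_constant_i32(val), cpu_env, \
                       offsetof(CPUARMState, name))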
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211029231834.2476117-3-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Avoid using a TCG temporary for a read-only constant.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211029231834.2476117-2-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Cortex-M CPUs with MVE should advertise this fact to gdb, using the
org.gnu.gdb.arm.m-profile-mve XML feature, which defines the VPR
register. Presence of this feature also tells gdb to create
pseudo-registers Q0..Q7, so we do not need to tell gdb about them
separately.
Note that unless you have a very recent GDB that includes this fix:
http://patches-tcwg.linaro.org/patch/58133/ gdb will mis-print the
individual fields of the VPR register as zero (but showing the whole
thing as hex, e.g. with "print /x $vpr", will give the correct value).
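Registration is a one-liner in the gdbstub setup; a sketch, where the get/set
callback names and the XML file name are illustrative:

    /* Sketch: advertise the feature only when MVE is present. */
    if (cpu_isar_feature(aa32_mve, cpu)) {
        gdb_register_coprocessor(cs, mve_gdb_get_reg, mve_gdb_set_reg, 1,
                                 "arm-m-profile-mve.xml", 0);
    }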
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20211101160814.5103-1-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Because of the complexity of setting ESR, re-use the existing
arm_cpu_do_unaligned_access function. This means we have to
handle the exception ourselves in cpu_loop, transforming it
to the appropriate signal.
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Because of the complexity of setting ESR, continue to use
arm_deliver_fault. This means we cannot remove the code
within cpu_loop that decodes EXCP_DATA_ABORT and
EXCP_PREFETCH_ABORT.
But using the new hook means that we don't have to do the
page_get_flags check manually, and we'll be able to restrict
the tlb_fill hook to sysemu later.
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Use the new os interface for raising the exception,
rather than calling arm_cpu_tlb_fill directly.
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The named function no longer exists.
Refer to host_signal_handler instead.
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The helper_*_mmu functions were the only thing available
when this code was written. This could have been adjusted
when we added cpu_*_mmuidx_ra, but now we can most easily
use the newest set of interfaces.
Cc: qemu-arm@nongnu.org
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The previous placement in tcg/tcg.h was not logical.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We're about to move this out of tcg.h, so rename it
as we did when moving MemOp.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We have lacked expressive support for memory sizes larger
than 64 bits for a while. Fixing that requires adjustment
to several points where we used this for array indexing,
and two places that develop -Wswitch warnings after the change.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Provide a name field for all the memory listeners. It can be used to identify
which memory listener is which.
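A sketch of a listener using the new field (the listener and its callbacks
here are placeholders):

    /* Sketch: the new field just gives the listener a human-readable name so
     * it can be told apart from others; "example" and the callbacks below are
     * placeholders.
     */
    static MemoryListener example_listener = {
        .name = "example",
        .region_add = example_region_add,
        .region_del = example_region_del,
    };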
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210817013553.30584-2-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Currently we send VFP XML which includes D0..D15 or D0..D31, plus
FPSID, FPSCR and FPEXC. The upstream GDB tolerates this, but its
definition of this XML feature does not include FPSID or FPEXC. In
particular, for M-profile cores there are no FPSID or FPEXC
registers, so advertising those is wrong.
Move FPSID and FPEXC into their own bit of XML which we only send for
A and R profile cores. This brings our definition of the XML
org.gnu.gdb.arm.vfp feature into line with GDB's own (at least for
non-Neon cores...) and means we don't claim to have FPSID and FPEXC
on M-profile.
(It seems unlikely to me that any gdbstub users really care about
being able to look at FPEXC and FPSID; but we've supplied them to gdb
for a decade and it's not hard to keep doing so.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210921162901.17508-5-peter.maydell@linaro.org
Currently helper.c includes some code which is part of the arm
target's gdbstub support. This code has a better home: in gdbstub.c
and gdbstub64.c. Move it there.
Because aarch64_fpu_gdb_get_reg() and aarch64_fpu_gdb_set_reg() move
into gdbstub64.c, this means that they're now compiled only for
TARGET_AARCH64 rather than always. That is the only case when they
would ever be used, but it does mean that the ifdef in
arm_cpu_register_gdb_regs_for_features() needs to be adjusted to
match.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210921162901.17508-4-peter.maydell@linaro.org
We're going to move this code to a different file; fix the coding
style first so checkpatch doesn't complain. This includes deleting
the spurious 'break' statements after returns in the
vfp_gdb_get_reg() function.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210921162901.17508-3-peter.maydell@linaro.org
The SMCCC 1.3 spec section 5.2 says
The Unknown SMC Function Identifier is a sign-extended value of (-1)
that is returned in the R0, W0 or X0 registers. An implementation must
return this error code when it receives:
* An SMC or HVC call with an unknown Function Identifier
* An SMC or HVC call for a removed Function Identifier
* An SMC64/HVC64 call from AArch32 state
To comply with these statements, let's always return -1 when we encounter
an unknown HVC or SMC call.
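In the call handler this amounts to a fallback of roughly the following shape
(the dispatch function name is an illustrative placeholder):

    /* Sketch: when the HVC/SMC function identifier is not one we implement,
     * return the SMCCC "Unknown Function Identifier" error (-1, sign-extended
     * into the first return register) and continue, instead of faulting.
     * handle_known_call() is an illustrative placeholder.
     */
    if (!handle_known_call(cpu)) {
        env->xregs[0] = -1;
    }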
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
While we may have had some thought of allowing system-mode
to return from this hook, we have no guests that require this.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
There is nothing target specific about this. The implementation
is host specific, but the declaration is 100% common.
Reviewed-By: Warner Losh <imp@bsdimp.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Optimize the MVE 1op-immediate insns (VORR, VBIC, VMOV) to
use TCG vector ops when possible.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-13-peter.maydell@linaro.org
Optimize the MVE shift-and-insert insns by using TCG
vector ops when possible.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-12-peter.maydell@linaro.org
Optimize the MVE VSHLL insns by using TCG vector ops when possible.
This includes the VMOVL insn, which we handle in mve.decode as "VSHLL
with zero shift count".
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-11-peter.maydell@linaro.org
Optimize the MVE VSHL and VSHR immediate forms by using TCG vector
ops when possible.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-10-peter.maydell@linaro.org
Optimize the MVE VMVN insn by using TCG vector ops when possible.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-9-peter.maydell@linaro.org
Optimize the MVE VDUP insns by using TCG vector ops when possible.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-8-peter.maydell@linaro.org
Optimize the MVE VNEG and VABS insns by using TCG
vector ops when possible.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-7-peter.maydell@linaro.org
Optimize MVE arithmetic ops when we have a TCG
vector operation we can use.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-6-peter.maydell@linaro.org
When not predicating, implement the MVE bitwise logical insns
directly using TCG vector operations.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-5-peter.maydell@linaro.org
Our current codegen for MVE always calls out to helper functions,
because some byte lanes might be predicated. The common case is that
in fact there is no predication active and all lanes should be
updated together, so we can produce better code by detecting that and
using the TCG generic vector infrastructure.
Add a TB flag that is set when we can guarantee that there is no
active MVE predication, and a bool in the DisasContext. Subsequent
patches will use this flag to generate improved code for some
instructions.
In most cases when the predication state changes we simply end the TB
after that instruction. For the code called from vfp_access_check()
that handles lazy state preservation and creating a new FP context,
we can usually avoid having to try to end the TB because luckily the
new value of the flag following the register changes in those
sequences doesn't depend on any runtime decisions. We do have to end
the TB if the guest has enabled lazy FP state preservation but not
automatic state preservation, but this is an odd corner case that is
not going to be common in real-world code.
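The later patches then key the fast path off that bool; a sketch of the
per-insn pattern, with illustrative names:

    /* Sketch: when the DisasContext says no MVE predication is active, emit a
     * generic TCG vector op over the whole 128-bit Q register; otherwise fall
     * back to the existing out-of-line helper.  The offsets/pointers and the
     * mve_no_pred field name are illustrative.
     */
    if (s->mve_no_pred) {
        tcg_gen_gvec_and(MO_64, qd_ofs, qn_ofs, qm_ofs, 16, 16);
    } else {
        gen_helper_mve_vand(cpu_env, qd_ptr, qn_ptr, qm_ptr);
    }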
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-4-peter.maydell@linaro.org
Architecturally, for an M-profile CPU with the LOB feature the
LTPSIZE field in FPDSCR is always constant 4. QEMU's implementation
enforces this everywhere, except that we don't check that it is true
in incoming migration data.
We're going to add code in gen_update_fp_context() which relies on
the "always 4" property.  Since this is TCG-only, we don't actually
need to be robust to bogus incoming migration data, and the effect of
it being wrong would be wrong code generation rather than a QEMU
crash; but if it did ever happen somehow it would be very difficult
to track down the cause. Add a check so that we fail the inbound
migration if the FPDSCR.LTPSIZE value is incorrect.
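A sketch of what the post_load check can look like (field and constant names
follow QEMU's M-profile state and are repeated here only as an illustration):

    /* Sketch of the inbound-migration check: for an M-profile CPU with LOB,
     * FPDSCR.LTPSIZE must be the constant 4; reject the migration otherwise.
     */
    if (arm_feature(env, ARM_FEATURE_M) &&
        cpu_isar_feature(aa32_lob, cpu) &&
        extract32(env->v7m.fpdscr[M_REG_NS],
                  FPCR_LTPSIZE_SHIFT, FPCR_LTPSIZE_LENGTH) != 4) {
        return -1;
    }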
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-3-peter.maydell@linaro.org
Currently gen_jmp_tb() assumes that if it is called then the jump it
is handling is the only reason that we might be trying to end the TB,
so it will use goto_tb if it can. This is usually the case: mostly
"we did something that means we must end the TB" happens on a
non-branch instruction. However, there are cases where we decide
early in handling an instruction that we need to end the TB and
return to the main loop, and then the insn is a complex one that
involves gen_jmp_tb(). For instance, for M-profile FP instructions,
in gen_preserve_fp_state() which is called from vfp_access_check() we
want to force an exit to the main loop if lazy state preservation is
active and we are in icount mode.
Make gen_jmp_tb() look at the current value of is_jmp, and only use
goto_tb if the previous is_jmp was DISAS_NEXT or DISAS_TOO_MANY.
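A simplified sketch of the resulting structure of gen_jmp_tb():

    /* Simplified sketch: only use goto_tb when nothing else has already
     * decided that the TB must end for another reason; otherwise just write
     * back the PC and let the pending is_jmp handling return to the main
     * loop.
     */
    static void gen_jmp_tb(DisasContext *s, uint32_t dest, int tbno)
    {
        switch (s->base.is_jmp) {
        case DISAS_NEXT:
        case DISAS_TOO_MANY:
            gen_goto_tb(s, tbno, dest);
            break;
        default:
            gen_set_pc_im(s, dest);
            break;
        }
    }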
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210913095440.13462-2-peter.maydell@linaro.org
We can expose cycle counters on the PMU easily. To be as compatible as
possible, let's do so, but make sure we don't expose any other architectural
counters that we cannot model yet.
This allows OSs to work that require PMU support.
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210916155404.86958-10-agraf@csgraf.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>