Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230628071202.230991-2-richard.henderson@linaro.org>
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
The microblaze architecture does not reorder instructions.
While there is an MBAR wait-for-data-access instruction,
this concerns synchronizing with DMA.
This should have been defined when enabling MTTCG.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar@zeroasic.com>
Fixes: d449561b13 ("configure: microblaze: Enable mttcg")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230621135633.1649-4-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
* kvm: reuse per-vcpu stats fd to avoid vcpu interruption
* Validate cluster and NUMA node boundary on ARM and RISC-V
* various small TCG features from newer processors
* Remove dubious 'event_notifier-posix.c' include
* fix git-submodule.sh in releases
# -----BEGIN PGP SIGNATURE-----
#
# iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmSZS0IUHHBib256aW5p
# QHJlZGhhdC5jb20ACgkQv/vSX3jHroN+tgf/axJdG9NXKCyXgc0vzjKVhSR4Y+tC
# EPxkg7Rq7uOMgbph9oTS/2Kzh9LnP6kLt2qnS4igRHGuEBd58yD6fFNDv0LJsK/l
# B/d0WGHMKV0KMYOX24rkyfohVu37GhVRsiVSIlIiQVTC9JtYer7WxdnyoDaPKvY8
# dpbKgDrd59vAlsHrpj7ZubVQPcL3lXrLryimpDohMH6Ba+4wZq+7dKPpal97QOP2
# 3i7isUBTQiMOcVjW6GEiNcDLSJqj5DSgylhdFnaBsq/ThpC2PxWoXcCbV28QELzf
# 5+J+RXQavmeWKZMR0q98iBzWbrsVtaSxAkHHiwbUMMqQvkfY6Dpo5dMHWw==
# =WHE2
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 26 Jun 2023 10:24:34 AM CEST
# gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg: issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [undefined]
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [undefined]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* tag 'for-upstream' of https://gitlab.com/bonzini/qemu:
git-submodule.sh: allow running in validate mode without previous update
target/i386: implement SYSCALL/SYSRET in 32-bit emulators
target/i386: implement RDPID in TCG
target/i386: sysret and sysexit are privileged
target/i386: AMD only supports SYSENTER/SYSEXIT in 32-bit mode
target/i386: Intel only supports SYSCALL/SYSRET in long mode
target/i386: TCG supports WBNOINVD
target/i386: TCG supports XSAVEERPTR
target/i386: do not accept RDSEED if CPUID bit absent
target/i386: TCG supports RDSEED
target/i386: TCG supports 3DNow! prefetch(w)
target/i386: fix INVD vmexit
kvm: reuse per-vcpu stats fd to avoid vcpu interruption
hw/riscv: Validate cluster and NUMA node boundary
hw/arm: Validate cluster and NUMA node boundary
numa: Validate cluster and NUMA node boundary if required
hw/remote/proxy: Remove dubious 'event_notifier-posix.c' include
build: further refine build.ninja rules
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
AMD supports both 32-bit and 64-bit SYSCALL/SYSRET, but TCG only
exposes them for 64-bit targets. For system emulation just reuse the
helper; for user-mode emulation the ABI is the same as "int $0x80".
The BSDs do not support any fast system call mechanism in 32-bit
mode, so add to bsd-user the same stub that FreeBSD has for 64-bit
compatibility mode.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
RDPID corresponds to a RDMSR(TSC_AUX); however, it is unprivileged
so for user-mode emulation we must provide the value that the kernel
places in the MSR. For Linux, it is a combination of the current CPU
and the current NUMA node, both of which can be retrieved with getcpu(2).
Also try sched_getcpu(), which might be there on the BSDs. If there is
no portable way to retrieve the current CPU id from userspace, return 0.
RDTSCP is reimplemented as RDTSC + RDPID ECX; the differences in terms
of serializability are not relevant to QEMU.
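As a rough sketch of that lookup order (the feature-test macros and helper
name below are illustrative, not QEMU's; the (node << 12) | cpu packing is
the Linux convention for TSC_AUX):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdint.h>

    /* Illustrative only: derive a TSC_AUX-like value for user-mode RDPID. */
    static uint64_t rdpid_user_value(void)
    {
    #if defined(HAVE_GETCPU)                 /* hypothetical feature test */
        unsigned cpu = 0, node = 0;
        if (getcpu(&cpu, &node) == 0) {
            return ((uint64_t)node << 12) | cpu;   /* Linux packing of TSC_AUX */
        }
    #elif defined(HAVE_SCHED_GETCPU)         /* hypothetical: BSDs may have this */
        int cpu = sched_getcpu();
        if (cpu >= 0) {
            return cpu;
        }
    #endif
        return 0;                            /* no portable way: return 0 */
    }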
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
WBNOINVD is the same as INVD or WBINVD as far as TCG is concerned,
since there is no cache in TCG and therefore no invalidation side effect
in WBNOINVD.
With respect to SVM emulation, processors that do not support WBNOINVD
will ignore the prefix and treat it as WBINVD, while those that support
it will generate exactly the same vmexit.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
XSAVEERPTR is actually a fix for an errata; TCG does not have the issue.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
TCG implements RDSEED, and in fact uses qcrypto_random_bytes which is
secure enough to match hardware behavior. Expose it to guests.
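A minimal sketch of how such a helper can source its entropy
(qcrypto_random_bytes() is the existing crypto API; the helper name and
plumbing here are illustrative):

    #include "qemu/osdep.h"
    #include "crypto/random.h"

    /* Illustrative: return true and fill *val on success; the caller would
     * set CF accordingly, reporting "no seed available" on failure. */
    static bool rdseed_value(uint64_t *val)
    {
        return qcrypto_random_bytes(val, sizeof(*val), NULL) == 0;
    }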
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The AMD prefetch(w) instructions have not been deprecated together with
the rest of 3DNow!, and in fact are even supported by newer Intel
processors. Mark them as supported by TCG, as it supports all of 3DNow!.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Due to a typo or perhaps a brain fart, the INVD vmexit was never generated.
Fix it (but note that fixing just the typo would break both INVD and WBINVD,
due to a case of two wrongs making a right).
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Doorbells in SMT need to coordinate msgsnd/msgclr and DPDES access from
multiple threads that affect the same state.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
A relatively simple case to begin with: CTRL is an SMT shared register
where reads and writes need to synchronise against state changes by
other threads in the core.
Atomic serialisation operations are used to achieve this.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
TCG SMT emulation needs to know whether it is running with SMT siblings,
to be able to iterate over siblings in a core, and to serialise
threads to access per-core shared SPRs. Add infrastructure to do these
things.
For now the sibling iteration and serialisation are implemented in a
simple but inefficient way. SMT shared state and sibling access are not
too common, and SMT configurations are mainly useful to test system
code, so performance is not too critical.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
[ clg: fix build breakage with clang ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The top bits of the LEV field of the sc instruction are to be treated
as a reserved field rather than a reserved value, meaning LEV is
effectively the bottom bit. LEV=0xF should be treated as LEV=1 and be
a hypercall, for example.
This changes the instruction execution to just set lev from the low bit
of the field. Processors which don't support the LEV field will continue
to ignore it.
ISA v3.1 defines LEV to be 2 bits, in order to add the 'sc 2' ultracall
instruction. TCG does not support Ultravisor, so don't worry about
that bit.
Suggested-by: "Harsh Prateek Bora" <harshpb@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Writes to the CTRL register can set the bit in the RUN field, which is
reflected into the read-only TS field that contains the state of the
RUN field for all threads in the core.
TCG does not implement SMT, so the correct implementation just requires
mirroring the RUN bit into the first bit of the TS field.
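A minimal sketch of that mirroring (the bit positions are chosen for
illustration, not taken from the ISA, and this is not the actual patch):

    static void write_ctrl_sketch(CPUPPCState *env, target_ulong val)
    {
        /* Without SMT, thread 0 is the only thread whose run state TS has
         * to report, so mirror the written RUN bit into the first TS bit. */
        target_ulong run = val & 1;              /* RUN: assumed bit 0 */

        env->spr[SPR_CTRL] = run | (run << 8);   /* TS<0>: assumed bit 8 */
    }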
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
System call interrupts in ISA v3.1 CPUs add a LEV indication in SRR1
that corresponds with the LEV field of the instruction that caused the
interrupt.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The hypervisor emulation assistance interrupt modifies HEIR to
contain the value of the instruction which caused the exception.
Only TCG raises HEAI interrupts so this can be made TCG-only.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
ISA v3.1 introduced prefix instructions. Among the changes, various
synchronous interrupts report whether they were caused by a prefix
instruction in (H)SRR1.
The case of instruction fetch that causes an HDSI due to access of a
process-scoped table faulting on the partition scoped translation is the
tricky one. As with ISIs and HISIs, this does not try to set the prefix
bit because there is no instruction image to be loaded. The HDSI needs
the originating access type to be passed through to the handler to
distinguish this from HDSIs that fault translating process scoped tables
originating from a load or store instruction (in that case the prefix
bit should be provided).
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[ clg: checkpatch issues ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Rather than always performing partition scope page table translation
with access type of 0 (MMU_DATA_LOAD), pass through the processor
access type which first initiated the translation sequence. Process-
scoped page table loads are then set to MMU_DATA_LOAD access type in
the xlate function.
This will allow more information to be passed to the exception
handler in the next patch.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
powerpc ifetch endianness depends on MSR[LE] so it has to byteswap
after cpu_ldl_code(). This corrects DSISR bits in alignment
interrupts when running in little endian mode.
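Sketch of the idea (the surrounding context and exact accessors differ in
the real code):

    /* cpu_ldl_code() fetches with the target's default (big-endian) byte
     * order, so in little-endian mode the image must be swapped before the
     * DSISR bits are derived from it. */
    uint32_t insn = cpu_ldl_code(env, env->nip);

    if (env->msr & (1ULL << MSR_LE)) {
        insn = bswap32(insn);
    }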
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
When the Timer Control and Timer Status registers are modified, avoid
calling the KVM backend when it is not available.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Make sure each CPU gets its state set up for gdb, not just the ones
before PowerPCCPUClass has had its gdb state set up.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Brown bag time: store instead of load results in uninitialized temp.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1704
Reported-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620134659.817559-1-richard.henderson@linaro.org
Fixes: e6dd5e782b ("target/arm: Use tcg_gen_qemu_{ld, st}_i128 in gen_sve_{ld, st}r")
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
One cannot test for feature aa32_simd_r32 without first
testing if AArch32 mode is supported at all. This leads to
  qemu-system-aarch64: ARM CPUs must have both VFP-D32 and Neon or neither
for Apple M1 CPUs.
We already have a check for ARMv8-A never setting vfp-d32 true,
so restructure the code so that AArch64 avoids the test entirely.
Reported-by: Mads Ynddal <mads@ynddal.dk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Mads Ynddal <m.ynddal@samsung.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Mads Ynddal <m.ynddal@samsung.com>
Message-id: 20230619140216.402530-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add an x-rme cpu property to enable FEAT_RME.
Add an x-l0gptsz property to set GPCCR_EL3.L0GPTSZ,
for testing various possible configurations.
We're not currently completely sure whether FEAT_RME will
be OK to enable purely as a CPU-level property, or if it will
need board co-operation, so we're making these experimental
x- properties, so that the people developing the system
level software for RME can try to start using this and let
us know how it goes. The command line syntax for enabling
this will change in future, without backwards-compatibility.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-21-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Place the check at the end of get_phys_addr_with_struct,
so that we check all physical results.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Handle GPC Fault types in arm_deliver_fault, reporting as
either a GPC exception at EL3, or falling through to insn
or data aborts at various exception levels.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The function takes the fields as filled in by
the Arm ARM pseudocode for TakeGPCException.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This fixes a bug in which we failed to initialize
the result attributes properly after the memset.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Instead of passing this to get_phys_addr_lpae, stash it
in the S1Translate structure.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Do not provide a fast-path for physical addresses,
as those will need to be validated for GPC.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
While Root and Realm may read and write data from other spaces,
neither may execute from other pa spaces.
This happens for Stage1 EL3, EL2, EL2&0, and Stage2 EL1&0.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
With Realm security state, bit 55 of a block or page descriptor during
the stage2 walk becomes the NS bit; during the stage1 walk the bit 5
NS bit is RES0. With Root security state, bit 11 of the block or page
descriptor during the stage1 walk becomes the NSE bit.
Rather than collecting an NS bit and applying it later, compute the
output pa space from the input pa space and unconditionally assign.
This means that we no longer need to adjust the output space earlier
for the NSTable bit.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Test in_space instead of in_secure so that we don't
switch out of Root space.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add input and output space members to S1Translate. Set and adjust
them in S1_ptw_translate, and the various points at which we drop
secure state. Initialize the space in get_phys_addr; for now leave
get_phys_addr_with_secure considering only secure vs non-secure spaces.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This was added in 7e98e21c09 as part of a reorg in which
one of the arguments could have been legally NULL, and this caught
actual instances. Now that the reorg is complete, this
serves little purpose.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
With FEAT_RME, there are four physical address spaces.
For now, just define the symbols, and mention them in
the same spots as the other Phys indexes in ptw.c.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
It will be helpful to have ARMMMUIdx_Phys_* in the same
relative order as the ARMSecuritySpace enumerators. This requires
an adjustment to the nstable check. While there, check for being
in secure state rather than relying on clearing the low bit making
no change to non-secure state.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Introduce both the enumeration and functions to retrieve
the current state, and state outside of EL3.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This includes GPCCR, GPTBR, MFAR, the TLB flush insns PAALL, PAALLOS,
RPALOS, RPAOS, and the cache flush insns CIPAPA and CIGDPAPA.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
With RME, SEL2 must also be present to support secure state.
The NS bit is RES1 if SEL2 is not present.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Define the missing SCR and HCR bits, allow SCR_NSE and {SCR,HCR}_GPF
to be set, and invalidate TLBs when NSE changes.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add the missing field for ID_AA64PFR0, and the predicate.
Disable it if EL3 is forced off by the board or command-line.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
From ISA v1.6.1 onwards the bit position of ICR.IE changed.
ctx->icr_ie_offset contains the correct value for the ISA version used
by the vCPU. We also need to exit this tb here, as we might have enabled
interrupts.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-Id: <20230621142302.1648383-9-kbastian@mail.uni-paderborn.de>
The CPU can change the privilege level by writing the corresponding bits
in PSW. If this happens, all instructions after this 'mtcr' in the TB are
translated with the wrong privilege level. So we have to exit to
cpu_loop() and start translating again with the new privilege level.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-Id: <20230621142302.1648383-8-kbastian@mail.uni-paderborn.de>
This is so we can recognize exceptions after re-enabling interrupts.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reported-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-Id: <20230621142302.1648383-4-kbastian@mail.uni-paderborn.de>
This replaces all calls to tcg_gen_exit_tb() and moves them to
tricore_tb_stop().
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-Id: <20230621142302.1648383-3-kbastian@mail.uni-paderborn.de>
If A[r1] == A[11], then we would overwrite the destination address of
the jump with the return address.
Reported-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-Id: <20230621142302.1648383-2-kbastian@mail.uni-paderborn.de>
We always take the TRICORE_FEATURE_13 branch, as every CPU has
TRICORE_FEATURE_13. For CPUs with ISA > 1.3 we have to take the else
branch. We fix this by inverting the condition: we check for
TRICORE_FEATURE_131, which every CPU except TRICORE_FEATURE_13 CPUs
has.
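A sketch of the inverted check, assuming the translator's has_feature()
helper (the branch bodies are placeholders):

    if (has_feature(ctx, TRICORE_FEATURE_131)) {
        /* ISA 1.3.1 and newer behaviour */
    } else {
        /* plain 1.3 behaviour -- previously unreachable, because every CPU
         * also advertises TRICORE_FEATURE_13 */
    }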
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1700
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-Id: <20230612113245.56667-5-kbastian@mail.uni-paderborn.de>
Some insns were not checking whether an even index was used to access a
64-bit register. In the worst case that could lead to a buffer overflow,
as reported in https://gitlab.com/qemu-project/qemu/-/issues/1698.
Reported-by: Siqi Chen <coc.cyqh@gmail.com>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-Id: <20230612113245.56667-4-kbastian@mail.uni-paderborn.de>
We don't want to save PSW.CDC to the CSA, but PSW.CDE must be saved.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1699
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-Id: <20230612113245.56667-3-kbastian@mail.uni-paderborn.de>
When translating the "imask" instruction of the TriCore architecture, QEMU
did not check whether the register index was out of bounds, resulting in a
global buffer overflow.
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1698
Reported-by: Siqi Chen <coc.cyqh@gmail.com>
Signed-off-by: Siqi Chen <coc.cyqh@gmail.com>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-Id: <20230612065633.149152-1-coc.cyqh@gmail.com>
Message-Id: <20230612113245.56667-2-kbastian@mail.uni-paderborn.de>
This variant saves the 'IE' bit to a 'd' register. The 'IE' bitfield
changed from ISA version 1.6.1, so we add icr_ie_offset to DisasContext
as with the other DISABLE insn.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-Id: <20230614100039.1337971-9-kbastian@mail.uni-paderborn.de>
We also introduce the tc37x CPU that implements that ISA version.
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-Id: <20230614100039.1337971-2-kbastian@mail.uni-paderborn.de>
We use the user_ss[] array to hold the user emulation sources,
and the softmmu_ss[] array to hold the system emulation ones.
Hold the latter in the 'system_ss[]' array for parity with user
emulation.
Mechanical change doing:
$ sed -i -e s/softmmu_ss/system_ss/g $(git grep -l softmmu_ss)
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230613133347.82210-10-philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Since we *might* have user emulation with softmmu,
use the clearer 'CONFIG_SYSTEM_ONLY' key to check
for system emulation.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230613133347.82210-9-philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Since we *might* have user emulation with softmmu,
replace the system emulation check with a !user-emulation one.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20230613133347.82210-5-philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Since we *might* have user emulation with softmmu,
replace the system emulation check with a !user-emulation one.
Invert some if() ladders for clarity.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230613133347.82210-4-philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We don't build any user emulation target for Tricore,
only the system emulation. No need to check for it as
it is always defined.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-Id: <20230613133347.82210-3-philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Since cpu_mmu_index() is well-defined for user-only,
we can remove the surrounding #ifdef'ry entirely.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230613133347.82210-2-philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Convert the instructions in the load/store memory tags instruction
group to decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-21-peter.maydell@linaro.org
Convert the ASIMD load/store single structure insns to decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230602155223.2040685-20-peter.maydell@linaro.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Convert the instructions in the ASIMD load/store multiple structures
instruction classes to decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-19-peter.maydell@linaro.org
Convert the instructions in the LDAPR/STLR (unscaled immediate)
group to decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-18-peter.maydell@linaro.org
Convert the instructions in the load/store register (pointer
authentication) group to decodetree: LDRAA, LDRAB.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-17-peter.maydell@linaro.org
Convert the insns in the atomic memory operations group to
decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-16-peter.maydell@linaro.org
Convert the LDR and STR instructions which take a register
plus register offset to decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-15-peter.maydell@linaro.org
Convert the LDR and STR instructions which use a 12-bit immediate
offset to decodetree. We can reuse the existing LDR and STR
trans functions for these.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-14-peter.maydell@linaro.org
Convert the load and store instructions which use a 9-bit
immediate offset to decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-13-peter.maydell@linaro.org
Convert the "Load register (literal)" instruction class to
decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-11-peter.maydell@linaro.org
Convert the instructions in the load/store exclusive (STXR,
STLXR, LDXR, LDAXR) and load/store ordered (STLR, STLLR,
LDAR, LDLAR) to decodetree.
Note that for STLR, STLLR, LDAR, LDLAR this fixes an under-decoding
in the legacy decoder where we were not checking that the RES1 bits
in the Rs and Rt2 fields were set.
The new function ldst_iss_sf() is equivalent to the existing
disas_ldst_compute_iss_sf(), but it takes the pre-decoded 'ext' field
rather than taking an undecoded two-bit opc field and extracting
'ext' from it. Once all the loads and stores have been converted
to decodetree disas_ldst_compute_iss_sf() will be unused and
can be deleted.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-9-peter.maydell@linaro.org
Convert the exception generation instructions SVC, HVC, SMC, BRK and
HLT to decodetree.
The old decoder decoded the halting-debug insns DCPS1, DCPS2 and
DCPS3 just in order to then make them UNDEF; as with DRPS, we don't
bother to decode them, but document the patterns in a64.decode.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-8-peter.maydell@linaro.org
Convert MSR (reg), MRS, SYS, SYSL to decodetree. For QEMU these are
all essentially the same instruction (system register access).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-7-peter.maydell@linaro.org
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Convert the MSR (immediate) insn to decodetree. Our implementation
has basically no commonality between the different destinations,
so we decode the destination register in a64.decode.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-6-peter.maydell@linaro.org
Convert the CFINV, XAFLAG and AXFLAG insns to decodetree.
The old decoder handles these in handle_msr_i(), but
the architecture defines them as separate instructions
from MSR (immediate).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-5-peter.maydell@linaro.org
Convert the insns in the "Barriers" instruction class to
decodetree: CLREX, DSB, DMB, ISB and SB.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-4-peter.maydell@linaro.org
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Convert the various instructions in the hint instruction space
to decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-3-peter.maydell@linaro.org
In the recent refactoring we missed a few places which should be
calling finalize_memop_asimd() for ASIMD loads and stores but
instead are just calling finalize_memop(); fix these.
For the disas_ldst_single_struct() and disas_ldst_multiple_struct()
cases, this is not a behaviour change because there the size
is never MO_128 and the two finalize functions do the same thing.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
In disas_ldst_reg_imm9() we missed one place where a call to
a gen_mte_check* function should now be passed the memop we
have created rather than just being passed the size. Fix this.
Fixes: 0a9091424d ("target/arm: Pass memop to gen_mte_check1*")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
The LDG instruction loads the tag from a memory address (identified
by [Xn + offset]), and then merges that tag into the destination
register Xt. We implemented this correctly for the case when
allocation tags are enabled, but didn't get it right when ATA=0:
instead of merging the tag bits into Xt, we merged them into the
memory address [Xn + offset] and then set Xt to that.
Merge the tag bits into the old Xt value, as they should be.
Cc: qemu-stable@nongnu.org
Fixes: c15294c1e3 ("target/arm: Implement LDG, STG, ST2G instructions")
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The atomic memory operations are supposed to return the old memory
data value in the destination register. This value is not
sign-extended, even if the operation is the signed minimum or
maximum. (In the pseudocode for the instructions the returned data
value is passed to ZeroExtend() to create the value in the register.)
We got this wrong because we were doing a 32-to-64 zero extend on the
result for 8 and 16 bit data values, rather than the correct amount
of zero extension.
Fix the bug by using ext8u and ext16u for the MO_8 and MO_16 data
sizes rather than ext32u.
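A sketch of the corrected extension, assuming 'memop' holds the access size
and 'tcg_rt'/'ret' are the destination register and loaded-old-value temps
(the variable names are illustrative):

    switch (memop & MO_SIZE) {
    case MO_8:
        tcg_gen_ext8u_i64(tcg_rt, ret);    /* previously a 32->64 extend */
        break;
    case MO_16:
        tcg_gen_ext16u_i64(tcg_rt, ret);   /* previously a 32->64 extend */
        break;
    case MO_32:
        tcg_gen_ext32u_i64(tcg_rt, ret);
        break;
    default:
        tcg_gen_mov_i64(tcg_rt, ret);      /* 64-bit: nothing to extend */
        break;
    }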
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-2-peter.maydell@linaro.org
Merge tag 'pull-loongarch-20230616' of https://gitlab.com/gaosong/qemu into staging
pull-loongarch-20230616
# -----BEGIN PGP SIGNATURE-----
#
# iLMEAAEIAB0WIQS4/x2g0v3LLaCcbCxAov/yOSY+3wUCZIwysgAKCRBAov/yOSY+
# 39FYA/465KtY2jDt4xG6AdwZDHckfxZQWlrfhyZvtapOkUG4AprOBV2nSS/ukyD4
# V8bg2/6cLS0GRKfDsqA3DcxSASWCAggIU4fTSj+DlYOZhNUIq14qzwqciZnO5CIH
# QDczSqu2LKRdP9j6MCtzIaZq/8pPDcOlgm7Dyct/kDo/64E2sg==
# =rD4j
# -----END PGP SIGNATURE-----
# gpg: Signature made Fri 16 Jun 2023 12:00:18 PM CEST
# gpg: using RSA key B8FF1DA0D2FDCB2DA09C6C2C40A2FFF239263EDF
# gpg: Good signature from "Song Gao <m17746591750@163.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: B8FF 1DA0 D2FD CB2D A09C 6C2C 40A2 FFF2 3926 3EDF
* tag 'pull-loongarch-20230616' of https://gitlab.com/gaosong/qemu:
target/loongarch: Fix CSR.DMW0-3.VSEG check
hw/loongarch: Supplement cpu topology arguments
hw/loongarch: Add numa support
hw/intc: Set physical cpuid route for LoongArch ipi device
hw/loongarch/virt: Add cpu arch_id support
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The previous code checks whether the highest 16 bits of the virtual address
equal those of CSR.DMW0-3. This is incorrect according to the spec,
and is corrected to compare only the highest four bits instead.
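A sketch of the corrected comparison (only the direction of the fix --
compare VA[63:60] rather than VA[63:48] -- comes from the description above;
the helper itself is illustrative):

    /* A direct-mapped window matches when the top four bits of the virtual
     * address equal the window's VSEG field. */
    static bool dmw_va_match(uint64_t va, uint64_t vseg)
    {
        return (va >> 60) == (vseg & 0xf);
    }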
Signed-off-by: Jiajie Chen <c@jia.je>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20230614065556.2397513-1-c@jia.je>
Signed-off-by: Song Gao <gaosong@loongson.cn>
The LoongArch ipi device uses the physical cpuid to route to different
vcpus rather than the logical cpuid, and the physical cpuid is the same
as the cpuid in the acpi dsdt and srat tables.
Reviewed-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20230613120552.2471420-3-zhaotianrui@loongson.cn>
Cortex A7 CPUs with an FPU implementing VFPv4 without NEON support
have 16 64-bit FPU registers and not 32 registers. Let users set the
number of VFP registers with a CPU property.
The primary use case of this property is for the Cortex A7 of the
Aspeed AST2600 SoC.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Merge tag 'pull-riscv-to-apply-20230614' of https://github.com/alistair23/qemu into staging
Second RISC-V PR for 8.1
* Skip Vector set tail when vta is zero
* Move zc* out of the experimental properties
* Mask the implicitly enabled extensions in isa_string based on priv version
* Rework CPU extension validation and validate MISA changes
* Fixup PMP TLB cacheing errors
* Writing to pmpaddr and MML/MMWP correctly triggers TLB flushes
* Fixup PMP bypass checks
* Deny access if access is partially inside a PMP entry
* Correct OpenTitanState parent type/size
* Fix QEMU crash when NUMA nodes exceed available CPUs
* Fix pointer mask transformation for vector address
* Updates and improvements for Smstateen
* Support disas for Zcm* extensions
* Support disas for Z*inx extensions
* Remove unused decomp_rv32/64 value for vector instructions
* Enable PC-relative translation
* Assume M-mode FW in pflash0 only when "-bios none"
* Support using pflash via -blockdev option
* Add vector registers to log
* Clean up reference of Vector MTYPE
* Remove the check for extra Vector tail elements
* Smepmp: Return error when access permission not allowed in PMP
* Fixes for smsiaddrcfg and smsiaddrcfgh in AIA
# -----BEGIN PGP SIGNATURE-----
#
# iQIzBAABCAAdFiEEaukCtqfKh31tZZKWr3yVEwxTgBMFAmSJFRoACgkQr3yVEwxT
# gBMUkg/8Cuhqpx+zy7MeouVkyhEjUuhtCWyr0WVZBJzDkVEOrlY6TyR0hb5/o1Js
# LZf6ZMF6JQDN78bmUct8yFBZBGafey5tyonDCsnD7CNQuLPf2NSjTHhu9n5hKFqF
# F8Mpn9iFu6k1pr0iF7FbCccVWuDb3P4h2PaM0iFhmf4uz42BCMYdgJThhvv38xlt
# jr6A3dcjTpp8yB+iRCuhL2IU2XVee0XBiDUECqRXd0gmtOtqJNST8L+l8YkLy1VO
# WUMe8RCO6NMP7BLJ383WwCDeiFTo0mJebZQ0eR/G1xEhy7c8BBMh/CgQmq2F3wDZ
# Q0biaeozADgAaCC7aOAHI+1sAoMhOm1v2WhIVmh+XXUqT9856cKwc7DUPBmzb9Sj
# N5Zh+t9WCnZG7qpfxvkDF0Y/aRODMHZ1BW5L/ky9yBtyuRwXOJ6VycZTFyRkSwnN
# Gd/s9IClDOP1IP5s4TSMGGdelk4lH97x7fZE/2hxn59lp761JtMxbaEceBtqaBh8
# zNMTNN/KHs8LeiIBI2ZZ+nQav452Y6XYBivQ7OdsI8xkjnjG9gfgXXjvX1TIh0ow
# Hy5ZxtAtjXty49Gmjkx5VcBx4auJcnRDlLTzoZjTxq1te+gEWpw6O1EsEKasVLZe
# uN6PxTOxS3nHvRvPgQc1xNUdhDRqBaYsju6b9YmMxz1uefAjGM0=
# =fOTc
# -----END PGP SIGNATURE-----
# gpg: Signature made Wed 14 Jun 2023 03:17:14 AM CEST
# gpg: using RSA key 6AE902B6A7CA877D6D659296AF7C95130C538013
# gpg: Good signature from "Alistair Francis <alistair@alistair23.me>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 6AE9 02B6 A7CA 877D 6D65 9296 AF7C 9513 0C53 8013
* tag 'pull-riscv-to-apply-20230614' of https://github.com/alistair23/qemu: (60 commits)
hw/intc: If mmsiaddrcfgh.L == 1, smsiaddrcfg and smsiaddrcfgh are read-only.
target/riscv: Smepmp: Return error when access permission not allowed in PMP
target/riscv/vector_helper.c: Remove the check for extra tail elements
target/riscv/vector_helper.c: clean up reference of MTYPE
target/riscv: Fix initialized value for cur_pmmask
util/log: Add vector registers to log
docs/system: riscv: Add pflash usage details
riscv/virt: Support using pflash via -blockdev option
hw/riscv: virt: Assume M-mode FW in pflash0 only when "-bios none"
target/riscv: Remove pc_succ_insn from DisasContext
target/riscv: Enable PC-relative translation
target/riscv: Use true diff for gen_pc_plus_diff
target/riscv: Change gen_set_pc_imm to gen_update_pc
target/riscv: Change gen_goto_tb to work on displacements
target/riscv: Introduce cur_insn_len into DisasContext
target/riscv: Fix target address to update badaddr
disas/riscv.c: Remove redundant parentheses
disas/riscv.c: Fix lines with over 80 characters
disas/riscv.c: Remove unused decomp_rv32/64 value for vector instructions
disas/riscv.c: Support disas for Z*inx extensions
...
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Since commit 139c1837db ("meson: rename included C source files
to .c.inc"), QEMU standard procedure for included C files is to
use *.c.inc.
Besides, since commit 6a0057aa22 ("docs/devel: make a statement
about includes") this is documented as the Coding Style:
If you do use template header files they should be named with
the ``.c.inc`` or ``.h.inc`` suffix to make it clear they are
being included for expansion.
Therefore move the included templates in the tcg/ directory and
rename as '.h.inc'.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230608133108.72655-5-philmd@linaro.org>
Move the #ifdef'ry inside do_cpu_init() instead of
declaring an empty stub for user emulation.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230602224628.59546-3-philmd@linaro.org>
Since commit 604664726f ("target/i386: Restrict cpu_exec_interrupt()
handler to sysemu"), do_cpu_sipi() isn't called anymore on user
emulation. Remove the now pointless stub.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230602224628.59546-2-philmd@linaro.org>
int_helper.c only contains system emulation code:
remove the #ifdef'ry and move the file to the meson
softmmu source set.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230602223016.58647-1-philmd@linaro.org>
On an address match, skip checking for default permissions and return an
error based on the access defined in the PMP configuration.
v3 Changes:
o Removed explicit return of boolean value from comparison
  of priv/allowed_priv
v2 Changes:
o Removed goto to return in place when address matches
o Call pmp_hart_has_privs_default at the end of the loop
Fixes: 90b1fafce0 ("target/riscv: Smepmp: Skip applying default rules when address matches")
Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Message-Id: <20230605164548.715336-1-hchauhan@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Commit 752614cab8 ("target/riscv: rvv: Add tail agnostic for vector
load / store instructions") added an extra check for LMUL fragmentation,
intended for setting the "rest tail elements" in the last register for a
segment load insn.
Actually, the max_elements derived in vext_ld*() won't be a fraction of
vector register size, since the lmul encoded in desc is emul, which has
already been adjusted to 1 for LMUL fragmentation case by vext_get_emul()
in trans_rvv.c.inc, for ld_stride(), ld_us(), ld_index() and ldff().
Besides, vext_get_emul() has also taken EEW/SEW into consideration, so there
is no need to call vext_get_total_elems(), which would use the emul to derive
another emul; the second emul would be incorrect when esz differs from sew.
Thus this patch removes the check for extra tail elements.
Fixes: 752614cab8 ("target/riscv: rvv: Add tail agnostic for vector load / store instructions")
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Message-Id: <20230607091646.4049428-1-xiao.w.wang@intel.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
There's no code using MTYPE, which was a concept used in the older vector
implementation.
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-Id: <20230608053517.4102648-1-xiao.w.wang@intel.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
We initialize cur_pmmask as -1 (UINT32_MAX/UINT64_MAX) and regard it
as if the pointer mask is disabled in the current implementation. However,
the addresses for vector load/store will be adjusted to zero in this
case, and -1 (UINT32_MAX/UINT64_MAX) is a valid value for pmmask when
the pointer mask is enabled.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-Id: <20230610094651.43786-1-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
pc_succ_insn is no longer useful after the introduction of cur_insn_len,
and all pc-related values use a difference instead of an absolute value.
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230526072124.298466-8-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Add a base pc_save for PC-relative translation (CF_PCREL).
Disable directly syncing pc from the tb via riscv_cpu_synchronize_from_tb.
Use gen_pc_plus_diff to get the pc-relative address.
Enable CF_PCREL in system mode.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230526072124.298466-7-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reduce reliance on absolute values by using true pc difference for
gen_pc_plus_diff() to prepare for PC-relative translation.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230526072124.298466-6-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reduce reliance on absolute values (by passing the pc difference) to
prepare for PC-relative translation.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230526072124.298466-5-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reduce reliance on absolute values to prepare for PC-relative translation.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230526072124.298466-4-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Use cur_insn_len to store the length of the current instruction to
prepare for PC-relative translation.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230526072124.298466-3-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Compute the target address before storing it into badaddr
when a misaligned exception is triggered.
Use a target_pc temp to store the target address, to avoid
the confusing sequence of updating the target address into
cpu_pc before the misalignment check, then updating it into badaddr
and restoring cpu_pc to the current pc if the exception is triggered.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230526072124.298466-2-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Pass RISCVCPUConfig as disassemble_info.target_info to support disas
of conflicting instructions related to specific extensions.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230523093539.203909-4-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Split RISCVCPUConfig declarations to prepare for passing it to disas.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-Id: <20230523093539.203909-3-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Add knobs to allow users to enable smstateen and also export it via the
ISA extension string.
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230518175058.2772506-4-mchitale@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
When misa.F is 0, the tb->flags.FS field is unused and can be used to save
the current state of the smstateen0.FCSR check, which is needed by the
floating point translation routines.
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Message-Id: <20230518175058.2772506-3-mchitale@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Implement the s/h/mstateen.fcsr bit as defined in the smstateen spec
and check for it when accessing the fcsr register and its fields.
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230518175058.2772506-2-mchitale@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
write_mstatus() can only change the current xl when in debug mode,
and we need to update cur_pmmask/base in this case.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-Id: <20230524015933.17349-3-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
An access will fail if it is only partially inside the PMP entry.
However, only setting ret = false doesn't really signal a pmp violation,
since pmp_hart_has_privs_default() may return true at the end of
pmp_hart_has_privs().
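Roughly, the intended behaviour (pmp_is_in_range() follows the existing
pmp.c helper, but this is a sketch rather than the actual patch):

    /* If the start of the access lies inside PMP entry i but the end does
     * not (or vice versa), deny the access outright instead of falling
     * through to pmp_hart_has_privs_default(). */
    bool start_in = pmp_is_in_range(env, i, addr);
    bool end_in   = pmp_is_in_range(env, i, addr + size - 1);

    if (start_in != end_in) {
        return false;   /* partial overlap: access denied */
    }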
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517091519.34439-13-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Use pmp_update_rule_addr() and pmp_update_rule_nums() separately to
update the rule nums only once for each pmpcfg_csr_write. Then remove
pmp_update_rule() since it becomes unused.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517091519.34439-12-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The TLB needn't be flushed when pmpcfg/pmpaddr don't change.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-Id: <20230517091519.34439-11-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The TLB should be flushed not only for pmpcfg CSR changes, but also for
pmpaddr CSR changes.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-Id: <20230517091519.34439-10-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Currently only the rule addr at the same index as the written pmpaddr is
updated when a pmpaddr CSR is modified. However, the rule addr of the next PMP
entry may also be affected if its A field is PMP_AMATCH_TOR, so we should
update it as well in this case.
A write to a pmpaddr CSR does not affect the rule nums, so we need not call
pmp_update_rule_nums() in pmpaddr_csr_write().
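A minimal sketch of the idea, again with simplified stand-in types (the
A-field encoding shown is illustrative):
=======
#include <stdint.h>

#define MAX_RISCV_PMPS  16
#define PMP_AMATCH_TOR  1                     /* A-field value for TOR */

struct pmp_state {
    uint8_t  cfg[MAX_RISCV_PMPS];
    uint64_t addr[MAX_RISCV_PMPS];
};

static int pmp_cfg_a_field(uint8_t cfg) { return (cfg >> 3) & 0x3; }
static void pmp_update_rule_addr(struct pmp_state *s, int i) { (void)s; (void)i; }

static void pmpaddr_csr_write_sketch(struct pmp_state *s, int i, uint64_t val)
{
    s->addr[i] = val;
    pmp_update_rule_addr(s, i);
    /* a TOR entry takes its lower bound from the previous pmpaddr, so the
     * next entry must be recomputed as well when it is configured as TOR */
    if (i + 1 < MAX_RISCV_PMPS &&
        pmp_cfg_a_field(s->cfg[i + 1]) == PMP_AMATCH_TOR) {
        pmp_update_rule_addr(s, i + 1);
    }
    /* no pmp_update_rule_nums() here: a pmpaddr write cannot change the
     * number of rules */
}
=======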
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517091519.34439-9-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The MMWP and MML bits may affect the allowed privs of PMP entries and the
default privs, both of which may change the allowed privs of existing TLB
entries. So we need to flush the TLB when they are changed.
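As a sketch of the trigger condition (the bit positions are assumptions for
illustration, not quoted from the patch):
=======
#include <stdbool.h>
#include <stdint.h>

#define MSECCFG_MML  (1u << 0)                /* assumed bit positions */
#define MSECCFG_MMWP (1u << 1)

/* returns true when a write to mseccfg requires a TLB flush */
static bool mseccfg_write_needs_tlb_flush(uint32_t old_val, uint32_t new_val)
{
    /* MML/MMWP change the default privs and the allowed privs of PMP entries,
     * so any change can invalidate privileges cached in existing TLB entries */
    return ((old_val ^ new_val) & (MSECCFG_MML | MSECCFG_MMWP)) != 0;
}
=======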
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517091519.34439-8-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The addr and size parameters in pmp_hart_has_privs_default() are unused.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517091519.34439-7-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The RLB/MML/MMWP bits in the mseccfg CSR are introduced by the Smepmp
extension, so they can only be writable and set to 1 when cfg.epmp is true.
Then we also needn't check epmp in pmp_hart_has_privs_default().
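A sketch of the resulting write masking; the bit names and positions are
stand-ins for the Smepmp fields:
=======
#include <stdbool.h>
#include <stdint.h>

#define MSECCFG_MML  (1u << 0)                /* assumed bit positions */
#define MSECCFG_MMWP (1u << 1)
#define MSECCFG_RLB  (1u << 2)

static uint32_t mseccfg_mask_without_epmp(uint32_t val, bool have_epmp)
{
    if (!have_epmp) {
        /* no Smepmp: RLB/MML/MMWP stay 0 and writes to them are ignored */
        val &= ~(MSECCFG_RLB | MSECCFG_MML | MSECCFG_MMWP);
    }
    return val;
}
=======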
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517091519.34439-6-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
We no longer need the pmp_index of the matched PMP entry.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517091519.34439-5-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Return the result directly as a shortcut, since we need not do the subsequent
checks on the PMP entries if there are no PMP rules.
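A minimal sketch of the shortcut, with the signature reduced to the two inputs
that matter here:
=======
#include <stdbool.h>

static bool pmp_hart_has_privs_sketch(int num_rules, bool default_allows)
{
    if (num_rules == 0) {
        /* no PMP rules configured: the default policy alone decides, so
         * return right away instead of scanning an empty entry list */
        return default_allows;
    }
    /* ... otherwise walk the PMP entries as before ... */
    return default_allows;
}
=======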
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517091519.34439-4-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
pmp_get_tlb_size can be separated from get_physical_address_pmp and is only
needed when ret == TRANSLATE_SUCCESS.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517091519.34439-3-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
PMP entries up to and including the matched one may cover only part of the
TLB page, which can split the page into regions with different permissions.
For example, with PMP0 (0x80000008~0x8000000F, R) and PMP1 (0x80000000~
0x80000FFF, RWX), a write access to 0x80000000 will match PMP1. However, we
cannot cache the translation result in the TLB, since that would let a write
access to 0x80000008 bypass the check of PMP0. So we should check all of these
entries, not just the matched one, in pmp_get_tlb_size(), and set tlb_size to
1 in this case.
Set tlb_size to TARGET_PAGE_SIZE if PMP is not supported or there are no PMP
rules.
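A simplified model of the reworked pmp_get_tlb_size() behavior described
above; the rule representation and checks are stand-ins, not the actual QEMU
code:
=======
#include <stdbool.h>
#include <stdint.h>

#define TARGET_PAGE_SIZE 4096

struct pmp_rule { uint64_t sa, ea; };         /* one rule, reduced to its range */

static uint64_t pmp_get_tlb_size_sketch(const struct pmp_rule *rules, int n,
                                        uint64_t addr)
{
    uint64_t pg_start = addr & ~(uint64_t)(TARGET_PAGE_SIZE - 1);
    uint64_t pg_end = pg_start + TARGET_PAGE_SIZE - 1;

    if (n == 0) {
        return TARGET_PAGE_SIZE;   /* PMP absent or no rules: cache the page */
    }
    for (int i = 0; i < n; i++) {
        bool overlaps = rules[i].ea >= pg_start && rules[i].sa <= pg_end;
        bool covers   = rules[i].sa <= pg_start && rules[i].ea >= pg_end;
        if (overlaps && !covers) {
            return 1;              /* rule splits the page: force per-access checks */
        }
    }
    return TARGET_PAGE_SIZE;
}
=======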
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517091519.34439-2-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
write_misa() must use as much common logic as possible. We want to open
code just the bits that are exclusive to the CSR write operation and TCG
internals.
Our validation is done with riscv_cpu_validate_set_extensions(), but we
need a small tweak first. When enabling RVG we're doing:
env->misa_ext |= RVI | RVM | RVA | RVF | RVD;
env->misa_ext_mask = env->misa_ext;
This works fine at realize() time, but it can potentially overwrite
env->misa_ext_mask if we reuse the function for write_misa().
Instead of doing misa_ext_mask = misa_ext, sum up the RVG extensions in
misa_ext_mask as well. This won't change realize() time behavior
(misa_ext_mask will be == misa_ext) and will ensure that write_misa()
won't change misa_ext_mask by accident.
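In other words, the tweak amounts to something like this (a sketch of the
intent, not the verbatim patch):
env->misa_ext |= RVI | RVM | RVA | RVF | RVD;
env->misa_ext_mask |= RVI | RVM | RVA | RVF | RVD;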
After that, rewrite write_misa() to work as follows:
- mask the write using misa_ext_mask to avoid enabling unsupported
extensions;
- suppress RVC if the next insn isn't aligned;
- disable RVG if any of RVG dependencies are being disabled by the user;
- assign env->misa_ext and run riscv_cpu_validate_set_extensions(). On
error, rollback env->misa_ext to its original value, logging a
GUEST_ERROR to inform the user about the failed write;
- handle RVF and MSTATUS_FS and continue as usual.
Let's keep write_misa() as experimental for now until this logic gains
enough mileage.
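A compact sketch of that control flow; the names, types and the validation
stub are placeholders, not the actual QEMU implementation:
=======
#include <stdbool.h>
#include <stdint.h>

/* misa bit positions follow the usual "bit = letter - 'A'" rule */
enum {
    RVA = 1u << 0,  RVC = 1u << 2,  RVD = 1u << 3,  RVF = 1u << 5,
    RVG = 1u << 6,  RVI = 1u << 8,  RVM = 1u << 12,
};

/* stand-in for riscv_cpu_validate_set_extensions() */
static bool validate_set_extensions(uint32_t misa_ext) { return misa_ext != 0; }

static int write_misa_sketch(uint32_t *misa_ext, uint32_t misa_ext_mask,
                             uint32_t val, bool next_insn_unaligned)
{
    uint32_t orig = *misa_ext;
    uint32_t rvg_deps = RVI | RVM | RVA | RVF | RVD;

    val &= misa_ext_mask;                     /* drop unsupported extensions */
    if (!(val & RVC) && next_insn_unaligned) {
        val |= RVC;                           /* suppress the RVC change */
    }
    if ((val & rvg_deps) != rvg_deps) {
        val &= ~RVG;                          /* an RVG dependency was disabled */
    }

    *misa_ext = val;
    if (!validate_set_extensions(*misa_ext)) {
        *misa_ext = orig;                     /* rollback + log a GUEST_ERROR */
        return -1;
    }
    /* RVF / MSTATUS_FS handling continues as usual */
    return 0;
}
=======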
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-Id: <20230517135714.211809-12-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
We have 4 config settings being done in riscv_cpu_init(): ext_ifencei,
ext_icsr, mmu and pmp. This is also the constructor of the "riscv-cpu"
device, which happens to be the parent device of every RISC-V cpu.
The result is that these 4 configs are being set every time, and every
other CPU should always account for them. CPUs such as sifive_e need to
disable settings that aren't enabled simply because the parent class
happens to enable them.
Moving all configurations from the parent class to each CPU will
centralize the config of each CPU into its own init(), which is clearer
than having to account for whatever happens to be set in the parent
device. These settings are also being set in register_cpu_props() when
no 'misa_ext' is set, so for these CPUs we don't need changes. Named
CPUs will receive all cfgs that the parent was setting into their
init().
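As a sketch of what a named CPU's init() ends up looking like after the move;
the types are stand-ins and the sifive_e values shown are an assumption for
illustration:
=======
#include <stdbool.h>

struct riscv_cpu_cfg { bool ext_ifencei, ext_icsr, mmu, pmp; };
struct riscv_cpu { struct riscv_cpu_cfg cfg; };

static void rv32_sifive_e_cpu_init_sketch(struct riscv_cpu *cpu)
{
    /* everything the parent class used to set is now stated here explicitly */
    cpu->cfg.ext_ifencei = true;
    cpu->cfg.ext_icsr = true;
    cpu->cfg.mmu = false;     /* this core has no MMU */
    cpu->cfg.pmp = true;
}
=======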
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517135714.211809-11-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
There is no need to init timers if we're not even sure that our
extensions are valid. Execute riscv_cpu_validate_set_extensions() before
riscv_timer_init().
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517135714.211809-10-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Let's remove more code that is open-coded in riscv_cpu_realize() and put it
into a helper. Let's also add an error message instead of just
asserting out if env->misa_mxl_max != env->misa_mxl.
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517135714.211809-9-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
We're doing env->priv_spec validation and assignment at the start of
riscv_cpu_realize(), which is fine, but then we're doing a force disable
on extensions that aren't compatible with the priv version.
This second step is being done too early. The disabled extensions might be
re-enabled in riscv_cpu_validate_set_extensions() by accident. A
better place to put this code is at the end of
riscv_cpu_validate_set_extensions() after all the validations are
completed.
Add a new helper, riscv_cpu_disable_priv_spec_isa_exts(), to disable the
extensions after the validation is done. While we're at it, create a
riscv_cpu_validate_priv_spec() helper to host all env->priv_spec related
validation to unclog riscv_cpu_realize a bit.
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517135714.211809-8-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Even though Zca/Zcf/Zcd can be implied by C/F/D, their priv version is higher
than the priv version of C/F/D. So if we check only for them instead of
checking for C/F/D, it will trigger a new problem when we try to disable the
extensions based on the configured priv version.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517135714.211809-7-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Using implicitly enabled extensions such as Zca/Zcf/Zcd instead of their
super extensions can simplify the extension-related checks. However, they may
have a higher priv version than their super extensions. So we should mask them
out of the isa_string based on the priv version, making them invisible to the
user when the specified priv version is lower than their minimum priv version.
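A minimal sketch of the masking rule when building the ISA string; the data
layout is a stand-in for the extension table, not the actual QEMU structures:
=======
#include <stdbool.h>
#include <stdio.h>

struct isa_ext_data {
    const char *name;
    int min_priv_ver;
    bool enabled;
};

static void append_isa_exts(const struct isa_ext_data *ext, int n,
                            int cpu_priv_ver)
{
    for (int i = 0; i < n; i++) {
        /* hide extensions (e.g. Zca/Zcf/Zcd) whose minimum priv version is
         * newer than the one this CPU is configured with */
        if (ext[i].enabled && cpu_priv_ver >= ext[i].min_priv_ver) {
            printf("_%s", ext[i].name);
        }
    }
}
=======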
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517135714.211809-6-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
All these generic CPUs are using the latest priv available, at this
moment PRIV_VERSION_1_12_0:
- riscv_any_cpu_init()
- rv32_base_cpu_init()
- rv64_base_cpu_init()
- rv128_base_cpu_init()
Create a new PRIV_VERSION_LATEST enum value and use it in those cases. It'll
make it easier to update everything at once when a new priv version is
available.
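A sketch of the idea; the surrounding enum values are assumptions for
illustration:
=======
typedef enum {
    PRIV_VERSION_1_10_0 = 0,
    PRIV_VERSION_1_11_0,
    PRIV_VERSION_1_12_0,

    PRIV_VERSION_LATEST = PRIV_VERSION_1_12_0,
} RISCVPrivVersionSketch;
=======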
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517135714.211809-5-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The setter is doing nothing special. Just set env->priv_ver directly.
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517135714.211809-4-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
This setter is doing nothing else but setting env->vext_ver. Assign the
value directly.
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517135714.211809-3-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The RVV verification will error out if it fails, and it's being done at the
end of riscv_cpu_validate_set_extensions(), after we've already set some
extensions that are dependent on RVV. Let's put it in its own function
and do it earlier.
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517135714.211809-2-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Zc* extensions (version 1.0) are ratified.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-Id: <20230510030040.20528-1-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The function is a no-op if 'vta' is zero, but we're still doing a lot of work
in it regardless: vext_set_elems_1s() does nothing every single time (since
vta is zero) and we just waste time.
Skip it altogether in this case (a sketch of the early exit follows the timing
numbers below). Aside from the code simplification, there's a noticeable
emulation performance gain from doing it. For a regular C binary that does a
vector operation like this:
=======
#include <stdlib.h>

#define SZ 10000000

int main(void)
{
    int *a = malloc(SZ * sizeof(int));
    int *b = malloc(SZ * sizeof(int));
    int *c = malloc(SZ * sizeof(int));

    for (int i = 0; i < SZ; i++)
        c[i] = a[i] + b[i];

    return c[SZ - 1];
}
=======
Emulating it with qemu-riscv64 and RVV takes ~0.3 sec:
$ time ~/work/qemu/build/qemu-riscv64 \
-cpu rv64,debug=false,vext_spec=v1.0,v=true,vlen=128 ./foo.out
real 0m0.303s
user 0m0.281s
sys 0m0.023s
With this skip we take ~0.275 sec:
$ time ~/work/qemu/build/qemu-riscv64 \
-cpu rv64,debug=false,vext_spec=v1.0,v=true,vlen=128 ./foo.out
real 0m0.274s
user 0m0.252s
sys 0m0.019s
This performance gain adds up fast when executing heavy benchmarks like
SPEC.
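The skip itself amounts to an early return when 'vta' is zero; a minimal
sketch with simplified parameters, not the actual helper signature:
=======
#include <stdint.h>

static void vext_set_tail_elems_1s_sketch(uint32_t vta, int64_t *vd,
                                          int tail_start, int total_elems)
{
    if (vta == 0) {
        return;            /* tail-undisturbed: leave the destination alone */
    }
    for (int i = tail_start; i < total_elems; i++) {
        vd[i] = -1;        /* tail-agnostic: fill tail elements with all 1s */
    }
}
=======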
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Palmer Dabbelt <palmer@rivosinc.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Message-Id: <20230427205708.246679-2-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
This queue includes several assorted fixes for target/ppc emulation and
XIVE2. It also includes an openpic fix, an avocado fix for ppc64
binaries without slirp and a Kconfig change for MAC_NEWWORLD.
-----BEGIN PGP SIGNATURE-----
iIwEABYKADQWIQQX6/+ZI9AYAK8oOBk82cqW3gMxZAUCZIR6uhYcZGFuaWVsaGI0
MTNAZ21haWwuY29tAAoJEDzZypbeAzFksQsA/jucd+qsZ9mmJ9SYVd4umMnC/4bC
i4CHo/XcHb0DzyBXAQCLxMA+KSTkP+yKv3edra4I5K9qjTW1H+pEOWamh1lvDw==
=EezE
-----END PGP SIGNATURE-----
Merge tag 'pull-ppc-20230610' of https://gitlab.com/danielhb/qemu into staging
ppc patch queue for 2023-06-10:
This queue includes several assorted fixes for target/ppc emulation and
XIVE2. It also includes an openpic fix, an avocado fix for ppc64
binaries without slirp and a Kconfig change for MAC_NEWWORLD.
# -----BEGIN PGP SIGNATURE-----
#
# iIwEABYKADQWIQQX6/+ZI9AYAK8oOBk82cqW3gMxZAUCZIR6uhYcZGFuaWVsaGI0
# MTNAZ21haWwuY29tAAoJEDzZypbeAzFksQsA/jucd+qsZ9mmJ9SYVd4umMnC/4bC
# i4CHo/XcHb0DzyBXAQCLxMA+KSTkP+yKv3edra4I5K9qjTW1H+pEOWamh1lvDw==
# =EezE
# -----END PGP SIGNATURE-----
# gpg: Signature made Sat 10 Jun 2023 06:29:30 AM PDT
# gpg: using EDDSA key 17EBFF9923D01800AF2838193CD9CA96DE033164
# gpg: issuer "danielhb413@gmail.com"
# gpg: Good signature from "Daniel Henrique Barboza <danielhb413@gmail.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 17EB FF99 23D0 1800 AF28 3819 3CD9 CA96 DE03 3164
* tag 'pull-ppc-20230610' of https://gitlab.com/danielhb/qemu: (29 commits)
hw/ppc/Kconfig: MAC_NEWWORLD should always select USB_OHCI_PCI
target/ppc: Implement gathering irq statistics
tests/avocado/tuxrun_baselines: Fix ppc64 tests for binaries without slirp
hw/ppc/openpic: Do not open-code ROUND_UP() macro
target/ppc: Decrementer fix BookE semantics
target/ppc: Fix decrementer time underflow and infinite timer loop
target/ppc: Rework store conditional to avoid branch
target/ppc: Remove larx/stcx. memory barrier semantics
target/ppc: Ensure stcx size matches larx
target/ppc: Fix lqarx to set cpu_reserve
target/ppc: Eliminate goto in mmubooke_check_tlb()
target/ppc: Change ppcemb_tlb_check() to return bool
target/ppc: Simplify ppcemb_tlb_search()
target/ppc: Remove some unneded line breaks
target/ppc: Move ppcemb_tlb_search() to mmu_common.c
target/ppc: Remove "ext" parameter of ppcemb_tlb_check()
target/ppc: Remove single use function
target/ppc: PMU implement PERFM interrupts
target/ppc: Support directed privileged doorbell interrupt (SDOOR)
target/ppc: Fix msgclrp interrupt type
...
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>