No need for this helper to access the CPUState::accel field.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230624174121.11508-12-philmd@linaro.org>
We want all accelerators to share the same opaque pointer in
CPUState. Rename NVMM 'qemu_vcpu' as 'AccelCPUState'; directly
use the typedef, remove unnecessary casts.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230624174121.11508-11-philmd@linaro.org>
The hThread variable is only used by the HAX accelerator,
so move it to the accelerator-specific context.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230624174121.11508-9-philmd@linaro.org>
We want all accelerators to share the same opaque pointer in
CPUState. Start with the HAX context, renaming its forward-declared
structure 'hax_vcpu_state' as 'AccelCPUState'.
Document the CPUState field. Directly use the typedef.
Remove the amusing but now unnecessary casts in NVMM / WHPX.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230624174121.11508-8-philmd@linaro.org>
All accelerators will share a single opaque context
in CPUState. Start by renaming 'hax_vcpu' as 'accel'.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230624174121.11508-7-philmd@linaro.org>
When the vCPU thread has finished its processing, destroy
it and signal its destruction to the generic vCPU management
layer.
Add a sanity check for the vCPU accelerator context.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230624174121.11508-6-philmd@linaro.org>
hThread is only used on the error path in hax_kick_vcpu_thread().
Fixes: b0cb0a66d6 ("Plumb the HAXM-based hardware acceleration support")
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230624174121.11508-5-philmd@linaro.org>
On Windows hosts, cpu->hThread is assigned but never accessed:
remove it.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230624174121.11508-4-philmd@linaro.org>
Since MinGW commit 395dcfdea ("rename hyper-v headers and def
files to lower case") [*], WinHvPlatform.h and WinHvEmulation.h
were renamed to winhvplatform.h and winhvemulation.h respectively.
The mingw64-headers package included in the Fedora version we
use for CI does include this commit, so meson fails to detect
these present-but-renamed headers while cross-building (on
case-sensitive filesystems).
Use the renamed header in order to detect and successfully
cross-build with the WHPX accelerator.
Note, on Windows hosts, the libraries are still named as
WinHvPlatform.dll and WinHvEmulation.dll, so we don't bother
renaming the definitions used by load_whp_dispatch_fns() in
target/i386/whpx/whpx-all.c.
[*] https://sourceforge.net/p/mingw-w64/mingw-w64/ci/395dcfdea
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20230624142211.8888-3-philmd@linaro.org>
We incorporate %asi into tb->flags so that we may generate
inline code for the many ASIs for which it is easy to do so.
Setting %asi is common for e.g. memcpy and memset performing
block copy and clear, so it is worth noticing this case.
We must end the TB but do not need to return to the main loop.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230628071202.230991-9-richard.henderson@linaro.org>
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
After the register window unwind, this is for a plain indirect
branch with no further side effects.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230628071202.230991-8-richard.henderson@linaro.org>
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
This is for a plain indirect branch with no other side effects.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230628071202.230991-7-richard.henderson@linaro.org>
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
When resolving JUMP_PC, we know this is for a plain branch
with no other side effects.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230628071202.230991-6-richard.henderson@linaro.org>
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Create a new artificial "next pc" which also indicates
that nothing has changed within the cpu state which
requires returning to the main loop.
Pipe this new value through all pc/npc checks.
Do not produce this new value yet.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230628071202.230991-5-richard.henderson@linaro.org>
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Let the compiler decide about inlining.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230628071202.230991-4-richard.henderson@linaro.org>
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
During translation, npc == address, DYNAMIC_PC, or JUMP_PC.
It is only the encoding between here and sparc_restore_state_to_opc
that considers JUMP_PC to be a bit within a larger value.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230628071202.230991-3-richard.henderson@linaro.org>
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230628071202.230991-2-richard.henderson@linaro.org>
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
The microblaze architecture does not reorder instructions.
While there is an MBAR wait-for-data-access instruction,
this concerns synchronizing with DMA.
This should have been defined when enabling MTTCG.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar@zeroasic.com>
Fixes: d449561b13 ("configure: microblaze: Enable mttcg")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230621135633.1649-4-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
* kvm: reuse per-vcpu stats fd to avoid vcpu interruption
* Validate cluster and NUMA node boundary on ARM and RISC-V
* various small TCG features from newer processors
* Remove dubious 'event_notifier-posix.c' include
* fix git-submodule.sh in releases
# -----BEGIN PGP SIGNATURE-----
#
# iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmSZS0IUHHBib256aW5p
# QHJlZGhhdC5jb20ACgkQv/vSX3jHroN+tgf/axJdG9NXKCyXgc0vzjKVhSR4Y+tC
# EPxkg7Rq7uOMgbph9oTS/2Kzh9LnP6kLt2qnS4igRHGuEBd58yD6fFNDv0LJsK/l
# B/d0WGHMKV0KMYOX24rkyfohVu37GhVRsiVSIlIiQVTC9JtYer7WxdnyoDaPKvY8
# dpbKgDrd59vAlsHrpj7ZubVQPcL3lXrLryimpDohMH6Ba+4wZq+7dKPpal97QOP2
# 3i7isUBTQiMOcVjW6GEiNcDLSJqj5DSgylhdFnaBsq/ThpC2PxWoXcCbV28QELzf
# 5+J+RXQavmeWKZMR0q98iBzWbrsVtaSxAkHHiwbUMMqQvkfY6Dpo5dMHWw==
# =WHE2
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 26 Jun 2023 10:24:34 AM CEST
# gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg: issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [undefined]
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [undefined]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* tag 'for-upstream' of https://gitlab.com/bonzini/qemu:
git-submodule.sh: allow running in validate mode without previous update
target/i386: implement SYSCALL/SYSRET in 32-bit emulators
target/i386: implement RDPID in TCG
target/i386: sysret and sysexit are privileged
target/i386: AMD only supports SYSENTER/SYSEXIT in 32-bit mode
target/i386: Intel only supports SYSCALL/SYSRET in long mode
target/i386: TCG supports WBNOINVD
target/i386: TCG supports XSAVEERPTR
target/i386: do not accept RDSEED if CPUID bit absent
target/i386: TCG supports RDSEED
target/i386: TCG supports 3DNow! prefetch(w)
target/i386: fix INVD vmexit
kvm: reuse per-vcpu stats fd to avoid vcpu interruption
hw/riscv: Validate cluster and NUMA node boundary
hw/arm: Validate cluster and NUMA node boundary
numa: Validate cluster and NUMA node boundary if required
hw/remote/proxy: Remove dubious 'event_notifier-posix.c' include
build: further refine build.ninja rules
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
AMD supports both 32-bit and 64-bit SYSCALL/SYSRET, but TCG only
exposes them for 64-bit targets. For system emulation just reuse the
helper; for user-mode emulation the ABI is the same as "int $0x80".
The BSDs do not support any fast system call mechanism in 32-bit
mode, so add to bsd-user the same stub that FreeBSD has for 64-bit
compatibility mode.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
RDPID corresponds to a RDMSR(TSC_AUX); however, it is unprivileged
so for user-mode emulation we must provide the value that the kernel
places in the MSR. For Linux, it is a combination of the current CPU
and the current NUMA node, both of which can be retrieved with getcpu(2).
Also try sched_getcpu(), which might be there on the BSDs. If there is
no portable way to retrieve the current CPU id from userspace, return 0.
RDTSCP is reimplemented as RDTSC + RDPID ECX; the differences in terms
of serializability are not relevant to QEMU.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
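A rough user-space sketch of the lookup order described above (the
helper name and the (node << 12) | cpu packing are illustrative
assumptions for this sketch, not the actual QEMU code):
    #define _GNU_SOURCE
    #include <sched.h>

    /* Best-effort user-space TSC_AUX value: getcpu(2) where available,
     * then sched_getcpu(), else 0. The (node << 12) | cpu packing
     * mirrors what Linux places in the MSR. */
    static unsigned int user_tsc_aux(void)
    {
        unsigned int cpu = 0, node = 0;
        if (getcpu(&cpu, &node) == 0) {
            return (node << 12) | (cpu & 0xfff);
        }
        int c = sched_getcpu();   /* may also exist on the BSDs */
        return c >= 0 ? (unsigned int)c : 0;
    }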
WBNOINVD is the same as INVD or WBINVD as far as TCG is concerned,
since there is no cache in TCG and therefore no invalidation side effect
in WBNOINVD.
With respect to SVM emulation, processors that do not support WBNOINVD
will ignore the prefix and treat it as WBINVD, while those that support
it will generate exactly the same vmexit.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
XSAVEERPTR is actually a fix for an erratum; TCG does not have the issue.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
TCG implements RDSEED, and in fact uses qcrypto_random_bytes which is
secure enough to match hardware behavior. Expose it to guests.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The AMD prefetch(w) instructions have not been deprecated together with the rest
of 3DNow!, and in fact are even supported by newer Intel processors. Mark them
as supported by TCG, as it supports all of 3DNow!.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Due to a typo or perhaps a brain fart, the INVD vmexit was never generated.
Fix it (but note that fixing just the typo would break both INVD and WBINVD,
due to a case of two wrongs making a right).
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Doorbells in SMT need to coordinate msgsnd/msgclr and DPDES access from
multiple threads that affect the same state.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
A relatively simple case to begin with: CTRL is an SMT shared register
where reads and writes need to synchronise against state changes by
other threads in the core.
Atomic serialisation operations are used to achieve this.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
TCG SMT emulation needs to know whether it is running with SMT siblings,
to be able to iterate over siblings in a core, and to serialise
threads' access to per-core shared SPRs. Add infrastructure to do these
things.
For now the sibling iteration and serialisation are implemented in a
simple but inefficient way. SMT shared state and sibling access are not
too common, and SMT configurations are mainly useful to test system
code, so performance is not too critical.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
[ clg: fix build breakage with clang ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The top bits of the LEV field of the sc instruction are to be treated
as a reserved field rather than a reserved value, meaning LEV is
effectively the bottom bit. LEV=0xF should be treated as LEV=1 and be
a hypercall, for example.
This changes the instruction execution to just set lev from the low bit
of the field. Processors which don't support the LEV field will continue
to ignore it.
ISA v3.1 defines LEV to be 2 bits, in order to add the 'sc 2' ultracall
instruction. TCG does not support Ultravisor, so don't worry about
that bit.
Suggested-by: "Harsh Prateek Bora" <harshpb@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
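A sketch of the resulting decode, assuming the 7-bit LEV field sits at
bit 5 of the opcode (field position and variable names are illustrative):
    /* Only the low bit of LEV is meaningful; the higher bits are a
     * reserved field, so e.g. LEV=0xF behaves as LEV=1 (hypercall). */
    unsigned int lev = (opcode >> 5) & 1;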
Writing the CTRL register can set the bit in the RUN field, which gets
reflected into the TS field; the TS field is read-only and contains the
state of the RUN field for all threads in the core.
TCG does not implement SMT, so the correct implementation just requires
mirroring the RUN bit into the first bit of the TS field.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
System call interrupts in ISA v3.1 CPUs add a LEV indication in SRR1
that corresponds with the LEV field of the instruction that caused the
interrupt.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The hypervisor emulation assistance interrupt modifies HEIR to
contain the value of the instruction which caused the exception.
Only TCG raises HEAI interrupts so this can be made TCG-only.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
ISA v3.1 introduced prefix instructions. Among the changes, various
synchronous interrupts report whether they were caused by a prefix
instruction in (H)SRR1.
The case of instruction fetch that causes an HDSI due to access of a
process-scoped table faulting on the partition scoped translation is the
tricky one. As with ISIs and HISIs, this does not try to set the prefix
bit because there is no instruction image to be loaded. The HDSI needs
the originating access type to be passed through to the handler to
distinguish this from HDSIs that fault translating process scoped tables
originating from a load or store instruction (in that case the prefix
bit should be provided).
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[ clg: checkpatch issues ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Rather than always performing partition scope page table translation
with access type of 0 (MMU_DATA_LOAD), pass through the processor
access type which first initiated the translation sequence. Process-
scoped page table loads are then set to MMU_DATA_LOAD access type in
the xlate function.
This will allow more information to be passed to the exception
handler in the next patch.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
powerpc ifetch endianness depends on MSR[LE], so the fetched
instruction has to be byteswapped after cpu_ldl_code(). This corrects
the DSISR bits in alignment interrupts when running in little-endian mode.
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
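An illustrative, standalone sketch of the fixed fetch path (names are
assumptions for the sketch; the real code byteswaps the word returned
by cpu_ldl_code() when MSR[LE] is set):
    #include <stdint.h>
    #include <stdbool.h>

    /* cpu_ldl_code() returns the word in the target's default
     * (big-endian) order; when MSR[LE] is set the bytes must be
     * swapped before DSISR bits are derived from the instruction. */
    static uint32_t fetch_insn(uint32_t raw, bool msr_le)
    {
        return msr_le ? __builtin_bswap32(raw) : raw;
    }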
When the Timer Control and Timer Status registers are modified, avoid
calling the KVM backend if it is not available.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Make sure each CPU gets its state set up for gdb, not just the ones
created before PowerPCCPUClass has had its gdb state set up.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Brown bag time: store instead of load results in uninitialized temp.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1704
Reported-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620134659.817559-1-richard.henderson@linaro.org
Fixes: e6dd5e782b ("target/arm: Use tcg_gen_qemu_{ld, st}_i128 in gen_sve_{ld, st}r")
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
One cannot test for feature aa32_simd_r32 without first
testing if AArch32 mode is supported at all. This leads to
"qemu-system-aarch64: ARM CPUs must have both VFP-D32 and Neon or neither"
for Apple M1 CPUs.
We already have a check for ARMv8-A never setting vfp-d32 true,
so restructure the code so that AArch64 avoids the test entirely.
Reported-by: Mads Ynddal <mads@ynddal.dk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Mads Ynddal <m.ynddal@samsung.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Mads Ynddal <m.ynddal@samsung.com>
Message-id: 20230619140216.402530-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add an x-rme cpu property to enable FEAT_RME.
Add an x-l0gptsz property to set GPCCR_EL3.L0GPTSZ,
for testing various possible configurations.
We're not currently completely sure whether FEAT_RME will
be OK to enable purely as a CPU-level property, or if it will
need board co-operation, so we're making these experimental
x- properties, so that the people developing the system
level software for RME can try to start using this and let
us know how it goes. The command line syntax for enabling
this will change in future, without backwards-compatibility.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-21-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Place the check at the end of get_phys_addr_with_struct,
so that we check all physical results.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Handle GPC Fault types in arm_deliver_fault, reporting as
either a GPC exception at EL3, or falling through to insn
or data aborts at various exception levels.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The function takes the fields as filled in by
the Arm ARM pseudocode for TakeGPCException.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This fixes a bug in which we failed to initialize
the result attributes properly after the memset.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Instead of passing this to get_phys_addr_lpae, stash it
in the S1Translate structure.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Do not provide a fast-path for physical addresses,
as those will need to be validated for GPC.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
While Root and Realm may read and write data from other spaces,
neither may execute from other pa spaces.
This happens for Stage1 EL3, EL2, EL2&0, and Stage2 EL1&0.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
With Realm security state, bit 55 of a block or page descriptor during
the stage2 walk becomes the NS bit; during the stage1 walk the NS bit
(bit 5) is RES0. With Root security state, bit 11 of the block or page
descriptor during the stage1 walk becomes the NSE bit.
Rather than collecting an NS bit and applying it later, compute the
output pa space from the input pa space and unconditionally assign.
This means that we no longer need to adjust the output space earlier
for the NSTable bit.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Test in_space instead of in_secure so that we don't
switch out of Root space.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add input and output space members to S1Translate. Set and adjust
them in S1_ptw_translate, and the various points at which we drop
secure state. Initialize the space in get_phys_addr; for now leave
get_phys_addr_with_secure considering only secure vs non-secure spaces.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This was added in 7e98e21c09 as part of a reorg in which
one of the arguments had been legally NULL, and this caught
actual instances. Now that the reorg is complete, this
serves little purpose.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
With FEAT_RME, there are four physical address spaces.
For now, just define the symbols, and mention them in
the same spots as the other Phys indexes in ptw.c.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
It will be helpful to have ARMMMUIdx_Phys_* in the same
relative order as the ARMSecuritySpace enumerators. This requires
an adjustment to the nstable check. While there, check for being
in secure state rather than relying on clearing the low bit making
no change to non-secure state.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Introduce both the enumeration and functions to retrieve
the current state, and state outside of EL3.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This includes GPCCR, GPTBR, MFAR, the TLB flush insns PAALL, PAALLOS,
RPALOS, RPAOS, and the cache flush insns CIPAPA and CIGDPAPA.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
With RME, SEL2 must also be present to support secure state.
The NS bit is RES1 if SEL2 is not present.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Define the missing SCR and HCR bits, allow SCR_NSE and {SCR,HCR}_GPF
to be set, and invalidate TLBs when NSE changes.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add the missing field for ID_AA64PFR0, and the predicate.
Disable it if EL3 is forced off by the board or command-line.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230620124418.805717-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
From ISA v1.6.1 onwards, the bit position of ICR.IE changed.
ctx->icr_ie_offset contains the correct value for the ISA version used
by the vCPU. We also need to exit this tb here, as we might have enabled
interrupts.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-Id: <20230621142302.1648383-9-kbastian@mail.uni-paderborn.de>
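For illustration, the offset kept in DisasContext might be used as in
the sketch below; the bit positions 8 (pre-1.6.1) and 15 (1.6.1) are
assumptions of this sketch, not taken from the patch itself:
    #include <stdint.h>

    /* ENABLE: set ICR.IE at the ISA-dependent offset recorded in the
     * translation context, instead of hard-coding the bit position. */
    static uint32_t icr_enable_ie(uint32_t icr, int icr_ie_offset)
    {
        return icr | (1u << icr_ie_offset);  /* e.g. 8 or 15 */
    }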
The CPU can change the privilege level by writing the corresponding bits
in PSW. If this happens, all instructions after this 'mtcr' in the TB are
translated with the wrong privilege level, so we have to exit to
cpu_loop() and start translating again with the new privilege level.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-Id: <20230621142302.1648383-8-kbastian@mail.uni-paderborn.de>
so we can recognize exceptions after re-enabling interrupts.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reported-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-Id: <20230621142302.1648383-4-kbastian@mail.uni-paderborn.de>
This replaces all calls to tcg_gen_exit_tb() and moves them to
tricore_tb_stop().
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-Id: <20230621142302.1648383-3-kbastian@mail.uni-paderborn.de>
If A[r1] == A[11], we would overwrite the destination address of
the jump with the return address.
Reported-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-Id: <20230621142302.1648383-2-kbastian@mail.uni-paderborn.de>
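A standalone illustration of the ordering bug and its fix, using plain
C variables in place of the TCG temporaries (all names are illustrative):
    #include <stdint.h>

    static uint32_t pc, a[16];

    static void calli_fixed(int r1, uint32_t ret_addr)
    {
        uint32_t target = a[r1];   /* read the call target first ...     */
        a[11] = ret_addr;          /* ... then write the return address, */
        pc = target;               /* so r1 == 11 still jumps correctly  */
    }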
We always take the TRICORE_FEATURE_13 branch, as every CPU has
TRICORE_FEATURE_13, but CPUs with ISA > 1.3 have to take the else
branch. Fix this by inverting the condition: check for
TRICORE_FEATURE_131, which every CPU except TRICORE_FEATURE_13 CPUs
has.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1700
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-Id: <20230612113245.56667-5-kbastian@mail.uni-paderborn.de>
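The shape of the inverted check, shown as a hedged sketch (feature and
helper names follow the TriCore translator; the bodies are placeholders):
    if (has_feature(ctx, TRICORE_FEATURE_131)) {
        /* behaviour for ISA 1.3.1 and newer */
    } else {
        /* behaviour for plain 1.3 CPUs */
    }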
Some insns were not checking whether an even index was used to access a
64-bit register. In the worst case this could lead to a buffer overflow,
as reported in https://gitlab.com/qemu-project/qemu/-/issues/1698.
Reported-by: Siqi Chen <coc.cyqh@gmail.com>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-Id: <20230612113245.56667-4-kbastian@mail.uni-paderborn.de>
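A minimal sketch of the added guard, using a hypothetical helper name:
64-bit operands address an even/odd register pair, so an odd index must
be rejected before the register file is indexed.
    #include <stdbool.h>

    /* Hypothetical guard: only even indices are valid for 64-bit
     * (register-pair) operands; callers raise an illegal-opcode trap
     * when this returns false. */
    static bool valid_reg_pair(int r)
    {
        return (r & 1) == 0;
    }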
We don't want to save PSW.CDC to the CSA, but PSW.CDE must be saved.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1699
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-Id: <20230612113245.56667-3-kbastian@mail.uni-paderborn.de>
When translating "imask" instruction of Tricore architecture, QEMU did not check whether the register index was out of bounds, resulting in a global-buffer-overflow.
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1698
Reported-by: Siqi Chen <coc.cyqh@gmail.com>
Signed-off-by: Siqi Chen <coc.cyqh@gmail.com>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-Id: <20230612065633.149152-1-coc.cyqh@gmail.com>
Message-Id: <20230612113245.56667-2-kbastian@mail.uni-paderborn.de>
This variant saves the 'IE' bit to a 'd' register. The 'IE' bitfield
changed in ISA version 1.6.1, so we add icr_ie_offset to DisasContext,
as with the other DISABLE insn.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-Id: <20230614100039.1337971-9-kbastian@mail.uni-paderborn.de>
We also introduce the tc37x CPU, which implements that ISA version.
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-Id: <20230614100039.1337971-2-kbastian@mail.uni-paderborn.de>
We use the user_ss[] array to hold the user emulation sources,
and the softmmu_ss[] array to hold the system emulation ones.
Hold the latter in the 'system_ss[]' array for parity with user
emulation.
Mechanical change doing:
$ sed -i -e s/softmmu_ss/system_ss/g $(git grep -l softmmu_ss)
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230613133347.82210-10-philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Since we *might* have user emulation with softmmu,
use the clearer 'CONFIG_SYSTEM_ONLY' key to check
for system emulation.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230613133347.82210-9-philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Since we *might* have user emulation with softmmu,
replace the system emulation check with a !user emulation one.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20230613133347.82210-5-philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Since we *might* have user emulation with softmmu,
replace the system emulation check with a !user emulation one.
Invert some if() ladders for clarity.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230613133347.82210-4-philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We don't build any user emulation target for Tricore,
only the system emulation. No need to check for it as
it is always defined.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-Id: <20230613133347.82210-3-philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Since cpu_mmu_index() is well-defined for user-only,
we can remove the surrounding #ifdef'ry entirely.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230613133347.82210-2-philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Convert the instructions in the load/store memory tags instruction
group to decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-21-peter.maydell@linaro.org
Convert the ASIMD load/store single structure insns to decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230602155223.2040685-20-peter.maydell@linaro.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Convert the instructions in the ASIMD load/store multiple structures
instruction classes to decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-19-peter.maydell@linaro.org
Convert the instructions in the LDAPR/STLR (unscaled immediate)
group to decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-18-peter.maydell@linaro.org
Convert the instructions in the load/store register (pointer
authentication) group to decodetree: LDRAA, LDRAB.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-17-peter.maydell@linaro.org
Convert the insns in the atomic memory operations group to
decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-16-peter.maydell@linaro.org
Convert the LDR and STR instructions which take a register
plus register offset to decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-15-peter.maydell@linaro.org
Convert the LDR and STR instructions which use a 12-bit immediate
offset to decodetree. We can reuse the existing LDR and STR
trans functions for these.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-14-peter.maydell@linaro.org
Convert the load and store instructions which use a 9-bit
immediate offset to decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-13-peter.maydell@linaro.org
Convert the "Load register (literal)" instruction class to
decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-11-peter.maydell@linaro.org
Convert the instructions in the load/store exclusive (STXR,
STLXR, LDXR, LDAXR) and load/store ordered (STLR, STLLR,
LDAR, LDLAR) classes to decodetree.
Note that for STLR, STLLR, LDAR, LDLAR this fixes an under-decoding
in the legacy decoder where we were not checking that the RES1 bits
in the Rs and Rt2 fields were set.
The new function ldst_iss_sf() is equivalent to the existing
disas_ldst_compute_iss_sf(), but it takes the pre-decoded 'ext' field
rather than taking an undecoded two-bit opc field and extracting
'ext' from it. Once all the loads and stores have been converted
to decodetree disas_ldst_compute_iss_sf() will be unused and
can be deleted.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-9-peter.maydell@linaro.org
Convert the exception generation instructions SVC, HVC, SMC, BRK and
HLT to decodetree.
The old decoder decoded the halting-debug insns DCPS1, DCPS2 and
DCPS3 just in order to then make them UNDEF; as with DRPS, we don't
bother to decode them, but document the patterns in a64.decode.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-8-peter.maydell@linaro.org
Convert MSR (reg), MRS, SYS, SYSL to decodetree. For QEMU these are
all essentially the same instruction (system register access).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-7-peter.maydell@linaro.org
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Convert the MSR (immediate) insn to decodetree. Our implementation
has basically no commonality between the different destinations,
so we decode the destination register in a64.decode.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-6-peter.maydell@linaro.org
Convert the CFINV, XAFLAG and AXFLAG insns to decodetree.
The old decoder handles these in handle_msr_i(), but
the architecture defines them as separate instructions
from MSR (immediate).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-5-peter.maydell@linaro.org
Convert the insns in the "Barriers" instruction class to
decodetree: CLREX, DSB, DMB, ISB and SB.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-4-peter.maydell@linaro.org
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Convert the various instructions in the hint instruction space
to decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-3-peter.maydell@linaro.org
In the recent refactoring we missed a few places which should be
calling finalize_memop_asimd() for ASIMD loads and stores but
instead are just calling finalize_memop(); fix these.
For the disas_ldst_single_struct() and disas_ldst_multiple_struct()
cases, this is not a behaviour change because there the size
is never MO_128 and the two finalize functions do the same thing.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
In disas_ldst_reg_imm9() we missed one place where a call to
a gen_mte_check* function should now be passed the memop we
have created rather than just being passed the size. Fix this.
Fixes: 0a9091424d ("target/arm: Pass memop to gen_mte_check1*")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
The LDG instruction loads the tag from a memory address (identified
by [Xn + offset]), and then merges that tag into the destination
register Xt. We implemented this correctly for the case when
allocation tags are enabled, but didn't get it right when ATA=0:
instead of merging the tag bits into Xt, we merged them into the
memory address [Xn + offset] and then set Xt to that.
Merge the tag bits into the old Xt value, as they should be.
Cc: qemu-stable@nongnu.org
Fixes: c15294c1e3 ("target/arm: Implement LDG, STG, ST2G instructions")
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
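A sketch of the corrected ATA=0 result, assuming the logical tag
occupies bits [59:56] of the pointer (a standalone illustration, not
the translator code): the zero tag is merged into the old Xt value.
    #include <stdint.h>

    /* With allocation tags disabled the loaded tag is zero, so the
     * result is the old Xt with only its tag field cleared. */
    static uint64_t ldg_result_ata0(uint64_t xt_old)
    {
        return xt_old & ~(0xfull << 56);
    }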
The atomic memory operations are supposed to return the old memory
data value in the destination register. This value is not
sign-extended, even if the operation is the signed minimum or
maximum. (In the pseudocode for the instructions the returned data
value is passed to ZeroExtend() to create the value in the register.)
We got this wrong because we were doing a 32-to-64 zero extend on the
result for 8 and 16 bit data values, rather than the correct amount
of zero extension.
Fix the bug by using ext8u and ext16u for the MO_8 and MO_16 data
sizes rather than ext32u.
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-2-peter.maydell@linaro.org
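A worked illustration of the extension widths involved (a standalone
sketch with assumed names, not the TCG code itself): the returned
memory data must be zero-extended from the access size, not from 32 bits.
    #include <stdint.h>

    /* Zero-extend the old memory value according to the access size. */
    static uint64_t atomic_result(uint64_t old, int size_bytes)
    {
        switch (size_bytes) {
        case 1: return (uint8_t)old;   /* was wrongly widened from 32 bits */
        case 2: return (uint16_t)old;  /* was wrongly widened from 32 bits */
        case 4: return (uint32_t)old;
        default: return old;
        }
    }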
Merge tag 'pull-loongarch-20230616' of https://gitlab.com/gaosong/qemu into staging
pull-loongarch-20230616
# -----BEGIN PGP SIGNATURE-----
#
# iLMEAAEIAB0WIQS4/x2g0v3LLaCcbCxAov/yOSY+3wUCZIwysgAKCRBAov/yOSY+
# 39FYA/465KtY2jDt4xG6AdwZDHckfxZQWlrfhyZvtapOkUG4AprOBV2nSS/ukyD4
# V8bg2/6cLS0GRKfDsqA3DcxSASWCAggIU4fTSj+DlYOZhNUIq14qzwqciZnO5CIH
# QDczSqu2LKRdP9j6MCtzIaZq/8pPDcOlgm7Dyct/kDo/64E2sg==
# =rD4j
# -----END PGP SIGNATURE-----
# gpg: Signature made Fri 16 Jun 2023 12:00:18 PM CEST
# gpg: using RSA key B8FF1DA0D2FDCB2DA09C6C2C40A2FFF239263EDF
# gpg: Good signature from "Song Gao <m17746591750@163.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: B8FF 1DA0 D2FD CB2D A09C 6C2C 40A2 FFF2 3926 3EDF
* tag 'pull-loongarch-20230616' of https://gitlab.com/gaosong/qemu:
target/loongarch: Fix CSR.DMW0-3.VSEG check
hw/loongarch: Supplement cpu topology arguments
hw/loongarch: Add numa support
hw/intc: Set physical cpuid route for LoongArch ipi device
hw/loongarch/virt: Add cpu arch_id support
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The previous code checked whether the highest 16 bits of the virtual
address equal those of CSR.DMW0-3. This is incorrect according to the
spec, and is corrected to compare only the highest four bits instead.
Signed-off-by: Jiajie Chen <c@jia.je>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20230614065556.2397513-1-c@jia.je>
Signed-off-by: Song Gao <gaosong@loongson.cn>
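A sketch of the corrected window match, assuming the VSEG field sits in
the top four bits of the DMW CSR (names and layout are assumptions of
this illustration):
    #include <stdint.h>
    #include <stdbool.h>

    /* Direct-mapped window hit: compare only VA[63:60] with VSEG. */
    static bool dmw_match(uint64_t va, uint64_t dmw)
    {
        return (va >> 60) == (dmw >> 60);
    }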
The LoongArch ipi device uses the physical cpuid rather than the
logical cpuid to route to different vcpus; the physical cpuid is the
same as the cpuid in the ACPI DSDT and SRAT tables.
Reviewed-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20230613120552.2471420-3-zhaotianrui@loongson.cn>
Cortex A7 CPUs with an FPU implementing VFPv4 without NEON support
have 16 64-bit FPU registers and not 32 registers. Let users set the
number of VFP registers with a CPU property.
The primary use case of this property is for the Cortex A7 of the
Aspeed AST2600 SoC.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Merge tag 'pull-riscv-to-apply-20230614' of https://github.com/alistair23/qemu into staging
Second RISC-V PR for 8.1
* Skip Vector set tail when vta is zero
* Move zc* out of the experimental properties
* Mask the implicitly enabled extensions in isa_string based on priv version
* Rework CPU extension validation and validate MISA changes
* Fixup PMP TLB cacheing errors
* Writing to pmpaddr and MML/MMWP correctly triggers TLB flushes
* Fixup PMP bypass checks
* Deny access if access is partially inside a PMP entry
* Correct OpenTitanState parent type/size
* Fix QEMU crash when NUMA nodes exceed available CPUs
* Fix pointer mask transformation for vector address
* Updates and improvements for Smstateen
* Support disas for Zcm* extensions
* Support disas for Z*inx extensions
* Remove unused decomp_rv32/64 value for vector instructions
* Enable PC-relative translation
* Assume M-mode FW in pflash0 only when "-bios none"
* Support using pflash via -blockdev option
* Add vector registers to log
* Clean up reference of Vector MTYPE
* Remove the check for extra Vector tail elements
* Smepmp: Return error when access permission not allowed in PMP
* Fixes for smsiaddrcfg and smsiaddrcfgh in AIA
# -----BEGIN PGP SIGNATURE-----
#
# iQIzBAABCAAdFiEEaukCtqfKh31tZZKWr3yVEwxTgBMFAmSJFRoACgkQr3yVEwxT
# gBMUkg/8Cuhqpx+zy7MeouVkyhEjUuhtCWyr0WVZBJzDkVEOrlY6TyR0hb5/o1Js
# LZf6ZMF6JQDN78bmUct8yFBZBGafey5tyonDCsnD7CNQuLPf2NSjTHhu9n5hKFqF
# F8Mpn9iFu6k1pr0iF7FbCccVWuDb3P4h2PaM0iFhmf4uz42BCMYdgJThhvv38xlt
# jr6A3dcjTpp8yB+iRCuhL2IU2XVee0XBiDUECqRXd0gmtOtqJNST8L+l8YkLy1VO
# WUMe8RCO6NMP7BLJ383WwCDeiFTo0mJebZQ0eR/G1xEhy7c8BBMh/CgQmq2F3wDZ
# Q0biaeozADgAaCC7aOAHI+1sAoMhOm1v2WhIVmh+XXUqT9856cKwc7DUPBmzb9Sj
# N5Zh+t9WCnZG7qpfxvkDF0Y/aRODMHZ1BW5L/ky9yBtyuRwXOJ6VycZTFyRkSwnN
# Gd/s9IClDOP1IP5s4TSMGGdelk4lH97x7fZE/2hxn59lp761JtMxbaEceBtqaBh8
# zNMTNN/KHs8LeiIBI2ZZ+nQav452Y6XYBivQ7OdsI8xkjnjG9gfgXXjvX1TIh0ow
# Hy5ZxtAtjXty49Gmjkx5VcBx4auJcnRDlLTzoZjTxq1te+gEWpw6O1EsEKasVLZe
# uN6PxTOxS3nHvRvPgQc1xNUdhDRqBaYsju6b9YmMxz1uefAjGM0=
# =fOTc
# -----END PGP SIGNATURE-----
# gpg: Signature made Wed 14 Jun 2023 03:17:14 AM CEST
# gpg: using RSA key 6AE902B6A7CA877D6D659296AF7C95130C538013
# gpg: Good signature from "Alistair Francis <alistair@alistair23.me>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 6AE9 02B6 A7CA 877D 6D65 9296 AF7C 9513 0C53 8013
* tag 'pull-riscv-to-apply-20230614' of https://github.com/alistair23/qemu: (60 commits)
hw/intc: If mmsiaddrcfgh.L == 1, smsiaddrcfg and smsiaddrcfgh are read-only.
target/riscv: Smepmp: Return error when access permission not allowed in PMP
target/riscv/vector_helper.c: Remove the check for extra tail elements
target/riscv/vector_helper.c: clean up reference of MTYPE
target/riscv: Fix initialized value for cur_pmmask
util/log: Add vector registers to log
docs/system: riscv: Add pflash usage details
riscv/virt: Support using pflash via -blockdev option
hw/riscv: virt: Assume M-mode FW in pflash0 only when "-bios none"
target/riscv: Remove pc_succ_insn from DisasContext
target/riscv: Enable PC-relative translation
target/riscv: Use true diff for gen_pc_plus_diff
target/riscv: Change gen_set_pc_imm to gen_update_pc
target/riscv: Change gen_goto_tb to work on displacements
target/riscv: Introduce cur_insn_len into DisasContext
target/riscv: Fix target address to update badaddr
disas/riscv.c: Remove redundant parentheses
disas/riscv.c: Fix lines with over 80 characters
disas/riscv.c: Remove unused decomp_rv32/64 value for vector instructions
disas/riscv.c: Support disas for Z*inx extensions
...
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Since commit 139c1837db ("meson: rename included C source files
to .c.inc"), QEMU standard procedure for included C files is to
use *.c.inc.
Besides, since commit 6a0057aa22 ("docs/devel: make a statement
about includes") this is documented as the Coding Style:
If you do use template header files they should be named with
the ``.c.inc`` or ``.h.inc`` suffix to make it clear they are
being included for expansion.
Therefore move the included templates in the tcg/ directory and
rename as '.h.inc'.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230608133108.72655-5-philmd@linaro.org>
Move the #ifdef'ry inside do_cpu_init() instead of
declaring an empty stub for user emulation.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230602224628.59546-3-philmd@linaro.org>
Since commit 604664726f ("target/i386: Restrict cpu_exec_interrupt()
handler to sysemu"), do_cpu_sipi() isn't called anymore on user
emulation. Remove the now pointless stub.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230602224628.59546-2-philmd@linaro.org>
int_helper.c only contains system emulation code:
remove the #ifdef'ry and move the file to the meson
softmmu source set.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230602223016.58647-1-philmd@linaro.org>
On an address match, skip checking the default permissions and return an
error based on the access defined in the PMP configuration.
v3 Changes:
o Removed explicit return of boolean value from comparison
  of priv/allowed_priv
v2 Changes:
o Removed goto to return in place when address matches
o Call pmp_hart_has_privs_default at the end of the loop
Fixes: 90b1fafce0 ("target/riscv: Smepmp: Skip applying default rules when address matches")
Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Message-Id: <20230605164548.715336-1-hchauhan@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Commit 752614cab8 ("target/riscv: rvv: Add tail agnostic for vector
load / store instructions") added an extra check for LMUL fragmentation,
intended for setting the "rest tail elements" in the last register for a
segment load insn.
Actually, the max_elements derived in vext_ld*() won't be a fraction of
the vector register size, since the lmul encoded in desc is emul, which
has already been adjusted to 1 for the LMUL fragmentation case by
vext_get_emul() in trans_rvv.c.inc, for ld_stride(), ld_us(), ld_index()
and ldff().
Besides, vext_get_emul() has also taken EEW/SEW into consideration, so
there is no need to call vext_get_total_elems(), which would use the emul
to derive another emul; that second emul would be incorrect when esz
differs from sew.
Thus this patch removes the check for extra tail elements.
Fixes: 752614cab8 ("target/riscv: rvv: Add tail agnostic for vector load / store instructions")
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Message-Id: <20230607091646.4049428-1-xiao.w.wang@intel.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
There's no code using MTYPE, which was a concept used in the older
vector implementation.
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-Id: <20230608053517.4102648-1-xiao.w.wang@intel.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
We initialize cur_pmmask as -1 (UINT32_MAX/UINT64_MAX) and regard it
as meaning pointer masking is disabled in the current implementation.
However, the addresses for vector load/store will be adjusted to zero
in this case, and -1 (UINT32_MAX/UINT64_MAX) is a valid value for
pmmask when pointer masking is enabled.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-Id: <20230610094651.43786-1-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
pc_succ_insn is no longer useful after the introduction of cur_insn_len,
now that all pc-related values use a diff value instead of an absolute
value.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230526072124.298466-8-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Add a base pc_save for PC-relative translation (CF_PCREL).
Disable directly syncing pc from the tb via riscv_cpu_synchronize_from_tb.
Use gen_pc_plus_diff to get the pc-relative address.
Enable CF_PCREL in System mode.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230526072124.298466-7-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reduce reliance on absolute values by using true pc difference for
gen_pc_plus_diff() to prepare for PC-relative translation.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230526072124.298466-6-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reduce reliance on absolute values (by passing the pc difference) to
prepare for PC-relative translation.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230526072124.298466-5-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reduce reliance on absolute value to prepare for PC-relative translation.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230526072124.298466-4-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Use cur_insn_len to store the length of the current instruction to
prepare for PC-relative translation.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230526072124.298466-3-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Compute the target address before storing it into badaddr
when a misaligned exception is triggered.
Use a target_pc temp to store the target address, to avoid
the confusing sequence of updating the target address into
cpu_pc before the misalignment check, then updating it into badaddr
and restoring cpu_pc to the current pc if the exception is triggered.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230526072124.298466-2-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Pass RISCVCPUConfig as disassemble_info.target_info to support disas
of conflict instructions related to specific extensions.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230523093539.203909-4-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Split RISCVCPUConfig declarations to prepare for passing it to disas.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-Id: <20230523093539.203909-3-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Add knobs to allow users to enable smstateen and also export it via the
ISA extension string.
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230518175058.2772506-4-mchitale@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
When misa.F is 0, the tb->flags.FS field is unused and can be used to save
the current state of the smstateen0.FCSR check, which is needed by the
floating point translation routines.
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Message-Id: <20230518175058.2772506-3-mchitale@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Implement the s/h/mstateen.fcsr bit as defined in the smstateen spec
and check for it when accessing the fcsr register and its fields.
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230518175058.2772506-2-mchitale@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
write_mstatus() can only change the current xl when in debug mode,
and we need to update cur_pmmask/base in this case.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-Id: <20230524015933.17349-3-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Access will fail if the access is only partially inside the PMP entry.
However, only setting ret = false doesn't really mean a PMP violation,
since pmp_hart_has_privs_default() may return true at the end of
pmp_hart_has_privs().
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517091519.34439-13-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Use pmp_update_rule_addr() and pmp_update_rule_nums() separately so that we
update the rule nums only once for each pmpcfg_csr_write. Then remove
pmp_update_rule() since it has become unused.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517091519.34439-12-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The TLB needn't be flushed when pmpcfg/pmpaddr don't change.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-Id: <20230517091519.34439-11-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
TLB should be flushed not only for pmpcfg csr changes, but also for
pmpaddr csr changes.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-Id: <20230517091519.34439-10-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Currently only the rule addr at the same index as the pmpaddr is updated
when a pmpaddr CSR is modified. However, the rule addr of the next PMP entry
may also be affected if its A field is PMP_AMATCH_TOR, so we should
also update it in this case.
A write to a pmpaddr CSR will not affect the rule nums, so we needn't
call pmp_update_rule_nums() in pmpaddr_csr_write().
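A sketch of the resulting write path (simplified; locking and TLB flushing
are omitted, and field/helper names follow the QEMU PMP code loosely):
    void pmpaddr_csr_write(CPURISCVState *env, uint32_t addr_index,
                           target_ulong val)
    {
        env->pmp_state.pmp[addr_index].addr_reg = val;
        pmp_update_rule_addr(env, addr_index);
        /* a TOR entry at addr_index + 1 uses this address as its base */
        if (addr_index + 1 < MAX_RISCV_PMPS) {
            uint8_t cfg = env->pmp_state.pmp[addr_index + 1].cfg_reg;
            if (pmp_get_a_field(cfg) == PMP_AMATCH_TOR) {
                pmp_update_rule_addr(env, addr_index + 1);
            }
        }
        /* no pmp_update_rule_nums(): the number of rules is unchanged */
    }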
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517091519.34439-9-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The MMWP and MML bits may affect the allowed privs of PMP entries and the
default privs, both of which may change the allowed privs of existing
TLB entries. So we need to flush the TLB when they are changed.
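Conceptually the change amounts to something like the sketch below in the
mseccfg write path (illustrative; constant names follow the existing PMP code):
    /* flush cached translations if MML or MMWP changed, since previously
     * cached TLB permissions may no longer be valid */
    if ((val ^ env->mseccfg) & (MSECCFG_MML | MSECCFG_MMWP)) {
        tlb_flush(env_cpu(env));
    }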
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517091519.34439-8-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The addr and size parameters in pmp_hart_has_privs_default() are unused.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517091519.34439-7-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The RLB/MML/MMWP bits in the mseccfg CSR are introduced by the Smepmp extension,
so they can only be writable and set to 1 when cfg.epmp is true.
Then we also needn't check epmp in pmp_hart_has_privs_default().
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517091519.34439-6-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
We no longer need the pmp_index of the matched PMP entry.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517091519.34439-5-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Return the result directly as a short cut, since we needn't do the
following checks on the PMP entries if there are no PMP rules.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517091519.34439-4-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
pmp_get_tlb_size can be separated from get_physical_address_pmp and is only
needed when ret == TRANSLATE_SUCCESS.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517091519.34439-3-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
PMP entries before (and including) the matched PMP entry may cover only part
of the TLB page, and this may split the page into regions with different
permissions. For example, with PMP0 (0x80000008~0x8000000F, R) and PMP1
(0x80000000~0x80000FFF, RWX), a write access to 0x80000000 will match PMP1.
However, we cannot cache the translation result in the TLB, since this would
make a write access to 0x80000008 bypass the check of PMP0. So we should check
all of them instead of only the matched PMP entry in pmp_get_tlb_size() and set
the tlb_size to 1 in this case.
Set tlb_size to TARGET_PAGE_SIZE if PMP is not supported or there are no PMP rules.
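A simplified sketch of the resulting pmp_get_tlb_size() logic (not the literal
code; the entry start/end field names are illustrative):
    target_ulong pmp_get_tlb_size(CPURISCVState *env, target_ulong addr)
    {
        target_ulong tlb_sa = addr & ~(TARGET_PAGE_SIZE - 1);
        target_ulong tlb_ea = tlb_sa + TARGET_PAGE_SIZE - 1;
        int i;

        if (!riscv_cpu_cfg(env)->pmp || !pmp_get_num_rules(env)) {
            return TARGET_PAGE_SIZE;
        }
        for (i = 0; i < MAX_RISCV_PMPS; i++) {
            target_ulong sa = env->pmp_state.addr[i].sa;
            target_ulong ea = env->pmp_state.addr[i].ea;

            if (pmp_get_a_field(env->pmp_state.pmp[i].cfg_reg) == PMP_AMATCH_OFF) {
                continue;
            }
            /* an entry that fully covers or fully misses the page is fine */
            if ((sa <= tlb_sa && ea >= tlb_ea) || ea < tlb_sa || sa > tlb_ea) {
                continue;
            }
            return 1;   /* partial overlap: the page cannot be cached whole */
        }
        return TARGET_PAGE_SIZE;
    }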
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517091519.34439-2-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
write_misa() must use as much common logic as possible. We want to open
code just the bits that are exclusive to the CSR write operation and TCG
internals.
Our validation is done with riscv_cpu_validate_set_extensions(), but we
need a small tweak first. When enabling RVG we're doing:
env->misa_ext |= RVI | RVM | RVA | RVF | RVD;
env->misa_ext_mask = env->misa_ext;
This works fine at realize() time, but it can potentially overwrite
env->misa_ext_mask if we reuse the function for write_misa().
Instead of doing misa_ext_mask = misa_ext, sum up the RVG extensions in
misa_ext_mask as well. This won't change realize() time behavior
(misa_ext_mask will be == misa_ext) and will ensure that write_misa()
won't change misa_ext_mask by accident.
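In other words, the RVG expansion becomes something like this (a sketch; the
surrounding condition is simplified):
    if (riscv_has_ext(env, RVG)) {
        env->misa_ext |= RVI | RVM | RVA | RVF | RVD;
        env->misa_ext_mask |= RVI | RVM | RVA | RVF | RVD; /* accumulate, don't overwrite */
    }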
After that, rewrite write_misa() to work as follows:
- mask the write using misa_ext_mask to avoid enabling unsupported
extensions;
- suppress RVC if the next insn isn't aligned;
- disable RVG if any of RVG dependencies are being disabled by the user;
- assign env->misa_ext and run riscv_cpu_validate_set_extensions(). On
error, rollback env->misa_ext to its original value, logging a
GUEST_ERROR to inform the user about the failed write;
- handle RVF and MSTATUS_FS and continue as usual.
Let's keep write_misa() as experimental for now until this logic gains
enough mileage.
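A minimal sketch of that flow, with a hypothetical validation wrapper standing
in for the riscv_cpu_validate_set_extensions() call (not the literal QEMU code):
    static RISCVException write_misa(CPURISCVState *env, int csrno,
                                     target_ulong val)
    {
        target_ulong orig_misa_ext = env->misa_ext;

        val &= env->misa_ext_mask;              /* drop unsupported extensions */
        if ((val & RVC) && (env->pc & 3) != 0) {
            val &= ~RVC;                        /* next insn isn't aligned */
        }
        if ((val & RVG) &&
            (val & (RVI | RVM | RVA | RVF | RVD)) !=
                   (RVI | RVM | RVA | RVF | RVD)) {
            val &= ~RVG;                        /* an RVG dependency was disabled */
        }

        env->misa_ext = val;
        if (!misa_ext_is_valid(env)) {          /* hypothetical wrapper */
            env->misa_ext = orig_misa_ext;      /* roll back on failure */
            qemu_log_mask(LOG_GUEST_ERROR,
                          "ignoring write to misa: invalid combination\n");
            return RISCV_EXCP_NONE;
        }
        if (!(env->misa_ext & RVF)) {
            env->mstatus &= ~MSTATUS_FS;        /* F gone: clear MSTATUS_FS */
        }
        return RISCV_EXCP_NONE;
    }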
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-Id: <20230517135714.211809-12-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
We have 4 config settings being done in riscv_cpu_init(): ext_ifencei,
ext_icsr, mmu and pmp. This is also the constructor of the "riscv-cpu"
device, which happens to be the parent device of every RISC-V cpu.
The result is that these 4 configs are being set every time, and every
other CPU always has to account for them. CPUs such as sifive_e need to
disable settings simply because the parent class happens to enable them.
Moving all configuration from the parent class to each CPU will
centralize the config of each CPU in its own init(), which is clearer
than having to account for whatever happens to be set in the parent
device. These settings are also being set in register_cpu_props() when
no 'misa_ext' is set, so those CPUs don't need changes. Named
CPUs will receive all cfgs that the parent was setting into their
init().
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517135714.211809-11-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
There is no need to init timers if we're not even sure that our
extensions are valid. Execute riscv_cpu_validate_set_extensions() before
riscv_timer_init().
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517135714.211809-10-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Let's remove more code that is open coded in riscv_cpu_realize() and put
it into a helper. Let's also add an error message instead of just
asserting out if env->misa_mxl_max != env->misa_mxl.
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517135714.211809-9-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
We're doing env->priv_spec validation and assignment at the start of
riscv_cpu_realize(), which is fine, but then we're doing a force disable
on extensions that aren't compatible with the priv version.
This second step is being done too early. The disabled extensions might be
re-enabled again in riscv_cpu_validate_set_extensions() by accident. A
better place to put this code is at the end of
riscv_cpu_validate_set_extensions() after all the validations are
completed.
Add a new helper, riscv_cpu_disable_priv_spec_isa_exts(), to disable the
extensions after the validation is done. While we're at it, create a
riscv_cpu_validate_priv_spec() helper to host all env->priv_spec related
validation, to unclog riscv_cpu_realize() a bit.
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517135714.211809-8-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Even though Zca/Zcf/Zcd can be included by C/F/D, their priv
version is higher than the priv version of C/F/D. So if we check
for them instead of checking for C/F/D everywhere, it will trigger a new
problem when we try to disable the extensions based on the configured
priv version.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517135714.211809-7-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Using implicitly enabled extensions such as Zca/Zcf/Zcd instead of their
super extensions can simplify the extension-related checks. However, they
may have a higher priv version than their super extensions. So we should mask
them out of the isa_string based on the priv version, to make them invisible
to the user if the specified priv version is lower than their minimal priv
version.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517135714.211809-6-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
All these generic CPUs are using the latest priv available, at this
moment PRIV_VERSION_1_12_0:
- riscv_any_cpu_init()
- rv32_base_cpu_init()
- rv64_base_cpu_init()
- rv128_base_cpu_init()
Create a new PRIV_VERSION_LATEST enum and use it in those cases. It'll
make it easier to update everything at once when a new priv version is
available.
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517135714.211809-5-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The setter is doing nothing special. Just set env->priv_ver directly.
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517135714.211809-4-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
This setter is doing nothing else but setting env->vext_ver. Assign the
value directly.
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517135714.211809-3-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The RVV verification will error out if it fails, and it's being done at the
end of riscv_cpu_validate_set_extensions(), after we've already set some
extensions that are dependent on RVV. Let's put it in its own function
and do it earlier.
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230517135714.211809-2-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Zc* extensions (version 1.0) are ratified.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-Id: <20230510030040.20528-1-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The function is a no-op if 'vta' is zero, but we're still doing a lot of
work in it regardless: vext_set_elems_1s() will do nothing every single
time (since vta is zero) and we just waste time.
Skip it altogether in this case. Aside from the code simplification
there's a noticeable emulation performance gain by doing it. For a
regular C binary that does a vectors operation like this:
=======
#include <stdlib.h>

#define SZ 10000000

int main(void)
{
    /* data is uninitialized on purpose; only the vector add matters for timing */
    int *a = malloc(SZ * sizeof(int));
    int *b = malloc(SZ * sizeof(int));
    int *c = malloc(SZ * sizeof(int));

    for (int i = 0; i < SZ; i++) {
        c[i] = a[i] + b[i];
    }
    return c[SZ - 1];
}
=======
Emulating it with qemu-riscv64 and RVV takes ~0.3 sec:
$ time ~/work/qemu/build/qemu-riscv64 \
-cpu rv64,debug=false,vext_spec=v1.0,v=true,vlen=128 ./foo.out
real 0m0.303s
user 0m0.281s
sys 0m0.023s
With this skip we take ~0.275 sec:
$ time ~/work/qemu/build/qemu-riscv64 \
-cpu rv64,debug=false,vext_spec=v1.0,v=true,vlen=128 ./foo.out
real 0m0.274s
user 0m0.252s
sys 0m0.019s
This performance gain adds up fast when executing heavy benchmarks like
SPEC.
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Palmer Dabbelt <palmer@rivosinc.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Message-Id: <20230427205708.246679-2-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Merge tag 'pull-ppc-20230610' of https://gitlab.com/danielhb/qemu into staging
ppc patch queue for 2023-06-10:
This queue includes several assorted fixes for target/ppc emulation and
XIVE2. It also includes an openpic fix, an avocado fix for ppc64
binaries without slirp and a Kconfig change for MAC_NEWWORLD.
* tag 'pull-ppc-20230610' of https://gitlab.com/danielhb/qemu: (29 commits)
hw/ppc/Kconfig: MAC_NEWWORLD should always select USB_OHCI_PCI
target/ppc: Implement gathering irq statistics
tests/avocado/tuxrun_baselines: Fix ppc64 tests for binaries without slirp
hw/ppc/openpic: Do not open-code ROUND_UP() macro
target/ppc: Decrementer fix BookE semantics
target/ppc: Fix decrementer time underflow and infinite timer loop
target/ppc: Rework store conditional to avoid branch
target/ppc: Remove larx/stcx. memory barrier semantics
target/ppc: Ensure stcx size matches larx
target/ppc: Fix lqarx to set cpu_reserve
target/ppc: Eliminate goto in mmubooke_check_tlb()
target/ppc: Change ppcemb_tlb_check() to return bool
target/ppc: Simplify ppcemb_tlb_search()
target/ppc: Remove some unneded line breaks
target/ppc: Move ppcemb_tlb_search() to mmu_common.c
target/ppc: Remove "ext" parameter of ppcemb_tlb_check()
target/ppc: Remove single use function
target/ppc: PMU implement PERFM interrupts
target/ppc: Support directed privileged doorbell interrupt (SDOOR)
target/ppc: Fix msgclrp interrupt type
...
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Count exceptions, which can be queried with the 'info irq' monitor command.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230606220200.7EBCC74635C@zero.eik.bme.hu>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Rework store conditional to avoid a branch in the success case.
Change some of the variable names and layout while here so
gen_conditional_store more closely matches gen_stqcx_.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20230605025445.161932-4-npiggin@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
larx and stcx. are not defined to order any memory operations.
Remove the barriers.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20230605025445.161932-3-npiggin@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Differently-sized larx/stcx. pairs can succeed if the starting address
matches. Add a check to require the size of stcx. exactly match the larx
that established the reservation. Use the term "reserve_length" for this
state, which matches the terminology used in the ISA.
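In TCG terms the added check looks roughly like this (a sketch; local names
such as EA are illustrative):
    TCGLabel *lfail = gen_new_label();

    /* fail unless both the address and the length match the larx */
    tcg_gen_brcond_tl(TCG_COND_NE, EA, cpu_reserve_addr, lfail);
    tcg_gen_brcondi_tl(TCG_COND_NE, cpu_reserve_length, memop_size(memop), lfail);
    /* ... atomic compare-and-store of the new value goes here ... */
    gen_set_label(lfail);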
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20230605025445.161932-2-npiggin@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
lqarx does not set cpu_reserve, which causes stqcx. to never succeed.
Cc: qemu-stable@nongnu.org
Fixes: 94bf265867 ("target/ppc: Use atomic load for LQ and LQARX")
Fixes: 57b38ffd0c ("target/ppc: Use tcg_gen_qemu_{ld,st}_i128 for LQARX, LQ, STQ")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230605025445.161932-1-npiggin@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Move the PID register checks out into a separate function, which makes
mmubooke_check_tlb() simpler and avoids using goto.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <bd84d5f38af0ba2983ccd5c07635db49267c828f.1685448535.git.balaton@eik.bme.hu>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
No need to store the return value and break from the loop when we can
return directly.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <d470118c3adcbd41b1a91779f6bb7cbdb2b0d346.1685448535.git.balaton@eik.bme.hu>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Make lines shorter and fix indentation in some function prototypes.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <70952ba2d82141db1cf5cfcf4b227402be575874.1685448535.git.balaton@eik.bme.hu>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
This function is the only reason why ppcemb_tlb_check() is not static
to mmu_common.c but it also better fits in mmu_common.c so move it
there.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <b64fd712a773558dea9b84945c57785546c0ae2e.1685448535.git.balaton@eik.bme.hu>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
This is only used by one caller, so simplify the function by removing this
parameter and moving the operation to the single place where it's used.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <b21f11ae20e8a8c2e8b5d943f2bff12b5356005a.1685448535.git.balaton@eik.bme.hu>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The get_physical_address() function is a trivial wrapper of
get_physical_address_wtlb() that is only used once. Remove it and call
get_physical_address_wtlb() directly instead.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <302697d63d26caebefaeee1e45352145ebd0318a.1685448535.git.balaton@eik.bme.hu>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The PMU raises a performance monitor exception (causing an interrupt
when MSR[EE]=1) when MMCR0[PMAO] is set, and lowers it when clear.
Wire this up and implement the interrupt delivery for BookS. Linux perf
record can now collect PMI-driven samples.
fire_PMC_interrupt is renamed to perfm_alert, which more closely matches
the new terminology used in the ISA and distinguishes the alert
condition (e.g., counter overflow) from the PERFM (or EBB) interrupts.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20230530134313.387252-2-npiggin@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
A BookS msgsndp instruction to self, or a write to the DPDES register, can
cause SDOOR interrupts which crash QEMU with an 'exception not implemented' error.
Linux does not use msgsndp in SMT1, and KVM only uses DPDES to cause
doorbells when emulating a SMT guest (which is not the default), so
this has gone unnoticed.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20230530130526.372701-1-npiggin@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
msgclrp matches msgsndp and should clear PPC_INTERRUPT_DOORBELL.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20230530130714.373215-1-npiggin@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
FCECE does not get cleared according to the ISA v3.1B.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20230530134313.387252-1-npiggin@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Some of the PMU hflags bits can go out of synch, for example a store to
MMCR0 with PMCjCE=1 fails to update hflags correctly and results in
hflags mismatch:
qemu: fatal: TCG hflags mismatch (current:0x2408003d rebuilt:0x240a003d)
This can be reproduced by running perf on a recent machine.
Some of the fragility here is the duplication of PMU hflags calculations.
This change consolidates that in a single place to update pmu-related
hflags, to be called after a well defined state changes.
The post-load PMU update is pulled out of the MSR update because it does
not depend on the MSR value.
Fixes: 8b3d1c49a9 ("target/ppc: Add new PMC HFLAGS")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20230530130447.372617-1-npiggin@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
ppc hypervisors turn HEAI interrupts into program interrupts injected
into the guest that executed the illegal instruction, if the hypervisor
doesn't handle it some other way.
The nested-hv implementation failed to account for this HEAI->program
conversion. The virtual hypervisor wants to see the HEAI when running
a nested guest, so that interrupt type can be returned to its KVM
caller.
Fixes: 7cebc5db2e ("target/ppc: Introduce a vhyp framework for nested HV support")
Cc: balaton@eik.bme.hu
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20230530132127.385001-1-npiggin@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Use the FloatRelation enum to hold the comparison result (missed
in commit 71bfd65c5f "softfloat: Name compare relation enum").
Inspired-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Merge tag 'pull-tricore-20230607' of https://github.com/bkoppelmann/qemu into staging
- Refactor PCXI/ICR field handling in newer ISA versions
- Add simple tests written in C
* tag 'pull-tricore-20230607' of https://github.com/bkoppelmann/qemu:
tests/tcg/tricore: Add recursion test for CSAs
target/tricore: Fix wrong PSW for call insns
target/tricore: Refactor PCXI/ICR register fields
tests/tcg/tricore: Add first C program
tests/tcg/tricore: Uses label for memory addresses
tests/tcg/tricore: Move asm tests into 'asm' directory
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We were copying PSW into a local variable, updating PSW.CDE in the local
copy and never writing it back. So when we called save_context_upper() we
were using the non-local version of PSW, which did not contain the updated
PSW.CDE.
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-Id: <20230526061946.54514-6-kbastian@mail.uni-paderborn.de>
Starting from ISA version 1.6.1 (previously known as 1.6P/E), some
bitfields in PCXI and ICR have changed. We also refactor these
registers using the register fields API.
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1453
Message-Id: <20230526061946.54514-5-kbastian@mail.uni-paderborn.de>
Fixes the build for --disable-tcg.
This header is only needed for cross-hosting. Without CONFIG_TCG,
we know this is an AArch64 host, CONFIG_ATOMIC64 will be set, and
the TCG_OVERSIZED_GUEST block will never be compiled.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
* finish atomics revamp
* meson.build tweaks
* revert avocado update
* always upgrade/downgrade locally installed Python packages
* switch from submodules to subprojects
* remove --with-git= option
* rename --enable-pypi to --enable-download, control submodules and subprojects too
* tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (21 commits)
configure: remove --with-git-submodules=
build: remove git submodule handling from main makefile
meson: subprojects: replace berkeley-{soft,test}float-3 with wraps
pc-bios/s390-ccw: always build network bootloader
configure: move SLOF submodule handling to pc-bios/s390-ccw
meson: subprojects: replace submodules with wrap files
build: log submodule update from git-submodule.sh
git-submodule: allow partial update of .git-submodule-status
configure: rename --enable-pypi to --enable-download, control subprojects too
configure: remove --with-git= option
mkvenv: always pass locally-installed packages to pip
tests: Use separate virtual environment for avocado
Revert "tests/requirements.txt: bump up avocado-framework version to 101.0"
scsi/qemu-pr-helper: Drop support for 'old' libmultipath API
meson.build: Use -Wno-undef only for SDL2 versions that need it
meson.build: Group the audio backend entries in a separate summary section
meson.build: Group the network backend entries in a separate summary section
meson.build: Group the UI entries in a separate summary section
scripts: remove dead file
atomics: eliminate mb_read/mb_set
...
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Accessing EL0-accessible Debug Communication Channel (DCC) registers in
user mode emulation is currently enabled. However, it does not match
Linux behavior as Linux sets MDSCR_EL1.TDCC on startup to disable EL0
access to DCC (see __cpu_setup() in arch/arm64/mm/proc.S).
This patch fixes access_tdcc() to check MDSCR_EL1.TDCC for EL0 and sets
MDSCR_EL1.TDCC for user mode emulation to match Linux.
Signed-off-by: Zhuojia Shen <chaosdefinition@hotmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: DS7PR12MB630905198DD8E69F6817544CAC4EA@DS7PR12MB6309.namprd12.prod.outlook.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
DC CVAP and DC CVADP instructions can be executed in EL0 on Linux,
either directly when SCTLR_EL1.UCI == 1 or emulated by the kernel (see
user_cache_maint_handler() in arch/arm64/kernel/traps.c).
This patch enables execution of the two instructions in user mode
emulation.
Signed-off-by: Zhuojia Shen <chaosdefinition@hotmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Push the mte check behind the exclusive_addr check.
Document the several ways that we are still out of spec
with this implementation.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
FEAT_LSE2 only requires that atomic operations not cross a
16-byte boundary. Ordered operations may be completely
unaligned if SCTLR.nAA is set.
Because this alignment check is so special, do it by hand.
Make sure not to keep TCG temps live across the branch.
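The rule being checked by hand can be summarized as (a sketch; the helper
name is illustrative):
    /* LSE2: single-copy atomicity is only required when the access stays
     * within a 16-byte granule; ordered accesses may be fully unaligned
     * when SCTLR.nAA is set. */
    static bool lse2_needs_atomicity(uint64_t addr, unsigned size_in_bytes)
    {
        return ((addr & 15) + size_in_bytes) <= 16;
    }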
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-17-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Fixes a bug in that with SCTLR.A set, we should raise any
alignment fault before raising any MTE check fault.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Pass the individual memop to gen_mte_checkN.
For the moment, do nothing with it.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Pass the completed memop to gen_mte_check1_mmuidx.
For the moment, do nothing more than extract the size.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We are going to need the complete memop beforehand,
so let's not compute it twice.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We are going to need the complete memop beforehand,
so let's not compute it twice.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is required for LSE2, where the pair must be treated atomically if
it does not cross a 16-byte boundary. But it simplifies the code to do
this always.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
No need to duplicate this check across multiple call sites.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Round len_align to 16 instead of 8, handling an odd 8-byte as part
of the tail. Use MO_ATOM_NONE to indicate that all of these memory
ops have only byte atomicity.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This fixes a bug in that these two insns should have been using atomic
16-byte stores, since MTE is ARMv8.5 and LSE2 is mandatory from ARMv8.4.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
While we don't require 16-byte atomicity here, using a single larger
operation simplifies the code. Introduce finalize_memop_asimd for this.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
While we don't require 16-byte atomicity here, using a single larger
load simplifies the code, and makes it a closer match to STXP.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Let finalize_memop_atom be the new basic function, with
finalize_memop and finalize_memop_pair testing FEAT_LSE2
to apply the appropriate atomicity.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Document the meaning of exclusive_high in a big-endian context,
and why we can't change it now.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Guests can now be debugged through the gdbstub. Support is added for
single-stepping, software breakpoints, hardware breakpoints and
watchpoints. The code has been structured like the KVM counterpart.
While guest debugging is enabled, the guest can still read and write the
DBG*_EL1 registers but they don't have any effect.
Signed-off-by: Francesco Cagnin <fcagnin@quarkslab.com>
Message-id: 20230601153107.81955-5-fcagnin@quarkslab.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Required for guest debugging. The code has been structured like the KVM
counterpart.
Signed-off-by: Francesco Cagnin <fcagnin@quarkslab.com>
Message-id: 20230601153107.81955-4-fcagnin@quarkslab.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Required for guest debugging.
Signed-off-by: Francesco Cagnin <fcagnin@quarkslab.com>
Message-id: 20230601153107.81955-3-fcagnin@quarkslab.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These helpers will be also used for HVF. Aside from reformatting a
couple of comments for 'checkpatch.pl' and updating meson to compile
'hyp_gdbstub.c', this is just code motion.
Signed-off-by: Francesco Cagnin <fcagnin@quarkslab.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230601153107.81955-2-fcagnin@quarkslab.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
qatomic_mb_read and qatomic_mb_set were the very first atomic primitives
introduced for QEMU; their semantics are unclear and they provide a false
sense of safety.
The last use of qatomic_mb_read() has been removed, so delete it.
qatomic_mb_set() instead can survive as an optimized
qatomic_set()+smp_mb(), similar to Linux's smp_store_mb(), but
rename it to qatomic_set_mb() to match the order of the two
operations.
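Its intended semantics can be pictured as follows (a sketch of the semantics,
not necessarily the exact definition in qemu/atomic.h):
    /* store, then a full memory barrier -- cf. Linux's smp_store_mb() */
    #define qatomic_set_mb(ptr, val) \
        do { qatomic_set(ptr, val); smp_mb(); } while (0)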
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This had been pulled in via exec/translator.h,
but the include of exec-all.h will be removed.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This had been pulled in via exec/exec-all.h, via exec/translator.h,
but the include of exec-all.h will be removed.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Move most includes from *translate*.c to translate.h, ensuring
that we get the ordering correct. Ensure cpu.h is first.
Use disas/disas.h instead of exec/log.h.
Drop otherwise unused includes.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Move most includes from *translate*.c to translate.h, ensuring
that we get the ordering correct. Ensure cpu.h is first.
Use disas/disas.h instead of exec/log.h.
Drop otherwise unused includes.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Now that gen_icount_io_start() is a simple wrapper to
translator_io_start(), inline it.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230602095439.48102-1-philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
New wrapper around gen_io_start which takes care of the USE_ICOUNT
check, as well as marking the DisasContext to end the TB.
Remove exec/gen-icount.h.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
From this remove, it's no longer clear what this is attempting
to protect. The last time a use of this define was added to
the source tree, as opposed to merely moved around, was 2008.
There have been many cleanups since that time and this is
no longer required for the build to succeed.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This will enable replacement of TARGET_INSN_START_WORDS in tcg.c.
Split out "tcg/insn-start-words.h" and use it in target/.
Reviewed-by: Anton Johansson <anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Fixes an assert in tcg_gen_code that we don't accidentally
eliminate an insn_start during optimization.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Create helper-proto-common.h without the target specific portion.
Use that in tcg-op-common.h. Include helper-proto.h in target/arm
and target/hexagon before helper-info.c.inc; all other targets are
already correct in this regard.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Create helper-gen-common.h without the target specific portion.
Use that in tcg-op-common.h. Reorg headers in target/arm to
ensure that helper-gen.h is included before helper-info.c.inc.
All other targets are already correct in this regard.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
In preparation for compiling tcg/ only once, eliminate
the all_helpers array. Instantiate the info structs for
the generic helpers in accel/tcg/, and the structs for
the target-specific helpers in each translate.c.
Since we don't see all of the info structs at startup,
initialize at first use, using g_once_init_* to make
sure we don't race while doing so.
Reviewed-by: Anton Johansson <anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This had been included via tcg-op-common.h via tcg-op.h,
but that is going away. In idef-parser.y, shuffle some
tcg related includes into a more logical order.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This had been included via tcg-op-common.h via tcg-op.h,
but that is going away.
It is needed for inlines within translator.h, so we might as well
do it there and not individually in each translator c file.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
These two items are the last uses of TARGET_LONG_BITS within tcg.h,
and are more in common with the other "_tl" definitions within that file.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Move a use of TARGET_LONG_BITS out of tcg/tcg.h.
Include the new file only where required.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The symbol is always defined, even if to 0. We wanted to test for
TCG_OVERSIZED_GUEST == 0.
This fixed, the #error is reached while building arm-softmmu, because
TCG_OVERSIZED_GUEST is not true (nor supposed to be true) for arm32
guest on a 32-bit host. But that's ok, because this feature doesn't
apply to arm32. Add an #ifdef for TARGET_AARCH64.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This had been pulled in from exec/cpu_ldst.h, via exec/exec-all.h,
but the include of tcg.h will be removed.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This had been pulled in from tcg/tcg.h, via exec/cpu_ldst.h,
via exec/exec-all.h, but the include of tcg.h will be removed.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This had been pulled in from tcg/tcg.h, via exec/cpu_ldst.h,
via exec/exec-all.h, but the include of tcg.h will be removed.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
In qemu-user-s390x, /proc/cpuinfo contains:
processor 0: version = 00, identification = 000000, machine = 8561
processor 1: version = 00, identification = 400000, machine = 8561
The highest nibble is supposed to contain the CPU address, but it's off
by 2 bits. Fix the shift value and provide a symbolic constant for it.
With the fix we get:
processor 0: version = 00, identification = 000000, machine = 8561
processor 1: version = 00, identification = 100000, machine = 8561
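The arithmetic behind the fix (the constant name below is illustrative): the
CPU address occupies the top nibble of the 24-bit identification, so CPU 1
must contribute 0x100000 (1 << 20) rather than 0x400000 (1 << 22):
    #define CPU_ID_CPU_ADDR_SHIFT 20   /* was effectively 22, i.e. off by 2 bits */

    uint32_t identification = (cpu_addr << CPU_ID_CPU_ADDR_SHIFT) | serial_nr;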
Fixes: 076d4d39b6 ("s390x/cpumodel: wire up cpu type + id for TCG")
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-Id: <20230605113950.1169228-2-iii@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
These instructions multiply 64 bits by 64 bits, not 128 bits by 64 bits.
Reported-by: Tulio Magno Quites Machado Filho <tuliom@redhat.com>
Fixes: 2b91240f95 ("target/s390x: Use Int128 for passing float128")
Cc: qemu-stable@nongnu.org
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2211472
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-Id: <20230601223027.795501-2-iii@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
LCBB is supposed to overwrite only the bottom 32 bits, but QEMU
erroneously overwrites the entire register.
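One way to express the intended behavior in TCG (a sketch, not the actual
change to the instruction's output handling):
    /* replace only bits [31:0] of r1 with the computed count */
    tcg_gen_deposit_i64(regs[r1], regs[r1], count, 0, 32);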
Fixes: 6d9303322e ("s390x/tcg: Implement LOAD COUNT TO BLOCK BOUNDARY")
Cc: qemu-stable@nongnu.org
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-Id: <20230526181240.1425579-2-iii@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
We currently need to select ARM_V7M unconditionally when TCG is
present in the build because some translate.c helpers and the whole of
m_helpers.c are not yet under CONFIG_ARM_V7M.
Suggested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230523180525.29994-2-farosas@suse.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For M-profile, there is no guest-facing A-profile format FSR, but we
still use the env->exception.fsr field to pass fault information from
the point where a fault is raised to the code in
arm_v7m_cpu_do_interrupt() which interprets it and sets the M-profile
specific fault status registers. So it doesn't matter whether we
fill in env->exception.fsr in the short format or the LPAE format, as
long as both sides agree. As it happens arm_v7m_cpu_do_interrupt()
assumes short-form.
In compute_fsr_fsc() we weren't explicitly choosing short-form for
M-profile, but instead relied on it falling out in the wash because
arm_s1_regime_using_lpae_format() would be false. This was broken in
commit 452c67a4 when we added v8R support, because we said "PMSAv8 is
always LPAE format" (as it is for v8R), forgetting that we were
implicitly using this code path on M-profile. At that point we would
hit a g_assert_not_reached():
ERROR:../../target/arm/internals.h:549:arm_fi_to_lfsc: code should not be reached
#7 0x0000555555e055f7 in arm_fi_to_lfsc (fi=0x7fffecff9a90) at ../../target/arm/internals.h:549
#8 0x0000555555e05a27 in compute_fsr_fsc (env=0x555557356670, fi=0x7fffecff9a90, target_el=1, mmu_idx=1, ret_fsc=0x7fffecff9a1c)
at ../../target/arm/tlb_helper.c:95
#9 0x0000555555e05b62 in arm_deliver_fault (cpu=0x555557354800, addr=268961344, access_type=MMU_INST_FETCH, mmu_idx=1, fi=0x7fffecff9a90)
at ../../target/arm/tlb_helper.c:132
#10 0x0000555555e06095 in arm_cpu_tlb_fill (cs=0x555557354800, address=268961344, size=1, access_type=MMU_INST_FETCH, mmu_idx=1, probe=false, retaddr=0)
at ../../target/arm/tlb_helper.c:260
The specific assertion changed when commit fcc7404eff added
"assert not M-profile" to arm_is_secure_below_el3(), because the
conditions being checked in compute_fsr_fsc() include
arm_el_is_aa64(), which will end up calling arm_is_secure_below_el3()
and asserting before we try to call arm_fi_to_lfsc():
#7 0x0000555555efaf43 in arm_is_secure_below_el3 (env=0x5555574665a0) at ../../target/arm/cpu.h:2396
#8 0x0000555555efb103 in arm_is_el2_enabled (env=0x5555574665a0) at ../../target/arm/cpu.h:2448
#9 0x0000555555efb204 in arm_el_is_aa64 (env=0x5555574665a0, el=1) at ../../target/arm/cpu.h:2509
#10 0x0000555555efbdfd in compute_fsr_fsc (env=0x5555574665a0, fi=0x7fffecff99e0, target_el=1, mmu_idx=1, ret_fsc=0x7fffecff996c)
Avoid the assertion and the incorrect FSR format selection by
explicitly making M-profile use the short-format in this function.
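The change amounts to short-circuiting the format choice for M-profile
before any of the A-profile checks run; roughly (assumed shape of the
compute_fsr_fsc() logic, not a verbatim hunk):

    if (!arm_feature(env, ARM_FEATURE_M) &&
        (target_el == 2 || arm_el_is_aa64(env, target_el) ||
         arm_s1_regime_using_lpae_format(env, mmu_idx))) {
        /* LPAE (long-descriptor) format */
        fsr = arm_fi_to_lfsc(fi);
        fsc = extract32(fsr, 0, 6);
    } else {
        /* short format, as arm_v7m_cpu_do_interrupt() expects */
        fsr = arm_fi_to_sfsc(fi);
        fsc = 0x3f;
    }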
Fixes: 452c67a427 ("target/arm: Enable TTBCR_EAE for ARMv8-R AArch32")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1658
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230523131726.866635-1-peter.maydell@linaro.org
POWER9 DD2.1 and earlier had significant limitations when running KVM,
including lack of "mixed mode" MMU support (ability to run HPT and RPT
mode on threads of the same core), and a translation prefetch issue
which is worked around by disabling "AIL" mode for the guest.
These processors are not widely available, and it is difficult to deal
with all these quirks in QEMU with or without KVM, so create a POWER9
DD2.2 CPU and make it the default POWER9 CPU.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Frederic Barrat <fbarrat@linux.ibm.com>
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Message-Id: <20230515160201.394587-1-npiggin@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Instead of computing an artificial "class" bitmask and then converting it
to the fprf value, compute the final value from the start.
Reorder the tests to check the most likely cases first.
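As a self-contained illustration of the idea (not the QEMU helper itself),
the 5-bit FPRF value can be derived directly from the float64 encoding,
with the common case checked first; the encodings below follow the Power
ISA (0x04/0x08 +/- normal, 0x02/0x12 +/- zero, 0x14/0x18 +/- denormal,
0x05/0x09 +/- infinity, 0x11 NaN):

    static unsigned fprf_of_float64_bits(uint64_t bits)
    {
        unsigned exp  = (bits >> 52) & 0x7ff;
        uint64_t frac = bits & 0xfffffffffffffULL;
        bool neg      = bits >> 63;

        if (exp != 0 && exp != 0x7ff) {
            return neg ? 0x08 : 0x04;      /* most likely: normal number */
        } else if (exp == 0) {
            if (frac == 0) {
                return neg ? 0x12 : 0x02;  /* zero */
            }
            return neg ? 0x18 : 0x14;      /* denormal */
        } else if (frac == 0) {
            return neg ? 0x09 : 0x05;      /* infinity */
        }
        return 0x11;                       /* NaN */
    }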
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230523202507.688859-1-richard.henderson@linaro.org>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
QEMU's PVR value for POWER9 DD2.0 has chip type 1, which is the SMT4
"small core" type that OpenPOWER processors use. QEMU's PVR values for all
other POWER9/POWER10 CPUs have chip type 0, which "enterprise" systems use.
The difference does not really matter to QEMU (because it does not care
about SMT mode in the target), but for consistency all PVRs should use
the same chip type. We'll go with the SMT4 OpenPOWER type.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20230515160131.394562-1-npiggin@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The Address Translation Mode on Interrupt resource is not consistently
supported by all CPU versions or all KVM versions: KVM HV does not support
mode 2, and does not support mode 3 on POWER7 or early POWER9 processors.
KVM PR only supports mode 0. TCG supports all modes (0, 2, 3) on CPUs with
support for the corresponding LPCR[AIL] mode.
This leads to inconsistencies in guest behaviour and could cause problems
migrating guests.
This was not noticeable for Linux guests for a long time because the
kernel only uses modes 0 and 3, and it used to consider AIL-3 to be
advisory in that it would always keep the AIL-0 vectors around, so it
did not matter whether or not interrupts were delivered according to
the AIL mode. Recent Linux guests depend on AIL mode 3 working as
specified in order to support the SCV facility interrupt. If AIL-3 cannot
be provided, then H_SET_MODE must return an error to Linux so it can
disable the SCV facility (failure to do so can lead to userspace being
able to crash the guest kernel).
Add the ail-mode-3 capability to specify that AIL-3 is supported. AIL-0
is implied as the baseline, and AIL-2 is no longer supported by spapr.
AIL-2 is not known to be used by any software, but support in TCG could
be restored with an ail-mode-2 capability quite easily if a regression
is reported.
Modify the H_SET_MODE Address Translation Mode on Interrupt resource
handler to check capabilities and correctly return an error if the
requested mode is not supported.
KVM has a cap to advertise support for AIL-3.
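The capability check itself is simple; a sketch with illustrative names
and return values (not the actual spapr hunk):

    /* 0 = success, -1 = tell the guest the mode is unavailable. */
    static int check_ail_mode(bool cap_ail_mode_3, unsigned int mode)
    {
        switch (mode) {
        case 0:
            return 0;                        /* baseline, always supported */
        case 3:
            return cap_ail_mode_3 ? 0 : -1;  /* error => guest disables SCV */
        default:
            return -1;                       /* AIL-2 and anything else */
        }
    }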
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20230515160216.394612-1-npiggin@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
This optional behavior was removed from the ISA in v3.0, see
Summary of Changes preface:
Data Storage Interrupt Status Register for Alignment Interrupt:
Simplifies the Alignment interrupt by removing the Data Storage
Interrupt Status Register (DSISR) from the set of registers modified
by the Alignment interrupt.
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20230515092655.171206-5-npiggin@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>