Allow both user-only and system mode to run pa2.0 cpus.
Avoid creating a separate qemu-system-hppa64 binary;
force the qemu-hppa binary to use TARGET_ABI32.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
There is no support for hppa64 in gdb. Any attempt to provide the
data for the larger hppa64 registers results in an error from gdb.
Mask CR_SAR writes to the width of the register: 5 or 6 bits.
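As a rough sketch (not the actual QEMU code), the masking amounts to:

    #include <stdint.h>
    #include <stdbool.h>

    /* Illustrative only: SAR holds 6 bits on a 64-bit (PA2.0) CPU and
     * 5 bits on a 32-bit (PA1.x) CPU; extra bits are stripped on write. */
    static uint64_t put_sar(uint64_t val, bool is_pa20)
    {
        return val & (is_pa20 ? 63 : 31);
    }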
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Hoist the resolution of d up one level above do_unit_cond.
All computations are logical, and are simplified by using a mask of the
correct width, after which the result may be compared with zero.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Hoist the resolution of d up one level above do_sed_cond.
The MOVB comparison and the existing shift/extract/deposit
are all 32-bit.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The sar shift amount register is limited to 5 bits when running
a 32-bit CPU. Strip off the remaining bits.
The interesting part is that this register allows detecting at runtime
whether a physical CPU is capable of executing PA2.0 (64-bit) instructions.
Signed-off-by: Helge Deller <deller@gmx.de>
We need to make sure the link is masked properly along the
use_nullify_skip path. The other three settings of a link
register already use this.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This will be how we ensure that the IAOQ is always
valid per PSW.W, therefore all stores to these two
variables must be done with this function.
Use third argument -1 if the destination is always dynamic,
and fourth argument NULL if the destination is always static.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
In form_gva and cpu_get_tb_cpu_state, we must truncate when PSW_W == 0.
In space_select, the bits that choose the space depend on PSW_W.
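A minimal sketch of the offset truncation (illustrative only; the real
form_gva also merges in the space register):

    #include <stdint.h>
    #include <stdbool.h>

    /* With PSW.W clear the offset is a 32-bit quantity; with PSW.W set
     * the full 64-bit offset is used. */
    static uint64_t gva_offset(uint64_t off, bool psw_w)
    {
        return psw_w ? off : (uint32_t)off;
    }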
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Dump all 64 bits for pa2.0 and low 32 bits for pa1.x.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
With pa2.0, absolute addresses are not the same as physical addresses,
and undergo a transformation based on PSW_W.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
With 64-bit registers, there are 16 carry bits in the PSW.
Clear reserved bits based on cpu revision.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Prepare for the qemu binary supporting both pa10 and pa20
at the same time.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Select the proper carry bit for input to the arithmetic
and for output for the condition.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This instruction always uses the input carry from bit 32,
but produces all 16 output carry bits.
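For context, the PSW carry/borrow field holds one carry per 4-bit digit
of the result; a rough model of producing all 16 carries of a 64-bit
addition (not the TCG implementation) is:

    #include <stdint.h>

    /* Illustrative only: carry out of each 4-bit digit of a 64-bit add. */
    static uint16_t carry_bits(uint64_t a, uint64_t b, unsigned cin)
    {
        uint16_t cb = 0;
        for (int i = 0; i < 16; i++) {
            unsigned da = (a >> (4 * i)) & 0xf;
            unsigned db = (b >> (4 * i)) & 0xf;
            cin = (da + db + cin) >> 4;
            cb |= cin << i;
        }
        return cb;
    }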
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The destination is TCGv_i32, so use tcg_gen_qemu_ld_i32
not tcg_gen_qemu_ld_reg.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Replace with tcg_temp_new_tl without recording into ctx.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Replace with tcg_temp_new without recording into ctx.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Complete the data structure conversion started earlier. This reduces
the perf overhead of hppa_get_physical_address from ~5% to ~0.25%.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
No need to trigger the large_page_mask code unnecessarily.
Drop the now unused HPPATLBEntry.page_size field.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Replace the va_b and va_e fields with the interval tree node.
The actual interval tree is not yet used.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Use a separate mmu index for PSW_P enabled vs disabled.
This means we can elide the tlb flush in cpu_hppa_put_psw
when PSW_P changes. This turns out to be the majority
of all tlb flushes.
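The idea, sketched very roughly (the index layout below is an assumption,
not the real one):

    #include <stdbool.h>

    /* Illustrative only: fold PSW.P into the MMU index, so toggling
     * protection-ID checking selects a different softmmu TLB rather
     * than forcing a flush. */
    static int hppa_mmu_idx_sketch(unsigned priv, bool psw_p)
    {
        return priv * 2 + (psw_p ? 1 : 0);
    }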
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Selected bugfixes for mainline and stable, especially to the per-vCPU
local APIC vector delivery mode for event channel notifications, which
was broken in a number of ways.
The xen-block driver has been defaulting to the wrong protocol for x86
guests, and this fixes that. It is technically an incompatible change,
but I'm fairly sure nobody relies on the broken behaviour (and in
production I *have* seen guests which rely on the correct behaviour,
which now matches the blkback driver in the Linux kernel).
A handful of other simple fixes for issues which came to light as new
features (qv) were being developed.
Merge tag 'pull-xenfv-stable-20231106' of git://git.infradead.org/users/dwmw2/qemu into staging
Bugfixes for emulated Xen support
Selected bugfixes for mainline and stable, especially to the per-vCPU
local APIC vector delivery mode for event channel notifications, which
was broken in a number of ways.
The xen-block driver has been defaulting to the wrong protocol for x86
guests, and this fixes that. It is technically an incompatible change,
but I'm fairly sure nobody relies on the broken behaviour (and in
production I *have* seen guests which rely on the correct behaviour,
which now matches the blkback driver in the Linux kernel).
A handful of other simple fixes for issues which came to light as new
features (qv) were being developed.
* tag 'pull-xenfv-stable-20231106' of git://git.infradead.org/users/dwmw2/qemu:
hw/xen: use correct default protocol for xen-block on x86
hw/xen: take iothread mutex in xen_evtchn_reset_op()
hw/xen: fix XenStore watch delivery to guest
hw/xen: don't clear map_track[] in xen_gnttab_reset()
hw/xen: select kernel mode for per-vCPU event channel upcall vector
i386/xen: fix per-vCPU upcall vector for Xen emulation
i386/xen: Don't advertise XENFEAT_supervisor_mode_kernel
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Using a mask instead of the number of PMU devices supports the accurate
emulation of platforms that have a discontinuous set of PMU counters.
The "pmu-num" property now generates a warning when used by the user on
the command line.
Rather than storing the value of "pmu-num", convert it directly to the
mask if it is specified (overwriting the default "pmu-mask" value);
likewise, the value is calculated from the mask when the property is
read.
In the unusual situation that both "pmu-mask" and "pmu-num" are
provided, the order on the command line determines which takes
precedence (later overwriting earlier).
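A rough sketch of the conversion, assuming the programmable counters are
hpmcounter3..hpmcounter31 (i.e. bits 3 and up of the mask):

    #include <stdint.h>

    /* Illustrative only: derive the counter mask from a counter count,
     * and a count back from a mask. */
    static uint32_t pmu_num_to_mask(unsigned pmu_num)
    {
        return ((1u << pmu_num) - 1) << 3;   /* bits 3 .. pmu_num + 2 */
    }

    static unsigned pmu_mask_to_num(uint32_t pmu_mask)
    {
        return __builtin_popcount(pmu_mask);
    }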
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20231031154000.18134-5-rbradford@rivosinc.com>
[Changes by AF
- Fixup ext_zihpm logic after rebase
]
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
During the FDT generation use the existing mask containing the enabled
counters rather than generating a new one. Using the existing mask will
support the use of discontinuous counters.
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Message-ID: <20231031154000.18134-4-rbradford@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Check the PMU available bitmask when checking if a counter is valid
rather than comparing the index against the number of PMUs.
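Schematically (a sketch, not the exact QEMU code):

    #include <stdint.h>
    #include <stdbool.h>

    /* Illustrative only: a counter index is valid iff its bit is set in
     * the available-counters mask, not merely below some counter count. */
    static bool pmu_ctr_valid(uint32_t pmu_avail_ctrs, unsigned ctr_idx)
    {
        return (pmu_avail_ctrs >> ctr_idx) & 1;
    }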
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Message-ID: <20231031154000.18134-3-rbradford@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
More closely follow the QEMU style by returning an Error and
propagating it if there is an error relating to the PMU setup.
Further simplify the function by removing the num_counters parameter as
this is available from the passed in cpu pointer.
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Message-ID: <20231031154000.18134-2-rbradford@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Set the Ibex CPU priv to 1.12.0 to ensure that smepmp/epmp is correctly
enabled.
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20231102003424.2003428-3-alistair.francis@wdc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The vector crypto specification has been ratified, so move these
extensions from riscv_cpu_experimental_exts to riscv_cpu_extensions.
Signed-off-by: Max Chou <max.chou@sifive.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20231026151828.754279-11-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Expose the properties of ShangMi Algorithm Suite related extensions
(Zvks, Zvksc, Zvksg).
Signed-off-by: Max Chou <max.chou@sifive.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20231026151828.754279-10-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The vector crypto spec defines the ShangMi algorithm suite related
extensions (Zvks, Zvksc, Zvksg), each of which is a combination of
several vector crypto extensions.
Signed-off-by: Max Chou <max.chou@sifive.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20231026151828.754279-9-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Expose the properties of NIST Algorithm Suite related extensions (Zvkn,
Zvknc, Zvkng).
Signed-off-by: Max Chou <max.chou@sifive.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20231026151828.754279-8-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The vector crypto spec defines the NIST algorithm suite related
extensions (Zvkn, Zvknc, Zvkng), each of which is a combination of
several vector crypto extensions.
Signed-off-by: Max Chou <max.chou@sifive.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20231026151828.754279-7-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Max Chou <max.chou@sifive.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20231026151828.754279-6-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The Zvkb extension is a proper subset of the Zvbb extension and includes
the following instructions:
* vandn.[vv,vx]
* vbrev8.v
* vrev8.v
* vrol.[vv,vx]
* vror.[vv,vx,vi]
Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20231026151828.754279-5-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Since the vector crypto spec v1.0.0-rc3 release, the Zvkb extension is
defined as a proper subset of the Zvbb extension, and both the Zvkn and
Zvks shorthand extensions now include Zvkb in place of Zvbb.
Signed-off-by: Max Chou <max.chou@sifive.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20231026151828.754279-4-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Max Chou <max.chou@sifive.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20231026151828.754279-3-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The vector crypto spec defines the Zvkt extension, which includes all of
the instructions of the Zvbb and Zvbc extensions plus some other vector
instructions.
Signed-off-by: Max Chou <max.chou@sifive.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20231026151828.754279-2-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The CSR register mseccfg is used by multiple extensions: Smepmp and Zkr.
Consider this when checking the existence of the register.
Fixes: 77442380ec ("target/riscv: rvk: add CSR support for Zkr")
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20231030102105.19501-1-heinrich.schuchardt@canonical.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
These regs were added in Linux 6.6.
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20231031205150.208405-1-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Add zihpm support in the KVM driver now that QEMU supports it.
This reg was added in Linux 6.6.
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20231023153927.435083-5-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
zihpm is the Hardware Performance Counters extension described in
chapter 12 of the unprivileged spec. It describes support for 29
unprivileged performance counters, hpmcounter3-hpmcounter31.
As with zicntr, QEMU already implemented zihpm before it was even an
extension. zihpm is also part of the RVA22 profile, so add it to QEMU
to complement the future profile implementation. Default it to 'true'
for all existing CPUs since it was always present in the code.
As for disabling it, there is already code in place in
target/riscv/csr.c in all predicates for these counters (ctr() and
mctr()) that disables them if cpu->cfg.pmu_num is zero. Thus, setting
cpu->cfg.pmu_num to zero if 'zihpm=false' is enough to disable the
extension.
Set cpu->pmu_avail_ctrs mask to zero as well since this is also checked
to verify if the counters exist.
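In realize() that could look roughly like this (field names follow the
text above; the struct shape is made up for the sketch):

    #include <stdint.h>
    #include <stdbool.h>

    struct pmu_cfg { bool ext_zihpm; unsigned pmu_num; uint32_t pmu_avail_ctrs; };

    /* Illustrative only: with zihpm disabled, expose no programmable counters. */
    static void apply_zihpm(struct pmu_cfg *cfg)
    {
        if (!cfg->ext_zihpm) {
            cfg->pmu_num = 0;        /* ctr()/mctr() predicates then fail */
            cfg->pmu_avail_ctrs = 0; /* no counters are reported to exist */
        }
    }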
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20231023153927.435083-4-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Add zicntr support in the KVM driver now that QEMU supports it.
This reg was added in Linux 6.6.
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20231023153927.435083-3-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
zicntr is the Base Counters and Timers extension described in chapter 12
of the unprivileged spec. It describes support for RDCYCLE, RDTIME and
RDINSTRET.
QEMU already implemented it in TCG long before it was a discrete
extension. zicntr is part of the RVA22 profile, so let's add it to QEMU
to make the future profile implementation flag complete. Given that it
represents an already existing feature, default it to 'true' for all
CPUs.
For TCG, we need a way to disable zicntr if the user wants to. This is
done by restricting access to the CYCLE, TIME, and INSTRET counters via
the 'ctr()' predicate when we're about to access them.
Disabling zicntr happens via the command line or if its dependency,
zicsr, happens to be disabled. We'll check for zicsr during realize()
and, in case it's absent, disable zicntr. However, if the user was
explicit about having zicntr support, error out instead of disabling it.
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20231023153927.435083-2-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
As per the Priv spec: "The R, W, and X fields form a collective WARL
field for which the combinations with R=0 and W=1 are reserved."
However, such writes are currently not ignored as they ought to be. The
combinations with RW=01 are allowed only when the Smepmp extension
is enabled and mseccfg.MML is set.
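Schematically, the intended WARL behaviour is (a sketch, not the QEMU
code; PMP_R/PMP_W follow the pmpcfg bit layout):

    #include <stdint.h>
    #include <stdbool.h>

    #define PMP_R 0x1
    #define PMP_W 0x2

    /* Illustrative only: ignore a write of the reserved R=0,W=1 encoding
     * unless Smepmp is enabled and mseccfg.MML is set. */
    static uint8_t pmpcfg_write(uint8_t old, uint8_t val, bool smepmp, bool mml)
    {
        if ((val & PMP_W) && !(val & PMP_R) && !(smepmp && mml)) {
            return old;   /* WARL: keep the previous value */
        }
        return val;
    }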
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20231019065705.1431868-1-mchitale@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
As per the Priv and Smepmp specifications, certain bits such as the 'L'
bit of pmp entries and mseccfg.MML can only be cleared upon reset and it
is necessary to do so to allow 'M' mode firmware to correctly reinitialize
the pmp/smepmp state across reboots. As required by the spec, also clear
the 'A' field of pmp entries.
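A sketch of the reset path (bit positions per the Priv spec; not the
literal QEMU code):

    #include <stdint.h>

    #define PMP_A 0x18   /* address-matching mode, pmpcfg bits 3..4 */
    #define PMP_L 0x80   /* lock bit, pmpcfg bit 7 */

    /* Illustrative only: clear L and A in every pmpNcfg and clear
     * mseccfg.MML so M-mode firmware can reprogram the PMP. */
    static void pmp_reset_sketch(uint8_t *pmpcfg, int n, uint64_t *mseccfg)
    {
        for (int i = 0; i < n; i++) {
            pmpcfg[i] &= (uint8_t)~(PMP_L | PMP_A);
        }
        *mseccfg &= ~(uint64_t)1;   /* MML is mseccfg bit 0 */
    }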
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20231019065644.1431798-1-mchitale@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Smepmp is a ratified extension which QEMU refers to as epmp.
Rename epmp to smepmp and add it to the extension list so that
it is added to the ISA string.
Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20231019065546.1431579-1-mchitale@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Use the recently added riscv_cpu_accelerator_compatible() to filter
unavailable CPUs for a given accelerator. At this moment the relevant
case is a QEMU binary built with both KVM and TCG support, running with
TCG, and being asked about a KVM-only CPU:
qemu-system-riscv64 -S -M virt,accel=tcg -display none
-qmp tcp:localhost:1234,server,wait=off
./qemu/scripts/qmp/qmp-shell localhost:1234
(QEMU) query-cpu-model-expansion type=full model={"name":"host"}
{"error": {"class": "GenericError", "desc": "'host' CPU not available with tcg"}}
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20231018195638.211151-7-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Add an API to check if a given CPU is compatible with the current
accelerator.
This will allow query-cpu-model-expansion to work properly in conditions
where QEMU supports both accelerators (TCG and KVM), QEMU is then
launched using TCG, and the API requests information about a KVM only
CPU (e.g. 'host' CPU).
KVM doesn't have such restrictions and, at least in theory, all CPU
models should work with KVM. We will revisit this API in case we decide
to restrict the amount of KVM CPUs we support.
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20231018195638.211151-6-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Callers can add 'props' when querying for a CPU model expansion to see
whether a given CPU model supports certain criteria, and what the
resulting CPU object is.
If we have 'props' to handle, gather it in a QDict and use the new
riscv_cpuobj_validate_qdict_in() helper to validate it. This helper will
add the custom properties in the CPU object and validate it using
riscv_cpu_finalize_features(). Users will be aware of validation errors
if any occur; if not, a CPU object with 'props' will be returned.
Here's an example with the veyron-v1 vendor CPU. Disabling vendor CPU
extensions is allowed, assuming the final config is valid. Disabling
'smstateen' is a valid expansion:
(QEMU) query-cpu-model-expansion type=full model={"name":"veyron-v1","props":{"smstateen":false}}
{"return": {"model": {"name": "veyron-v1", "props": {"zicond": false, ..., "smstateen": false, ...}
But enabling extensions isn't allowed for vendor CPUs. E.g. enabling 'V'
for the veyron-v1 CPU isn't allowed:
(QEMU) query-cpu-model-expansion type=full model={"name":"veyron-v1","props":{"v":true}}
{"error": {"class": "GenericError", "desc": "'veyron-v1' CPU does not allow enabling extensions"}}
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20231018195638.211151-5-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The query-cpu-model-expansion API is capable of passing extra properties
to a given CPU model and telling callers whether this custom
configuration is valid.
The RISC-V version of the API is not quite there yet. The reason is the
realize() flow in the TCG driver, where most of the validation is done
in tcg_cpu_realizefn(). riscv_cpu_finalize_features() is then used to
validate satp_mode for both TCG and KVM CPUs.
Our ARM friends use a concept of 'finalize_features()', a step done at
the end of realize() where the CPU features are validated. We have a
riscv_cpu_finalize_features() helper that, at this moment, is only
validating satp_mode.
Re-use this existing helper to do all the CPU extension validation we
require at the end of realize(). Make it public to allow APIs to
use it. At this moment only the TCG driver requires a realize() time
validation, thus, to avoid adding accelerator specific helpers in the
API, riscv_cpu_finalize_features() uses
riscv_tcg_cpu_finalize_features() if we are running TCG. The API will
then use riscv_cpu_finalize_features() regardless of the current
accelerator.
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20231018195638.211151-4-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
We got along without property getters in the KVM driver because we never
needed them. But the incoming query-cpu-model-expansion API will use
property getters and setters to retrieve the CPU characteristics.
Add the missing getters for the KVM driver for both MISA and
multi-letter extension properties. We're also adding a special getter
for absent multi-letter properties that KVM doesn't implement, which
always returns false.
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20231018195638.211151-2-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
This change adds support for inserting virtual interrupts from HS-mode
into VS-mode using hvien and hvip csrs. This also allows for IRQ filtering
from HS-mode.
Also, the spec doesn't mandate that the interrupt actually be supported
in hardware, which allows HS-mode to assert virtual interrupts to VS-mode
that have no connection to any real interrupt events.
This is defined as part of the AIA specification [0], "6.3.2 Virtual
interrupts for VS level".
[0]: https://github.com/riscv/riscv-aia/releases/download/1.0/riscv-interrupts-1.0.pdf
Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20231016111736.28721-7-rkanwal@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
This change adds support for inserting virtual interrupts from M-mode
into S-mode using mvien and mvip csrs. IRQ filtering is a use case of
this change, i.e. M-mode can stop delegating an interrupt to S-mode and
instead enable it in MIE and receive those interrupts in M-mode and then
selectively inject the interrupt using mvien and mvip.
Also, the spec doesn't mandate that the interrupt actually be supported
in hardware, which allows M-mode to assert virtual interrupts to S-mode
that have no connection to any real interrupt events.
This is defined as part of the AIA specification [0], "5.3 Interrupt
filtering and virtual interrupts for supervisor level".
[0]: https://github.com/riscv/riscv-aia/releases/download/1.0/riscv-interrupts-1.0.pdf
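Conceptually (a simplification of the aliasing the spec describes, not
the QEMU code): for each interrupt whose mvien bit is set, the S-level
pending bit is taken from mvip instead of the real mip.

    #include <stdint.h>

    /* Illustrative only. */
    static uint64_t s_level_pending(uint64_t mip, uint64_t mvip, uint64_t mvien)
    {
        return (mip & ~mvien) | (mvip & mvien);
    }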
Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20231016111736.28721-6-rkanwal@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
This is to allow virtual interrupts to be inserted into S and VS
modes. Given virtual interrupts will be maintained in separate
mvip and hvip CSRs, riscv_cpu_update_mip will no longer be in the
path and interrupts need to be triggered for these cases from
rmw_hvip64 and rmw_mvip64 functions.
Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20231016111736.28721-5-rkanwal@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
With the H extension supported, the VS bits are all hardwired to one in
MIDELEG, denoting always-delegated interrupts. This is being done in
rmw_mideleg, but given that mideleg is used in other places when routing
interrupts, this change initializes it in riscv_cpu_realize to be on the
safe side.
Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20231016111736.28721-4-rkanwal@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
RISCV_EXCP_SEMIHOST is set to 0x10, which can be a local interrupt id
as well. This change moves RISCV_EXCP_SEMIHOST into the switch case so
that the async flag check is performed before invoking semihosting logic.
Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20231016111736.28721-3-rkanwal@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Add a leading 'z' to improve grepping. When one wants to search for uses
of zicboz they're more likely to do 'grep -i zicboz' than 'grep -i
icboz'.
Suggested-by: Andrew Jones <ajones@ventanamicro.com>
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-ID: <20231012164604.398496-5-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Add a leading 'z' to improve grepping. When one wants to search for uses
of zicbom they're more likely to do 'grep -i zicbom' than 'grep -i
icbom'.
Suggested-by: Andrew Jones <ajones@ventanamicro.com>
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-ID: <20231012164604.398496-4-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Add a leading 'z' to improve grepping. When one wants to search for uses
of zicsr they're more likely to do 'grep -i zicsr' than 'grep -i icsr'.
Suggested-by: Andrew Jones <ajones@ventanamicro.com>
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-ID: <20231012164604.398496-3-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Add a leading 'z' to improve grepping. When one wants to search for uses
of zifencei they're more likely to do 'grep -i zifencei' than 'grep -i
ifencei'.
Suggested-by: Andrew Jones <ajones@ventanamicro.com>
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-ID: <20231012164604.398496-2-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
In commit be23a049 in the conversion to decodetree we broke the
decoding of the immediate value in the LDRA instruction. This should
be a 10 bit signed value that is scaled by 8, but in the conversion
we incorrectly ended up scaling it only by 2. Fix the scaling
factor.
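In other words the offset should be decoded as (illustrative C, not the
decodetree syntax):

    #include <stdint.h>

    /* Illustrative only: LDRA's 10-bit signed immediate is scaled by 8,
     * giving a byte offset in the range [-4096, 4088]. */
    static int64_t ldra_offset(uint32_t imm10)
    {
        int64_t s = (imm10 & 0x200) ? (int64_t)imm10 - 1024 : (int64_t)imm10;
        return s * 8;
    }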
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1970
Fixes: be23a049 ("target/arm: Convert load (pointer auth) insns to decodetree")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20231106113445.1163063-1-peter.maydell@linaro.org
A guest which has configured the per-vCPU upcall vector may set the
HVM_PARAM_CALLBACK_IRQ param to fairly much anything other than zero.
For example, Linux v6.0+ after commit b1c3497e604 ("x86/xen: Add support
for HVMOP_set_evtchn_upcall_vector") will just do this after setting the
vector:
    /* Trick toolstack to think we are enlightened. */
    if (!cpu)
        rc = xen_set_callback_via(1);
That's explicitly setting the delivery to GSI#1, but it's supposed to be
overridden by the per-vCPU vector setting. This mostly works in Qemu
*except* for the logic to enable the in-kernel handling of event channels,
which falsely determines that the kernel cannot accelerate GSI delivery
in this case.
Add a kvm_xen_has_vcpu_callback_vector() to report whether vCPU#0 has
the vector set, and use that in xen_evtchn_set_callback_param() to
enable the kernel acceleration features even when the param *appears*
to be set to target a GSI.
Preserve the Xen behaviour that when HVM_PARAM_CALLBACK_IRQ is set to
*zero* the event channel delivery is disabled completely. (Which is
what that bizarre guest behaviour is working round in the first place.)
Cc: qemu-stable@nongnu.org
Fixes: 91cce75617 ("hw/xen: Add xen_evtchn device for event channel emulation")
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
The per-vCPU upcall vector support had three problems. Firstly it was
using the wrong hypercall argument and would always return -EFAULT when
the guest tried to set it up. Secondly it was using the wrong ioctl() to
pass the vector to the kernel and thus the *kernel* would always return
-EINVAL. Finally, even when delivering the event directly from userspace
with an MSI, it put the destination CPU ID into the wrong bits of the
MSI address.
Linux doesn't (yet) use this mode so it went without decent testing
for a while.
Cc: qemu-stable@nongnu.org
Fixes: 105b47fdf2 ("i386/xen: implement HVMOP_set_evtchn_upcall_vector")
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
Advertising XENFEAT_supervisor_mode_kernel confuses lscpu into thinking
it's running in PVH mode.
Cc: qemu-stable@nongnu.org
Fixes: bedcc13924 ("i386/xen: implement HYPERVISOR_xen_version")
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
Consolidate the test here; drop the "inverted logic".
Fix MOVr and FMOVR, which were missing the invalid test.
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
If the insn raises no exceptions, there will be no path in which
cpu_cond is used, and so the computation may be optimized away.
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Use the original condition instead of consuming cpu_cond,
which will now only be live along exception paths.
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Handle these via TCG_COND_{ALWAYS,NEVER}.
Allow dc->npc to be variable, using gen_mov_pc_npc.
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The function had only one caller. Canonicalize the cpu_cond
test to TCG_COND_NE, the "natural" sense of its value.
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Do this here instead of in each caller.
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This will allow the condition to live across changes to
the global cc variables.
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We don't require c2 to be variable, so emphasize that.
We don't currently require c2 to be non-zero, but that will change.
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Since we're going to feed cpu_cond to another comparison, we don't
require a boolean value -- anything non-zero is sufficient.
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
All instructions have been converted to generate
full condition codes explicitly.
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
These are all related and implementable with common code.
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
These are all related and implementable with common code.
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Return both result and overflow from helper_[us]div.
Compute all flags explicitly in gen_op_[us]divcc.
Marginally improve the INT64_MIN special case in helper_sdiv.
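For reference, a rough C model of the semantics the helper implements
(not the helper itself):

    #include <stdint.h>
    #include <stdbool.h>

    /* Illustrative only: 64-bit dividend / 32-bit divisor; on overflow
     * the result saturates and the V flag is set. */
    static int32_t sdiv_model(int64_t dividend, int32_t divisor, bool *overflow)
    {
        int64_t q;

        if (divisor == 0) {
            *overflow = false;      /* real hardware raises a trap instead */
            return 0;
        }
        if (dividend == INT64_MIN && divisor == -1) {
            *overflow = true;       /* quotient 2^63 does not fit */
            return INT32_MAX;
        }
        q = dividend / divisor;
        if (q != (int32_t)q) {
            *overflow = true;
            return q < 0 ? INT32_MIN : INT32_MAX;
        }
        *overflow = false;
        return (int32_t)q;
    }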
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
A step toward removing CC_OP: change the representation of CC_OP_FLAGS.
The 8 bits are distributed among 6 variables, which should make
it easy to keep up to date.
The code within cc_helper.c is quite ugly but is only temporary.
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Isolate linux-user from changes to icc representation.
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20231101030816.2353416-2-gaosong@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Add support for the query-cpu-model-expansion QMP command to LoongArch.
It supports querying the CPU features.
e.g.
The la464 and max CPUs support LSX/LASX, enabled by default;
the la132 does not support LSX/LASX.
1. start with '-cpu max,lasx=off'
(QEMU) query-cpu-model-expansion type=static model={"name":"max"}
{"return": {"model": {"name": "max", "props": {"lasx": false, "lsx": true}}}}
2. start with '-cpu la464,lasx=off'
(QEMU) query-cpu-model-expansion type=static model={"name":"la464"}
{"return": {"model": {"name": "max", "props": {"lasx": false, "lsx": true}}}
3. start with '-cpu la132,lasx=off'
qemu-system-loongarch64: can't apply global la132-loongarch-cpu.lasx=off: Property 'la132-loongarch-cpu.lasx' not found
4. start with '-cpu max,lasx=off' or start with '-cpu la464,lasx=off' query cpu model la132
(QEMU) query-cpu-model-expansion type=static model={"name":"la132"}
{"return": {"model": {"name": "la132"}}}
Acked-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20231020084925.3457084-4-gaosong@loongson.cn>
Some users may not need LSX/LASX, so this patch allows the user to
enable/disable the LSX/LASX features.
e.g.
'-cpu max,lsx=on,lasx=on' (default);
'-cpu max,lsx=on,lasx=off' (enabled LSX);
'-cpu max,lsx=off,lasx=on' (enabled LASX, LSX);
'-cpu max,lsx=off' (disable LSX and LASX).
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20231020084925.3457084-3-gaosong@loongson.cn>
We use the la464 CPU as the basis for the 'max' CPU.
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20231020084925.3457084-2-gaosong@loongson.cn>
In a two-stage translation, the result of the BTI guarded bit should
be the guarded bit from the first stage of translation, as there is
no BTI guard information in stage two. Our code tried to do this,
but got it wrong, because we currently have two fields where the GP
bit information might live (ARMCacheAttrs::guarded and
CPUTLBEntryFull::extra::arm::guarded), and we were storing the GP bit
in the latter during the stage 1 walk but trying to copy the former
in combine_cacheattrs().
Remove the duplicated storage, and always use the field in
CPUTLBEntryFull; correctly propagate the stage 1 value to the output
in get_phys_addr_twostage().
Note for stable backports: in v8.0 and earlier the field is named
result->f.guarded, not result->f.extra.arm.guarded.
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1950
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231031173723.26582-1-peter.maydell@linaro.org
The previous change missed updating one of the increments and
one of the MemOps. Add a test case for all vector lengths.
Cc: qemu-stable@nongnu.org
Fixes: e6dd5e782b ("target/arm: Use tcg_gen_qemu_{ld, st}_i128 in gen_sve_{ld, st}r")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20231031143215.29764-1-richard.henderson@linaro.org
[PMM: fixed checkpatch nit]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Most of the registers used by the FEAT_MOPS instructions cannot use
31 as a register field value; this is CONSTRAINED UNPREDICTABLE to
NOP or UNDEF (we UNDEF). However, it is permitted for the "source
value" register for the memset insns SET* to be 31, which (as usual
for most data-processing insns) means it should be the zero register
XZR. We forgot to handle this case, with the effect that trying to
set memory to zero with a "SET* Xd, Xn, XZR" sets the memory to
the value that happens to be in the low byte of SP.
Handle XZR when getting the SET* data value from the register file.
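i.e. the data value is read roughly like this (a sketch of the fix, not
the literal decoder code):

    #include <stdint.h>

    /* Illustrative only: for SET*, register number 31 in the "source
     * value" field means XZR (zero), not SP. */
    static uint64_t set_data_value(const uint64_t *xregs, unsigned rs)
    {
        return rs == 31 ? 0 : xregs[rs];
    }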
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20231030174000.3792225-4-peter.maydell@linaro.org
In user-mode emulation, we need to set the SCTLR_EL1.MSCEn
bit to avoid all the FEAT_MOPS insns UNDEFing.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231030174000.3792225-2-peter.maydell@linaro.org
Specifically DIT, LSE2, and MTE3.
We already expose detection of these via the CPUID interface, but
missed these from ELF hwcaps.
Signed-off-by: Marielle Novastrider <marielle@novastrider.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20231029210058.38986-1-marielle@novastrider.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: fixed conflict with feature tests moving to cpu-features.h]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* Correct minor errors in Cortex-A710 definition
* Implement Neoverse N2 CPU model
* Refactor feature test functions out into separate header
* Fix syndrome for FGT traps on ERET
* Remove 'hw/arm/boot.h' includes from various header files
* pxa2xx: Refactoring/cleanup
* Avoid using 'first_cpu' when first ARM CPU is reachable
* misc/led: LED state is set opposite of what is expected
* hw/net/cadence_gen: clean up to use FIELD macros
* hw/net/cadence_gem: perform PHY access on write only
* hw/net/cadence_gem: enforce 32 bits variable size for CRC
Merge tag 'pull-target-arm-20231027' of https://git-us.linaro.org/people/pmaydell/qemu-arm into staging
target-arm queue:
* Correct minor errors in Cortex-A710 definition
* Implement Neoverse N2 CPU model
* Refactor feature test functions out into separate header
* Fix syndrome for FGT traps on ERET
* Remove 'hw/arm/boot.h' includes from various header files
* pxa2xx: Refactoring/cleanup
* Avoid using 'first_cpu' when first ARM CPU is reachable
* misc/led: LED state is set opposite of what is expected
* hw/net/cadence_gen: clean up to use FIELD macros
* hw/net/cadence_gem: perform PHY access on write only
* hw/net/cadence_gem: enforce 32 bits variable size for CRC
* tag 'pull-target-arm-20231027' of https://git-us.linaro.org/people/pmaydell/qemu-arm: (41 commits)
hw/net/cadence_gem: enforce 32 bits variable size for CRC
hw/net/cadence_gem: perform PHY access on write only
hw/net/cadence_gem: use FIELD to describe PHYMNTNC register fields
hw/net/cadence_gem: use FIELD to describe DESCONF6 register fields
hw/net/cadence_gem: use FIELD to describe IRQ register fields
hw/net/cadence_gem: use FIELD to describe [TX|RX]STATUS register fields
hw/net/cadence_gem: use FIELD to describe DMACFG register fields
hw/net/cadence_gem: use FIELD to describe NWCFG register fields
hw/net/cadence_gem: use FIELD to describe NWCTRL register fields
hw/net/cadence_gem: use FIELD for screening registers
hw/net/cadence_gem: use REG32 macro for register definitions
misc/led: LED state is set opposite of what is expected
hw/arm: Avoid using 'first_cpu' when first ARM CPU is reachable
hw/arm/pxa2xx: Realize PXA2XX_I2C device before accessing it
hw/intc/pxa2xx: Factor pxa2xx_pic_realize() out of pxa2xx_pic_init()
hw/intc/pxa2xx: Pass CPU reference using QOM link property
hw/intc/pxa2xx: Convert to Resettable interface
hw/pcmcia/pxa2xx: Inline pxa2xx_pcmcia_init()
hw/pcmcia/pxa2xx: Do not open-code sysbus_create_simple()
hw/pcmcia/pxa2xx: Realize sysbus device before accessing it
...
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
In commit 442c9d682c when we converted the ERET, ERETAA, ERETAB
instructions to decodetree, the conversion accidentally lost the
correct setting of the syndrome register when taking a trap because
of the FEAT_FGT HFGITR_EL1.ERET bit. Instead of reporting a correct
full syndrome value with the EC and IL bits, we only reported the low
two bits of the syndrome, because the call to syn_erettrap() got
dropped.
Fix the syndrome values for these traps by reinstating the
syn_erettrap() calls.
Fixes: 442c9d682c ("target/arm: Convert ERET, ERETAA, ERETAB to decodetree")
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231024172438.2990945-1-peter.maydell@linaro.org
Move all the ID_AA64DFR* feature test functions together.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231024163510.2972081-7-peter.maydell@linaro.org
Move all the ID_AA64PFR* feature test functions together.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231024163510.2972081-6-peter.maydell@linaro.org
Move the feature test functions that test ID_AA64ISAR* fields
together.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231024163510.2972081-5-peter.maydell@linaro.org
Move the ID_AA64MMFR0 feature test functions up so they are
before the ones for ID_AA64MMFR1 and ID_AA64MMFR2.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231024163510.2972081-4-peter.maydell@linaro.org
Our list of isar_feature functions is not in any particular order,
but tests on fields of the same ID register tend to be grouped
together. A few functions that are tests of fields in ID_AA64MMFR1
and ID_AA64MMFR2 are not in the same place as the rest; move them
into their groups.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231024163510.2972081-3-peter.maydell@linaro.org
The feature test functions isar_feature_*() now take up nearly
a thousand lines in target/arm/cpu.h. This header file is included
by a lot of source files, most of which don't need these functions.
Move the feature test functions to their own header file.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231024163510.2972081-2-peter.maydell@linaro.org
Implement a model of the Neoverse N2 CPU. This is an Armv9.0-A
processor very similar to the Cortex-A710. The differences are:
* no FEAT_EVT
* FEAT_DGH (data gathering hint)
* FEAT_NV (not yet implemented in QEMU)
* Statistical Profiling Extension (not implemented in QEMU)
* 48 bit physical address range, not 40
* CTR_EL0.DIC = 1 (no explicit icache cleaning needed)
* PMCR_EL0.N = 6 (always 6 PMU counters, not 20)
Because it has 48-bit physical address support, we can use
this CPU in the sbsa-ref board as well as the virt board.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20230915185453.1871167-3-peter.maydell@linaro.org
Correct a couple of minor errors in the Cortex-A710 definition:
* ID_AA64DFR0_EL1.DebugVer is 9 (indicating Armv8.4 debug architecture)
* ID_AA64ISAR1_EL1.APA is 5 (indicating more PAuth support)
* there is an IMPDEF CPUCFR_EL1, like that on the Neoverse-N1
Fixes: e3d45c0a89 ("target/arm: Implement cortex-a710")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20230915185453.1871167-2-peter.maydell@linaro.org
This was introduced in KVM in Linux 2.6.33, so we can require it
unconditionally. KVM_CLOCK_TSC_STABLE was only added in Linux 4.9,
for now do not require it (though it would allow the removal of some
pretty yucky code).
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This was introduced in KVM in Linux 2.6.36, and could already be used at
the time to save/restore FPU data even on older processors. We can require
it unconditionally and stop using KVM_GET/SET_FPU.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM_IRQFD was introduced in Linux 2.6.32, and since then it has always been
available on architectures that support an in-kernel interrupt controller.
We can require it unconditionally.
Reviewed-by: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This was introduced in KVM in Linux 3.5, so we can require it unconditionally
in kvm_irqchip_send_msi(). However, not all architectures have to implement
it, so check it only on x86, the only architecture that ever had MSI injection
but not KVM_CAP_SIGNAL_MSI.
ARM uses it to detect the presence of the ITS emulation in the kernel,
introduced in Linux 4.8. Assume that it's there and possibly fail when
realizing the arm-its-kvm device.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
PAE mode in x86 supports 36 bit address space. Check the PAE CPUID on the
guest processor and set phys_bits to 36 if PAE feature is set. This is in
addition to checking the presence of PSE36 CPUID feature for setting 36 bit
phys_bits.
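Schematically (CPUID.01H:EDX bit 6 is PAE, bit 17 is PSE-36; a sketch,
not the QEMU code):

    #include <stdint.h>

    /* Illustrative only: a 32-bit guest gets 36 physical address bits
     * if either PAE or PSE-36 is advertised, otherwise 32. */
    static unsigned guest_phys_bits(uint32_t cpuid_01_edx)
    {
        return (cpuid_01_edx & ((1u << 6) | (1u << 17))) ? 36 : 32;
    }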
Signed-off-by: Ani Sinha <anisinha@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-ID: <20230912120650.371781-1-anisinha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Instructions in VEX exception class 6 generally look at the value of
VEX.W. Note that the manual places some instructions incorrectly in
class 4, for example VPERMQ which has no non-VEX encoding and no legacy
SSE analogue. AMD makes a mess of its own, as documented in the comment
that this patch adds.
Most of them are checked for VEX.W=0, and are listed in the manual
(though with an omission) in table 2-16; VPERMQ and VPERMPD check for
VEX.W=1, which is only listed in the instruction description. Others,
such as VPSRLV, VPSLLV and the FMA3 instructions, use VEX.W to switch
between a 32-bit and 64-bit operation.
Fix more of the class 4/class 6 mismatches, and implement the check for
VEX.W in TCG.
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In preparation for adding more similar checks, move the VEX.L=0 check
and several X86_SPECIAL_* checks to a new field, where each bit represents
a common check on unused bits, or a restriction on the processor mode.
Likewise, many SVM intercepts can be checked during the decoding phase,
the main exception being the selective CR0 write, MSR and IOIO intercepts.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The implementation was validated with OpenSSL and with the test vectors in
https://github.com/rust-lang/stdarch/blob/master/crates/core_arch/src/x86/sha.rs.
The instructions provide a ~25% improvement on hashing a 64 MiB file:
runtime goes down from 1.8 seconds to 1.4 seconds; instruction count on
the host goes down from 5.8 billion to 4.8 billion with slightly better
IPC too. Good job Intel. ;)
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
All instructions are now converted.
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>