Commit Graph

13809 Commits

Author SHA1 Message Date
Song Gao
1c15dd632b target/loongarch/gdbstub: Add vector registers support
GDB already supports the LoongArch vector extension[1]; add LoongArch
vector register support to QEMU's gdbstub so that users can use
'info all-registers' to get the values of all vector registers.

[1]: https://sourceware.org/git/?p=binutils-gdb.git;a=commitdiff;h=1e9569f383a3d5a88ee07d0c2401bd95613c222e

Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Message-Id: <20240711024454.3075183-1-gaosong@loongson.cn>
2024-07-19 10:40:04 +08:00
Richard Henderson
23fa74974d target-arm queue:
* Fix handling of LDAPR/STLR with negative offset
  * LDAPR should honour SCTLR_ELx.nAA
  * Use float_status copy in sme_fmopa_s
  * hw/display/bcm2835_fb: fix fb_use_offsets condition
  * hw/arm/smmuv3: Support and advertise nesting
  * Use FPST_F16 for SME FMOPA (widening)
  * tests/arm-cpu-features: Do not assume PMU availability
  * hvf: arm: Do not advance PC when raising an exception
 -----BEGIN PGP SIGNATURE-----
 
 iQJNBAABCAA3FiEE4aXFk81BneKOgxXPPCUl7RQ2DN4FAmaZFlUZHHBldGVyLm1h
 eWRlbGxAbGluYXJvLm9yZwAKCRA8JSXtFDYM3iJuEACtVh1Wp93XMsL3llAZkQlx
 DUCnDCvAM2qiiTIMOqPQzeKTIkRV9aFh1YWzOtMFKai6UkBU6p1b4bPqb5SIr99G
 Ayps4+WzAHsjTqBGEpIIDWL6GqMwv9azBnRAYNb+Cg9O3SzEnCdGOKCfGYTXXPRz
 zQ1NIgqZSUC5jg3XgkU22J3VMsOUWijbzxnGXhOyemSIEhREl+t6Ns3ca3n47/jk
 JIw1g6o0mpefPPkaLq6ftVwpn1L63iYQugn4VCrIhtIoOM8vmnShbI9/GwzL4AYk
 n28nwPl948Xby13kCYmu6Slt8Rmm7M33pBDJzsVtbaeBSd44XHrov8Y1+e1FhAco
 lxrWY/2rG9HiWKGLdAeCKwVxB186DKiTmuK7lcN+eBu3VbOLjDiVE0d1bK4HqGyc
 nzA/Aq81Y9p5Z7wzX40sVFlq0j1pQDQWk6GgPfMA4ueHKEEobxC3C+k1q9m02gjQ
 qesOFzViiGe0j7JER84qqcatIaTk09xfbXL/uMZx8oP/iKa1pyMUx2blChXOXVTx
 oGkO2h3/QCpRIos8d8WM/bso16EkpraInM4748iumSLuxDxTwiIikK/hpsCLDwUN
 dLsH/hAMz+yQOFubFoRt4IlsGVnk5asmTDMb4S8RojdF2KzHuzbJMgdEOe62631g
 IOAc7Tn3TIm5MpAxXOXgJA==
 =/aEm
 -----END PGP SIGNATURE-----

Merge tag 'pull-target-arm-20240718' of https://git.linaro.org/people/pmaydell/qemu-arm into staging

target-arm queue:
 * Fix handling of LDAPR/STLR with negative offset
 * LDAPR should honour SCTLR_ELx.nAA
 * Use float_status copy in sme_fmopa_s
 * hw/display/bcm2835_fb: fix fb_use_offsets condition
 * hw/arm/smmuv3: Support and advertise nesting
 * Use FPST_F16 for SME FMOPA (widening)
 * tests/arm-cpu-features: Do not assume PMU availability
 * hvf: arm: Do not advance PC when raising an exception

# gpg: Signature made Thu 18 Jul 2024 11:19:17 PM AEST
# gpg:                using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg:                issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [full]
# gpg:                 aka "Peter Maydell <pmaydell@gmail.com>" [full]
# gpg:                 aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [full]
# gpg:                 aka "Peter Maydell <peter@archaic.org.uk>" [unknown]

* tag 'pull-target-arm-20240718' of https://git.linaro.org/people/pmaydell/qemu-arm: (26 commits)
  hvf: arm: Do not advance PC when raising an exception
  tests/arm-cpu-features: Do not assume PMU availability
  tests/tcg/aarch64: Add test cases for SME FMOPA (widening)
  target/arm: Use FPST_F16 for SME FMOPA (widening)
  target/arm: Use float_status copy in sme_fmopa_s
  hw/arm/smmu: Refactor SMMU OAS
  hw/arm/smmuv3: Support and advertise nesting
  hw/arm/smmuv3: Handle translation faults according to SMMUPTWEventInfo
  hw/arm/smmuv3: Support nested SMMUs in smmuv3_notify_iova()
  hw/arm/smmu: Support nesting in the rest of commands
  hw/arm/smmu: Introduce smmu_iotlb_inv_asid_vmid
  hw/arm/smmu: Support nesting in smmuv3_range_inval()
  hw/arm/smmu-common: Support nested translation
  hw/arm/smmu-common: Add support for nested TLB
  hw/arm/smmu-common: Rework TLB lookup for nesting
  hw/arm/smmuv3: Translate CD and TT using stage-2 table
  hw/arm/smmu: Introduce CACHED_ENTRY_TO_ADDR
  hw/arm/smmu: Consolidate ASID and VMID types
  hw/arm/smmu: Split smmuv3_translate()
  hw/arm/smmu: Use enum for SMMU stage
  ...

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-19 07:02:17 +10:00
Akihiko Odaki
30a1690f24 hvf: arm: Do not advance PC when raising an exception
hvf did not advance PC when raising an exception for most unhandled
system registers, but it mistakenly advanced PC when raising an
exception for GICv3 registers.

Cc: qemu-stable@nongnu.org
Fixes: a2260983c6 ("hvf: arm: Add support for GICv3")
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-id: 20240716-pmu-v3-4-8c7c1858a227@daynix.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-18 13:49:30 +01:00
Richard Henderson
207d30b5fd target/arm: Use FPST_F16 for SME FMOPA (widening)
This operation has float16 inputs and thus must use
the FZ16 control, not the FZ control.

Cc: qemu-stable@nongnu.org
Fixes: 3916841ac7 ("target/arm: Implement FMOPA, FMOPS (widening)")
Reported-by: Daniyal Khan <danikhan632@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20240717060149.204788-3-richard.henderson@linaro.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2374
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-18 13:49:30 +01:00
Daniyal Khan
31d93fedf4 target/arm: Use float_status copy in sme_fmopa_s
We made a copy above because the fp exception flags
are not propagated back to the FPST register, but
then failed to use the copy.

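As a minimal self-contained illustration of the pattern (toy types and names,
not the QEMU helpers): take a local copy of the float_status so accumulated
exception flags are not written back, and make sure the copy is actually the
one passed to the operation.

    #include <stdio.h>

    typedef struct { int exception_flags; } float_status;

    /* stand-in for a softfloat op that accumulates exception flags into *st */
    static float muladd(float a, float b, float c, float_status *st)
    {
        st->exception_flags |= 1;
        return a * b + c;
    }

    int main(void)
    {
        float_status guest_fpst = { 0 };
        float_status scratch = guest_fpst;   /* the copy the commit talks about */

        /* the fix: pass &scratch (the copy), not &guest_fpst */
        float r = muladd(2.0f, 3.0f, 1.0f, &scratch);

        printf("r=%f scratch flags=%d guest flags=%d\n",
               r, scratch.exception_flags, guest_fpst.exception_flags);
        return 0;
    }
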
Cc: qemu-stable@nongnu.org
Fixes: 558e956c71 ("target/arm: Implement FMOPA, FMOPS (non-widening)")
Signed-off-by: Daniyal Khan <danikhan632@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20240717060149.204788-2-richard.henderson@linaro.org
[rth: Split from a larger patch]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-18 13:49:30 +01:00
Peter Maydell
25489b521b target/arm: LDAPR should honour SCTLR_ELx.nAA
In commit c1a1f80518 when we added the FEAT_LSE2 relaxations to
the alignment requirements for atomic and ordered loads and stores,
we didn't quite get it right for LDAPR/LDAPRH/LDAPRB with no
immediate offset.  These instructions were handled in the old decoder
as part of disas_ldst_atomic(), but unlike all the other insns that
function decoded (LDADD, LDCLR, etc) these insns are "ordered", not
"atomic", so they should be using check_ordered_align() rather than
check_atomic_align().  Commit c1a1f80518 used
check_atomic_align() regardless for everything in
disas_ldst_atomic().  We then carried that incorrect check over in
the decodetree conversion, where LDAPR/LDAPRH/LDAPRB are now handled
by trans_LDAPR().

The effect is that when FEAT_LSE2 is implemented, these instructions
don't honour the SCTLR_ELx.nAA bit and will generate alignment
faults when they should not.

(The LDAPR insns with an immediate offset were in disas_ldst_ldapr_stlr()
and then in trans_LDAPR_i() and trans_STLR_i(), and have always used
the correct check_ordered_align().)

Use check_ordered_align() in trans_LDAPR().

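The distinction can be sketched with a deliberately simplified toy model
(plain C, not the QEMU translator code; the real checks also handle the
FEAT_LSE2 16-byte-granule rules):

    #include <stdbool.h>
    #include <stdio.h>

    /* atomic accesses: always require natural alignment */
    static bool atomic_align_ok(unsigned addr, unsigned size)
    {
        return (addr % size) == 0;
    }

    /* ordered accesses (LDAPR/STLR family): with FEAT_LSE2, SCTLR_ELx.nAA == 1
     * means no alignment fault should be raised */
    static bool ordered_align_ok(unsigned addr, unsigned size, bool lse2, bool naa)
    {
        return (lse2 && naa) || (addr % size) == 0;
    }

    int main(void)
    {
        unsigned addr = 0x1003, size = 4;      /* misaligned 4-byte access */
        printf("treated as atomic : %s\n", atomic_align_ok(addr, size) ? "ok" : "fault");
        printf("treated as ordered: %s\n", ordered_align_ok(addr, size, true, true) ? "ok" : "fault");
        return 0;
    }
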
Cc: qemu-stable@nongnu.org
Fixes: c1a1f80518 ("target/arm: Relax ordered/atomic alignment checks for LSE2")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240709134504.3500007-3-peter.maydell@linaro.org
2024-07-18 13:49:28 +01:00
Peter Maydell
5669d26ec6 target/arm: Fix handling of LDAPR/STLR with negative offset
When we converted the LDAPR/STLR instructions to decodetree we
accidentally introduced a regression where the offset is negative.
The 9-bit immediate field is signed, and the old hand decoder
correctly used sextract32() to get it out of the insn word,
but the ldapr_stlr_i pattern in the decode file used "imm:9"
instead of "imm:s9", so it treated the field as unsigned.

Fix the pattern to treat the field as a signed immediate.

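The difference between the unsigned and the signed field read can be shown
with a small self-contained sketch (local stand-ins for QEMU's
extract32()/sextract32() bitfield helpers):

    #include <stdint.h>
    #include <stdio.h>

    /* minimal stand-ins for the QEMU bitfield helpers */
    static uint32_t extract32(uint32_t value, int start, int length)
    {
        return (value >> start) & (~0u >> (32 - length));
    }

    static int32_t sextract32(uint32_t value, int start, int length)
    {
        return (int32_t)(value << (32 - length - start)) >> (32 - length);
    }

    int main(void)
    {
        uint32_t imm9 = 0x1F8;   /* 9-bit field encoding -8 */
        printf("unsigned (imm:9) : %u\n", extract32(imm9, 0, 9));   /* 504 */
        printf("signed  (imm:s9) : %d\n", sextract32(imm9, 0, 9));  /* -8 */
        return 0;
    }
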
Cc: qemu-stable@nongnu.org
Fixes: 2521b6073b ("target/arm: Convert LDAPR/STLR (imm) to decodetree")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2419
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240709134504.3500007-2-peter.maydell@linaro.org
2024-07-18 13:49:28 +01:00
Richard Henderson
0d9f1016d4 RISC-V PR for 9.1
* Support the zimop, zcmop, zama16b and zabha extensions
 * Validate the mode when setting vstvec CSR
 * Add decode support for Zawrs extension
 * Update the KVM regs to Linux 6.10-rc5
 * Add smcntrpmf extension support
 * Raise an exception when CSRRS/CSRRC writes a read-only CSR
 * Re-insert and deprecate 'riscv,delegate' in virt machine device tree
 * roms/opensbi: Update to v1.5
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEaukCtqfKh31tZZKWr3yVEwxTgBMFAmaYeUcACgkQr3yVEwxT
 gBMtdw//U2NbmnmECa0uXuE7fdFul0tUkl2oHb9Cr8g5Se5g/HVFqexAKOFZ8Lcm
 DvTl94zJ2dms4RntcmJHwTIusa+oU6qqOekediotjgpeH4BHZNCOHe0E9hIAHn9F
 uoJ1P186L7VeVr7OFAAgSCE7F6egCk7iC0h8L8/vuL4xcuyfbZ2r7ybiTl1+45N2
 YBBv5/00wsYnyMeqRYYtyqgX9QR017JRqNSfTJSbKxhQM/L1GA1xxisUvIGeyDqc
 Pn8E3dMN6sscR6bPs4RP+SBi0JIlRCgth/jteSUkbYf42osw3/5sl4oK/e6Xiogo
 SjELOF7QJNxE8H6EUIScDaCVB5ZhvELZcuOL2NRdUuVDkjhWXM633HwfEcXkZdFK
 W/H9wOvNxPAJIOGXOpv10+MLmhdyIOZwE0uk6evHvdcTn3FP9DurdUCc1se0zKOA
 Qg/H6usTbLGNQ7KKTNQ6GpQ6u89iE1CIyZqYVvB1YuF5t7vtAmxvNk3SVZ6aq3VL
 lPJW2Zd1eO09Q+kRnBVDV7MV4OJrRNsU+ryd91NrSVo9aLADtyiNC28dCSkjU3Gn
 6YQZt65zHuhH5IBB/PGIPo7dLRT8KNWOiYVoy3c6p6DC6oXsKIibh0ue1nrVnnVQ
 NRqyxPYaj6P8zzqwTk+iJj36UXZZVtqPIhtRu9MrO6Opl2AbsXI=
 =pM6B
 -----END PGP SIGNATURE-----

Merge tag 'pull-riscv-to-apply-20240718-1' of https://github.com/alistair23/qemu into staging

RISC-V PR for 9.1

* Support the zimop, zcmop, zama16b and zabha extensions
* Validate the mode when setting vstvec CSR
* Add decode support for Zawrs extension
* Update the KVM regs to Linux 6.10-rc5
* Add smcntrpmf extension support
* Raise an exception when CSRRS/CSRRC writes a read-only CSR
* Re-insert and deprecate 'riscv,delegate' in virt machine device tree
* roms/opensbi: Update to v1.5

# gpg: Signature made Thu 18 Jul 2024 12:09:11 PM AEST
# gpg:                using RSA key 6AE902B6A7CA877D6D659296AF7C95130C538013
# gpg: Good signature from "Alistair Francis <alistair@alistair23.me>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 6AE9 02B6 A7CA 877D 6D65  9296 AF7C 9513 0C53 8013

* tag 'pull-riscv-to-apply-20240718-1' of https://github.com/alistair23/qemu: (30 commits)
  roms/opensbi: Update to v1.5
  hw/riscv/virt.c: re-insert and deprecate 'riscv,delegate'
  target/riscv: raise an exception when CSRRS/CSRRC writes a read-only CSR
  target/riscv: Expose the Smcntrpmf config
  target/riscv: Do not setup pmu timer if OF is disabled
  target/riscv: More accurately model priv mode filtering.
  target/riscv: Start counters from both mhpmcounter and mcountinhibit
  target/riscv: Enforce WARL behavior for scounteren/hcounteren
  target/riscv: Save counter values during countinhibit update
  target/riscv: Implement privilege mode filtering for cycle/instret
  target/riscv: Only set INH fields if priv mode is available
  target/riscv: Add cycle & instret privilege mode filtering support
  target/riscv: Add cycle & instret privilege mode filtering definitions
  target/riscv: Add cycle & instret privilege mode filtering properties
  target/riscv: Fix the predicate functions for mhpmeventhX CSRs
  target/riscv: Combine set_mode and set_virt functions.
  target/riscv/kvm: update KVM regs to Linux 6.10-rc5
  disas/riscv: Add decode for Zawrs extension
  target/riscv: Validate the mode in write_vstvec
  disas/riscv: Support zabha disassemble
  ...

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-18 21:23:24 +10:00
Yu-Ming Chang
38c83e8d3a target/riscv: raise an exception when CSRRS/CSRRC writes a read-only CSR
Both CSRRS and CSRRC always read the addressed CSR and cause any read side
effects regardless of the rs1 and rd fields. Note that if rs1 specifies a
register other than x0 that holds a zero value, the instruction will still
attempt to write the unmodified value back to the CSR and will cause any
attendant side effects.

So if CSRRS or CSRRC tries to write a read-only CSR with an rs1 that
specifies a register holding a zero value, an illegal instruction exception
should be raised.

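The resulting rule can be condensed into a small self-contained sketch
(illustrative C, not QEMU's csr.c code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Per the privileged spec, CSR address bits [11:10] == 0b11 mark a read-only CSR. */
    static bool csr_is_read_only(uint32_t csrno)
    {
        return ((csrno >> 10) & 0x3) == 0x3;
    }

    /* CSRRS/CSRRC perform a write whenever rs1 is not x0, even if the register holds 0. */
    static bool csrrs_csrrc_traps(uint32_t csrno, int rs1)
    {
        bool write_attempt = (rs1 != 0);
        return write_attempt && csr_is_read_only(csrno);
    }

    int main(void)
    {
        uint32_t cycle_csr = 0xC00;              /* read-only counter CSR */
        printf("csrrs t0, cycle, x0 -> trap? %d\n", csrrs_csrrc_traps(cycle_csr, 0));  /* 0 */
        printf("csrrs t0, cycle, t1 -> trap? %d\n", csrrs_csrrc_traps(cycle_csr, 6));  /* 1 */
        return 0;
    }
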
Signed-off-by: Yu-Ming Chang <yumin686@andestech.com>
Signed-off-by: Alvin Chang <alvinga@andestech.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <172100444279.18077.6893072378718059541-0@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:45 +10:00
Atish Patra
6f6592d62e target/riscv: Expose the Smcntrpmf config
Create a new config for the Smcntrpmf extension so that it can be
enabled/disabled from the QEMU command line.

Signed-off-by: Atish Patra <atishp@rivosinc.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240711-smcntrpmf_v7-v8-13-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:45 +10:00
Atish Patra
dd4c123636 target/riscv: Do not setup pmu timer if OF is disabled
The timer setup function is invoked in both the hpmcounter
write and mcountinhibit write paths. If the OF bit is set, the
LCOFI interrupt is disabled. There is no benefit in setting up
the QEMU timer until the OF bit is cleared to indicate that
interrupts can be fired again.

Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-ID: <20240711-smcntrpmf_v7-v8-12-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:45 +10:00
Rajnesh Kanwal
74112400df target/riscv: More accurately model priv mode filtering.
When programmable counters are configured to count instructions or
cycles, we often end up with a counter that does not increment at all
from the kernel's perspective.

For example:
- The kernel configures hpm3 to count instructions, sets hpmcounter3
  to -10000, and inhibits all modes except U mode.
- In QEMU we configure a timer to expire after ~10000 instructions.
- The problem is that the kernel often does not even schedule a U-mode
  task before we hit the timer callback in QEMU.
- In the timer callback we inject the interrupt into the kernel; the
  kernel runs the handler and reads the hpmcounter3 value.
- Since QEMU maintains an individual counter for each privilege mode,
  and U mode never ran, the U-mode counter did not increment, and QEMU
  returns the same value the kernel programmed when starting the
  counter.
- The kernel checks for overflow using the previous and current values
  of the counter and, since the counter value shows no overflow,
  reprograms the counter. (This is itself a problem: QEMU tells the
  kernel that counter3 overflowed, but the counter value returned by
  QEMU does not reflect that.)

This change makes sure that the timer is reprogrammed from the handler
if the counter value does not indicate an overflow.

Second, this change makes sure that whenever the counter is read,
its value is updated to reflect the latest count.

Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20240711-smcntrpmf_v7-v8-11-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:45 +10:00
Rajnesh Kanwal
22c721c34c target/riscv: Start counters from both mhpmcounter and mcountinhibit
Currently we start the timer counter from the write_mhpmcounter path
only, without checking the mcountinhibit bit. This change adds the
mcountinhibit check and also programs the counter from
write_mcountinhibit.

When a counter is stopped using mcountinhibit we simply update
the value of the counter based on current host ticks and save
it for future reads.

We don't need to disable the running timer, as pmu_timer_trigger_irq
will discard the interrupt if the counter has been inhibited.

Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20240711-smcntrpmf_v7-v8-10-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:45 +10:00
Atish Patra
8cff74c26d target/riscv: Enforce WARL behavior for scounteren/hcounteren
scounteren/hcounteren are also WARL registers, similar to mcounteren.
Only set the bits for the available counters during the write to
preserve the WARL behavior.

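A minimal sketch of that WARL write (illustrative C, with an assumed mask of
implemented counters):

    #include <stdint.h>
    #include <stdio.h>

    /* keep only the bits for counters that actually exist (WARL behaviour) */
    static uint32_t write_counteren(uint32_t implemented_mask, uint32_t val)
    {
        return val & implemented_mask;
    }

    int main(void)
    {
        uint32_t mask = 0x0000000f;   /* assume CY, TM, IR and HPM3 are implemented */
        printf("0x%08x\n", write_counteren(mask, 0xffffffff));   /* -> 0x0000000f */
        return 0;
    }
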
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240711-smcntrpmf_v7-v8-9-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:44 +10:00
Atish Patra
46023470e0 target/riscv: Save counter values during countinhibit update
Currently, if a counter monitoring cycle/instret is stopped via
mcountinhibit we just update the state while the value is saved
during the next read. This is not accurate as the read may happen
many cycles after the counter is stopped. Ideally, the read should
return the value saved when the counter is stopped.

Thus, save the value of the counter during the inhibit update
operation and return that value during the read if the corresponding
bit in mcountinhibit is set.

Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-ID: <20240711-smcntrpmf_v7-v8-8-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:44 +10:00
Atish Patra
b2d7a7c7e4 target/riscv: Implement privilege mode filtering for cycle/instret
Privilege mode filtering can also be emulated for cycle/instret by
tracking host_ticks/icount during each privilege mode switch. This
patch implements that for both cycle/instret and mhpmcounters. The
first one requires Smcntrpmf while the other one requires Sscofpmf
to be enabled.

The cycle/instret are still computed using host ticks when icount
is not enabled. Otherwise, they are computed using raw icount which
is more accurate in icount mode.

Co-Developed-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-ID: <20240711-smcntrpmf_v7-v8-7-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:44 +10:00
Atish Patra
3b31b7baff target/riscv: Only set INH fields if priv mode is available
Currently, the INH fields are set in mhpmevent unconditionally,
without checking whether a particular priv mode is supported.

Suggested-by: Alistair Francis <alistair23@gmail.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240711-smcntrpmf_v7-v8-6-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:44 +10:00
Kaiwen Xue
b54a84c15e target/riscv: Add cycle & instret privilege mode filtering support
QEMU only calculates dummy cycles and instructions, so there is no
actual means to stop the icount in QEMU. Hence this patch merely adds
the functionality of accessing the cfg registers, and causes no actual
effect on the counting of the cycle and instret counters.

Signed-off-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240711-smcntrpmf_v7-v8-5-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:44 +10:00
Kaiwen Xue
6d1e3893cf target/riscv: Add cycle & instret privilege mode filtering definitions
This adds the definitions for ISA extension smcntrpmf.

Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-ID: <20240711-smcntrpmf_v7-v8-4-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:44 +10:00
Kaiwen Xue
251dccc09a target/riscv: Add cycle & instret privilege mode filtering properties
This adds the properties for ISA extension smcntrpmf. Patches
implementing it will follow.

Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240711-smcntrpmf_v7-v8-3-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:44 +10:00
Atish Patra
be470e5977 target/riscv: Fix the predicate functions for mhpmeventhX CSRs
mhpmeventhX CSRs are only available on RV32. The predicate function
should check that first, before checking the sscofpmf extension.

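The ordering being fixed, reduced to a toy predicate (illustrative C, not the
actual csr.c code):

    #include <stdbool.h>
    #include <stdio.h>

    typedef enum { MXL_RV32, MXL_RV64 } RISCVMXL;

    /* mhpmeventhN only exists on RV32, so check the XLEN first,
     * then whether the sscofpmf extension is enabled */
    static bool mhpmeventh_accessible(RISCVMXL mxl, bool sscofpmf)
    {
        if (mxl != MXL_RV32) {
            return false;   /* would raise an illegal-instruction exception */
        }
        return sscofpmf;
    }

    int main(void)
    {
        printf("RV64 + sscofpmf: %d\n", mhpmeventh_accessible(MXL_RV64, true)); /* 0 */
        printf("RV32 + sscofpmf: %d\n", mhpmeventh_accessible(MXL_RV32, true)); /* 1 */
        return 0;
    }
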
Fixes: 1466448345 ("target/riscv: Add sscofpmf extension support")
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-ID: <20240711-smcntrpmf_v7-v8-2-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:44 +10:00
Rajnesh Kanwal
68c05fb530 target/riscv: Combine set_mode and set_virt functions.
Combine the riscv_cpu_set_virt_enabled() and riscv_cpu_set_mode()
functions, so that complete mode-change information is available
through a single function.

This makes it easy to differentiate between HS->VS, VS->HS
and VS->VS transitions when executing state-update code.
For example, one use case that inspired this change is updating
mode-specific instruction and cycle counters, which requires
information about both the previous and the current mode.

Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20240711-smcntrpmf_v7-v8-1-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:44 +10:00
Daniel Henrique Barboza
3cb9f20499 target/riscv/kvm: update KVM regs to Linux 6.10-rc5
Two new regs added: ztso and zacas.

Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240709085431.455541-1-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:44 +10:00
Jiayi Li
910c18a917 target/riscv: Validate the mode in write_vstvec
Based on the RISC-V privileged spec, vstvec substitutes for the usual stvec.
Therefore, the encoding of the MODE field should also be restricted to 0 and 1.

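A minimal sketch of that restriction (illustrative C; in QEMU the equivalent
check lives in write_vstvec()):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* vstvec, like stvec, only accepts MODE = 0 (direct) or 1 (vectored);
     * a write with a reserved MODE leaves the register unchanged (WARL) */
    static uint64_t vstvec_write(uint64_t old, uint64_t val)
    {
        return ((val & 0x3) < 2) ? val : old;
    }

    int main(void)
    {
        uint64_t vstvec = 0;
        vstvec = vstvec_write(vstvec, 0x80001001);   /* MODE=1: accepted */
        printf("0x%" PRIx64 "\n", vstvec);
        vstvec = vstvec_write(vstvec, 0x80002002);   /* MODE=2: reserved, ignored */
        printf("0x%" PRIx64 "\n", vstvec);
        return 0;
    }
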
Signed-off-by: Jiayi Li <lijiayi@eswincomputing.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-ID: <20240701022553.1982-1-lijiayi@eswincomputing.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:08:44 +10:00
LIU Zhiwei
8aebaa2591 target/riscv: Expose zabha extension as a cpu property
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240709113652.1239-11-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:00:42 +10:00
LIU Zhiwei
d34e406602 target/riscv: Add amocas.[b|h] for Zabha
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240709113652.1239-10-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:00:42 +10:00
LIU Zhiwei
8d07887bcb target/riscv: Move gen_cmpxchg before adding amocas.[b|h]
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240709113652.1239-9-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:00:42 +10:00
LIU Zhiwei
be4a8db7f3 target/riscv: Add AMO instructions for Zabha
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240709113652.1239-8-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:00:42 +10:00
LIU Zhiwei
24da9cbaca target/riscv: Move gen_amo before implement Zabha
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240709113652.1239-7-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:00:42 +10:00
LIU Zhiwei
a60ce58fd9 target/riscv: Support Zama16b extension
Zama16b is the property that misaligned loads, stores and atomics within
a naturally aligned 16-byte region are atomic.

According to the specification, Zama16b applies only to AMOs, loads
and stores defined in the base ISAs, and loads and stores of no more
than XLEN bits defined in the F, D, and Q extensions. Thus it should
not apply to zacas or RVC instructions.

For an instruction in that set, if all accessed bytes lie within a naturally
aligned 16-byte granule,
the instruction will not raise an exception for reasons of address alignment,
and the instruction will give rise to only one memory operation for the
purposes of RVWMO—i.e., it will execute atomically.

Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240709113652.1239-6-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:00:42 +10:00
LIU Zhiwei
197e4d2988 target/riscv: Add zcmop extension
Zcmop defines eight 16-bit MOP instructions named C.MOP.n, where n is
an odd integer between 1 and 15, inclusive. C.MOP.n is encoded in
the reserved encoding space corresponding to C.LUI xn, 0.

Unlike the MOPs defined in the Zimop extension, the C.MOP.n instructions
are defined to not write any register.

In the current implementation, C.MOP.n only has a check function, without
any other behavior.

Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Deepak Gupta <debug@rivosinc.com>
Message-ID: <20240709113652.1239-4-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:00:42 +10:00
LIU Zhiwei
6eab278d38 target/riscv: Add zimop extension
The Zimop extension defines an encoding space for 40 MOPs: 32 MOP
instructions named MOP.R.n, where n is an integer between 0 and 31,
inclusive, plus 8 MOP instructions named MOP.RR.n, where n is an
integer between 0 and 7.

These 40 MOPs are initially defined to simply write zero to x[rd],
but are designed to be redefined by later extensions to perform some
other action.

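The initial behaviour reduces to a one-liner; a toy model (illustrative C,
not the QEMU decoder):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* initial Zimop semantics: MOP.R.n / MOP.RR.n simply write zero to x[rd]
     * (writes to x0 are discarded as usual) */
    static void exec_mop(uint64_t xreg[32], int rd)
    {
        if (rd != 0) {
            xreg[rd] = 0;
        }
    }

    int main(void)
    {
        uint64_t x[32] = { 0 };
        x[5] = 0xdeadbeef;
        exec_mop(x, 5);                          /* e.g. mop.r.0 t0 */
        printf("x5 = %" PRIu64 "\n", x[5]);      /* 0 */
        return 0;
    }
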
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Deepak Gupta <debug@rivosinc.com>
Message-ID: <20240709113652.1239-2-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18 12:00:42 +10:00
Richard Henderson
d74ec4d7dd trivial patches for 2024-07-17
-----BEGIN PGP SIGNATURE-----
 
 iQEzBAABCAAdFiEEe3O61ovnosKJMUsicBtPaxppPlkFAmaXpakACgkQcBtPaxpp
 Plnvvwf8DdybFjyhAVmiG6+6WhB5s0hJhZRiWzUY6ieMbgPzCUgWzfr/pJh6q44x
 rw+aVfe2kf1ysycx3DjcJpucrC1rQD/qV6dB3IA1rxidBOZfCb8iZwoaB6yS9Epp
 4uXIdfje4zO6oCMN17MTXvuQIEUK3ZHN0EQOs7vsA2d8/pHqBqRoixjz9KnKHlpk
 P6kyIXceZ4wLAtwFJqa/mBBRnpcSdaWuQpzpBsg1E3BXRXXfeuXJ8WmGp0kEOpzQ
 k7+2sPpuah2z7D+jNFBW0+3ZYDvO9Z4pomQ4al4w+DHDyWBF49WnnSdDSDbWwxI5
 K0vUlsDVU8yTnIEgN8BL82F8eub5Ug==
 =ZYHJ
 -----END PGP SIGNATURE-----

Merge tag 'pull-trivial-patches' of https://gitlab.com/mjt0k/qemu into staging

trivial patches for 2024-07-17

# gpg: Signature made Wed 17 Jul 2024 09:06:17 PM AEST
# gpg:                using RSA key 7B73BAD68BE7A2C289314B22701B4F6B1A693E59
# gpg: Good signature from "Michael Tokarev <mjt@tls.msk.ru>" [full]
# gpg:                 aka "Michael Tokarev <mjt@debian.org>" [full]
# gpg:                 aka "Michael Tokarev <mjt@corpit.ru>" [full]

* tag 'pull-trivial-patches' of https://gitlab.com/mjt0k/qemu:
  meson: Update meson-buildoptions.sh
  backends/rng-random: Get rid of qemu_open_old()
  backends/iommufd: Get rid of qemu_open_old()
  backends/hostmem-epc: Get rid of qemu_open_old()
  hw/vfio/container: Get rid of qemu_open_old()
  hw/usb/u2f-passthru: Get rid of qemu_open_old()
  hw/usb/host-libusb: Get rid of qemu_open_old()
  hw/i386/sgx: Get rid of qemu_open_old()
  tests/avocado: Remove the non-working virtio_check_params test
  doc/net/l2tpv3: Update boolean fields' description to avoid short-form use
  target/hexagon/imported/mmvec: Fix superfluous trailing semicolon
  util/oslib-posix: Fix superfluous trailing semicolon
  hw/i386/x86: Fix superfluous trailing semicolon
  accel/kvm/kvm-all: Fix superfluous trailing semicolon
  README.rst: add the missing punctuations
  block/curl: rewrite http header parsing function

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-18 10:07:23 +10:00
Zhao Liu
29ea19465b target/hexagon/imported/mmvec: Fix superfluous trailing semicolon
Fix the superfluous trailing semicolon in
target/hexagon/imported/mmvec/ext.idef.

Cc: Brian Cain <bcain@quicinc.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Brian Cain <bcain@quicinc.com>
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-07-17 14:04:15 +03:00
Richard Henderson
58ee924b97 * target/i386/tcg: fixes for seg_helper.c
* SEV: Don't allow automatic fallback to legacy KVM_SEV_INIT,
   but also don't use it by default
 * scsi: honor bootindex again for legacy drives
 * hpet, utils, scsi, build, cpu: miscellaneous bugfixes
 -----BEGIN PGP SIGNATURE-----
 
 iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmaWoP0UHHBib256aW5p
 QHJlZGhhdC5jb20ACgkQv/vSX3jHroOqfggAg3jxUp6B8dFTEid5aV6qvT4M6nwD
 TAYcAl5kRqTOklEmXiPCoA5PeS0rbr+5xzWLAKgkumjCVXbxMoYSr0xJHVuDwQWv
 XunUm4kpxJBLKK3uTGAIW9A21thOaA5eAoLIcqu2smBMU953TBevMqA7T67h22rp
 y8NnZWWdyQRH0RAaWsCBaHVkkf+DuHSG5LHMYhkdyxzno+UWkTADFppVhaDO78Ba
 Egk49oMO+G6of4+dY//p1OtAkAf4bEHePKgxnbZePInJrkgHzr0TJWf9gERWFzdK
 JiM0q6DeqopZm+vENxS+WOx7AyDzdN0qOrf6t9bziXMg0Rr2Z8bu01yBCQ==
 =cZhV
 -----END PGP SIGNATURE-----

Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging

* target/i386/tcg: fixes for seg_helper.c
* SEV: Don't allow automatic fallback to legacy KVM_SEV_INIT,
  but also don't use it by default
* scsi: honor bootindex again for legacy drives
* hpet, utils, scsi, build, cpu: miscellaneous bugfixes

# gpg: Signature made Wed 17 Jul 2024 02:34:05 AM AEST
# gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg:                issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]

* tag 'for-upstream' of https://gitlab.com/bonzini/qemu:
  target/i386/tcg: save current task state before loading new one
  target/i386/tcg: use X86Access for TSS access
  target/i386/tcg: check for correct busy state before switching to a new task
  target/i386/tcg: Compute MMU index once
  target/i386/tcg: Introduce x86_mmu_index_{kernel_,}pl
  target/i386/tcg: Reorg push/pop within seg_helper.c
  target/i386/tcg: use PUSHL/PUSHW for error code
  target/i386/tcg: Allow IRET from user mode to user mode with SMAP
  target/i386/tcg: Remove SEG_ADDL
  target/i386/tcg: fix POP to memory in long mode
  hpet: fix HPET_TN_SETVAL for high 32-bits of the comparator
  hpet: fix clamping of period
  docs: Update description of 'user=username' for '-run-with'
  qemu/timer: Add host ticks function for LoongArch
  scsi: fix regression and honor bootindex again for legacy drives
  hw/scsi/lsi53c895a: bump instruction limit in scripts processing to fix regression
  disas: Fix build against Capstone v6
  cpu: Free queued CPU work
  Revert "qemu-char: do not operate on sources from finalize callbacks"
  i386/sev: Don't allow automatic fallback to legacy KVM_SEV*_INIT

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-17 15:40:28 +10:00
Peter Maydell
de680286b5 accel/tcg: Make cpu_exec_interrupt hook mandatory
The TCGCPUOps::cpu_exec_interrupt hook is currently not mandatory; if
it is left NULL then we treat it as if it had returned false. However,
since pretty much every architecture needs to handle interrupts,
almost every target we have provides the hook. The one exception is
Tricore, which doesn't currently implement the architectural
interrupt handling.

Add a "do nothing" implementation of the cpu_exec_interrupt hook for Tricore,
assert on startup that the CPU does provide the hook, and remove
the runtime NULL check before calling it.

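A self-contained toy of the pattern described above (simplified types; the
real hook takes a CPUState pointer and the interrupt_request mask and
returns bool):

    #include <assert.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    /* simplified stand-in for the ops table holding the hook */
    typedef struct {
        bool (*cpu_exec_interrupt)(void *cpu, int interrupt_request);
    } CPUOpsLike;

    /* "do nothing" hook for a target with no architectural interrupt handling */
    static bool do_nothing_exec_interrupt(void *cpu, int interrupt_request)
    {
        (void)cpu;
        (void)interrupt_request;
        return false;   /* never takes an interrupt */
    }

    int main(void)
    {
        CPUOpsLike ops = { .cpu_exec_interrupt = do_nothing_exec_interrupt };
        assert(ops.cpu_exec_interrupt != NULL);   /* startup assert replaces per-call NULL check */
        printf("interrupt taken: %d\n", ops.cpu_exec_interrupt(NULL, 0));
        return 0;
    }
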
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240712113949.4146855-1-peter.maydell@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-07-16 20:04:08 +02:00
Paolo Bonzini
6a079f2e68 target/i386/tcg: save current task state before loading new one
This is how the steps are ordered in the manual.  EFLAGS.NT is
overwritten after the fact in the saved image.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 18:18:25 +02:00
Paolo Bonzini
8b13106508 target/i386/tcg: use X86Access for TSS access
This takes care of probing the vaddr range in advance, and is also faster
because it avoids repeated TLB lookups.  It also matches the Intel manual
better, as it says "Checks that the current (old) TSS, new TSS, and all
segment descriptors used in the task switch are paged into system memory";
note however that it's not clear how the processor checks for segment
descriptors, and this check is not included in the AMD manual.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 18:18:25 +02:00
Paolo Bonzini
05d41bbcb3 target/i386/tcg: check for correct busy state before switching to a new task
This step is listed in the Intel manual: "Checks that the new task is available
(call, jump, exception, or interrupt) or busy (IRET return)".

The AMD manual lists the same operation under the "Preventing recursion"
paragraph of "12.3.4 Nesting Tasks", though it is not clear if the processor
checks the busy bit in the IRET case.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 18:18:24 +02:00
Paolo Bonzini
8053862af9 target/i386/tcg: Compute MMU index once
Add the MMU index to the StackAccess struct, so that it can be cached
or (in the next patch) computed from information that is not in
CPUX86State.

Co-developed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 18:18:24 +02:00
Richard Henderson
fffe424b38 target/i386/tcg: Introduce x86_mmu_index_{kernel_,}pl
Disconnect mmu index computation from the current pl
as stored in env->hflags.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Link: https://lore.kernel.org/r/20240617161210.4639-2-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 18:18:24 +02:00
Richard Henderson
059368bcf5 target/i386/tcg: Reorg push/pop within seg_helper.c
Interrupts and call gates should use accesses with the DPL as
the privilege level.  While computing the applicable MMU index
is easy, the harder thing is how to plumb it in the code.

One possibility could be to add a single argument to the PUSH* macros
for the privilege level, but this is repetitive and risks confusion
between the involved privilege levels.

Another possibility is to pass both CPL and DPL, and adjusting both
PUSH* and POP* to use specific privilege levels (instead of using
cpu_{ld,st}*_data). This makes the code more symmetric.

However, a more complicated but much nicer approach is to use a structure
to contain the stack parameters, env, unwind return address, and rewrite
the macros into functions.  The struct provides an easy home for the MMU
index as well.

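A compact, self-contained sketch of that idea (field and function names are
illustrative, not the ones in seg_helper.c):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* bundle everything the push/pop helpers need, including the MMU index,
     * instead of threading extra macro arguments around */
    typedef struct StackAccess {
        uint8_t *mem;          /* stands in for env + guest memory */
        uint32_t sp;
        uint32_t sp_mask;
        int mmu_index;         /* carried along for the real helpers; unused in this toy */
    } StackAccess;

    static void pushl(StackAccess *sa, uint32_t val)
    {
        sa->sp = (sa->sp - 4) & sa->sp_mask;
        memcpy(sa->mem + sa->sp, &val, sizeof(val));
    }

    static uint32_t popl(StackAccess *sa)
    {
        uint32_t val;
        memcpy(&val, sa->mem + sa->sp, sizeof(val));
        sa->sp = (sa->sp + 4) & sa->sp_mask;
        return val;
    }

    int main(void)
    {
        static uint8_t ram[256];
        StackAccess sa = { .mem = ram, .sp = 128, .sp_mask = 0xff, .mmu_index = 0 };
        pushl(&sa, 0x1234);
        printf("popped 0x%x, sp back at %u\n", popl(&sa), sa.sp);
        return 0;
    }
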
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Link: https://lore.kernel.org/r/20240617161210.4639-4-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 18:18:24 +02:00
Paolo Bonzini
312ef3243e target/i386/tcg: use PUSHL/PUSHW for error code
Do not pre-decrement esp, let the macros subtract the appropriate
operand size.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 18:18:24 +02:00
Paolo Bonzini
0bd385e7e3 target/i386/tcg: Allow IRET from user mode to user mode with SMAP
This fixes a bug wherein i386/tcg assumed an interrupt return using
the IRET instruction was always returning from kernel mode to either
kernel mode or user mode. This assumption is violated when IRET is used
as a clever way to restore thread state, as for example in the dotnet
runtime. There, IRET returns from user mode to user mode.

The fix is to perform stack accesses from IRET and RETF, as well as
accesses to the parameters in a call gate, as normal data accesses using
the current CPL.  The bug manifested itself as a page fault in the guest
Linux kernel due to SMAP preventing the access.

This bug appears to have been in QEMU since the beginning.

Analyzed-by: Robert R. Henry <rrh.henry@gmail.com>
Co-developed-by: Robert R. Henry <rrh.henry@gmail.com>
Signed-off-by: Robert R. Henry <rrh.henry@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 18:18:24 +02:00
Richard Henderson
a7cf494993 target/i386/tcg: Remove SEG_ADDL
This truncation is now handled by MMU_*32_IDX.  The introduction of
MMU_*32_IDX in fact applied correct 32-bit wraparound to 16-bit accesses
with a high segment base (e.g.  big real mode or vm86 mode), which did
not use SEG_ADDL.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Link: https://lore.kernel.org/r/20240617161210.4639-3-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 18:18:24 +02:00
Paolo Bonzini
3afc6539a8 target/i386/tcg: fix POP to memory in long mode
In long mode, POP to memory will write a full 64-bit value.  However,
the call to gen_writeback() in gen_POP will use MO_32 because the
decoding table is incorrect.

The bug was latent until commit aea49fbb01 ("target/i386: use gen_writeback()
within gen_POP()", 2024-06-08), and then became visible because gen_op_st_v
now receives op->ot instead of the "ot" returned by gen_pop_T0.

Analyzed-by: Clément Chigot <chigot@adacore.com>
Fixes: 5e9e21bcc4 ("target/i386: move 60-BF opcodes to new decoder", 2024-05-07)
Tested-by: Clément Chigot <chigot@adacore.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 18:18:24 +02:00
Michael Roth
9d38d9dca2 i386/sev: Don't allow automatic fallback to legacy KVM_SEV*_INIT
Currently, if the 'legacy-vm-type' property of the sev-guest object is
'off', QEMU will attempt to use the newer KVM_SEV_INIT2 kernel
interface in conjunction with the newer KVM_X86_SEV_VM and
KVM_X86_SEV_ES_VM KVM VM types, and will automatically fall back to the
legacy KVM_SEV*_INIT interfaces if the kernel does not support it.

This can lead to measurement changes if, for instance, an SEV guest was
created on a host that originally had an older kernel that didn't
support KVM_SEV_INIT2, but is booted on the same host later on after the
host kernel was upgraded.

Instead, if legacy-vm-type is 'off', QEMU should fail if the
KVM_SEV_INIT2 interface is not provided by the current host kernel.
Modify the fallback handling accordingly.

In the future, VMSA features and other flags might be added to QEMU
which will require legacy-vm-type to be 'off' because they will rely
on the newer KVM_SEV_INIT2 interface. It may be difficult to convey to
users what values of legacy-vm-type are compatible with which
features/options, so as part of this rework, switch legacy-vm-type to a
tri-state OnOffAuto option. 'auto' in this case will automatically
switch to using the newer KVM_SEV_INIT2, but only if it is required to
make use of new VMSA features or other options only available via
KVM_SEV_INIT2.

Defining 'auto' in this way would avoid inadvertently breaking
compatibility with older kernels, since KVM_SEV_INIT2 would only be used
in cases where users opt into newer features that are only available via
KVM_SEV_INIT2 and newer kernels. It also provides better default
behavior than the legacy-vm-type=off behavior that was previously in
place, so make it the default for 9.1+ machine types.

Cc: Daniel P. Berrangé <berrange@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
cc: kvm@vger.kernel.org
Signed-off-by: Michael Roth <michael.roth@amd.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Link: https://lore.kernel.org/r/20240710041005.83720-1-michael.roth@amd.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16 10:45:06 +02:00
Song Gao
3ef4b21a5c target/loongarch: Fix cpu_reset set wrong CSR_CRMD
After cpu_reset, DATF and DATM in CSR_CRMD are both 0.
See section 6.4 of the manual[1].

  [1]: https://github.com/loongson/LoongArch-Documentation/releases/download/2023.04.20/LoongArch-Vol1-v1.10-EN.pdf

Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Message-Id: <20240705021839.1004374-2-gaosong@loongson.cn>
2024-07-12 09:41:18 +08:00
Song Gao
bba1c36da0 target/loongarch: Set CSR_PRCFG1 and CSR_PRCFG2 values
We set the value of register CSR_PRCFG3, but left out CSR_PRCFG1
and CSR_PRCFG2. Set CSR_PRCFG1 and CSR_PRCFG2 according to the
default values of the physical machine.

Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Message-Id: <20240705021839.1004374-1-gaosong@loongson.cn>
2024-07-12 09:41:18 +08:00
Feiyang Chen
785875874d target/loongarch: Remove avail_64 in trans_srai_w() and simplify it
Since srai.w is a valid instruction on la32, remove the avail_64 check
and simplify trans_srai_w().

Fixes: c0c0461e3a ("target/loongarch: Add avail_64 to check la64-only instructions")
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Feiyang Chen <chris.chenfeiyang@gmail.com>
Message-Id: <20240628033357.50027-1-chris.chenfeiyang@gmail.com>
Signed-off-by: Song Gao <gaosong@loongson.cn>
2024-07-12 09:41:18 +08:00