Commit Graph

12614 Commits

Author SHA1 Message Date
Daniel Henrique Barboza
f25974f46a target/riscv/kvm: add RISCV_CONFIG_REG()
Create a RISCV_CONFIG_REG() macro, similar to what other regs use, to
hide away some of the boilerplate.

Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-ID: <20231208183835.2411523-5-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-01-10 18:47:46 +10:00
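(For illustration only: a minimal sketch of the kind of boilerplate-hiding
macro described above. The exact expansion is an assumption, not the actual
patch; kvm_riscv_reg_id() is the pre-existing helper named in the
neighbouring commits.)

    /* Sketch: build the KVM register ID for a CONFIG register in one place. */
    #define RISCV_CONFIG_REG(env, name) \
        kvm_riscv_reg_id(env, KVM_REG_RISCV_CONFIG, \
                         KVM_REG_RISCV_CONFIG_REG(name))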
Daniel Henrique Barboza
10f86d1b84 target/riscv/kvm: change timer regs size to u64
KVM_REG_RISCV_TIMER regs are always u64 according to the KVM API, but at
this moment we'll return u32 regs if we're running a RISCV32 target.

Use the kvm_riscv_reg_id_u64() helper in RISCV_TIMER_REG() to fix it.

Reported-by: Andrew Jones <ajones@ventanamicro.com>
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-ID: <20231208183835.2411523-4-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-01-10 18:47:46 +10:00
Daniel Henrique Barboza
450bd6618f target/riscv/kvm: change KVM_REG_RISCV_FP_D to u64
KVM_REG_RISCV_FP_D regs are always u64 size. Using kvm_riscv_reg_id() in
RISCV_FP_D_REG() ends up encoding the wrong size if we're running with
TARGET_RISCV32.

Create a new helper that returns a KVM ID with u64 size and use it with
RISCV_FP_D_REG().

Reported-by: Andrew Jones <ajones@ventanamicro.com>
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-ID: <20231208183835.2411523-3-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-01-10 18:47:46 +10:00
Daniel Henrique Barboza
49c211ffca target/riscv/kvm: change KVM_REG_RISCV_FP_F to u32
KVM_REG_RISCV_FP_F regs have u32 size according to the API, but by using
kvm_riscv_reg_id() in RISCV_FP_F_REG() we're returning u64 sizes when
running with TARGET_RISCV64. The most likely reason no one noticed this
is that we don't implement kvm_cpu_synchronize_state() in RISC-V yet.

Create a new helper that returns a KVM ID with u32 size and use it in
RISCV_FP_F_REG().

Reported-by: Andrew Jones <ajones@ventanamicro.com>
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-ID: <20231208183835.2411523-2-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-01-10 18:47:46 +10:00
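(For illustration: a sketch of the fixed-size ID helpers that the three
size-fix patches above describe. The signatures and macro expansions are
assumptions, not the actual code.)

    /* Sketch: encode the register size explicitly instead of deriving it
     * from the target's XLEN. */
    static uint64_t kvm_riscv_reg_id_u32(uint64_t type, uint64_t idx)
    {
        return KVM_REG_RISCV | KVM_REG_SIZE_U32 | type | idx;
    }

    static uint64_t kvm_riscv_reg_id_u64(uint64_t type, uint64_t idx)
    {
        return KVM_REG_RISCV | KVM_REG_SIZE_U64 | type | idx;
    }

    #define RISCV_FP_F_REG(idx)  kvm_riscv_reg_id_u32(KVM_REG_RISCV_FP_F, idx)
    #define RISCV_FP_D_REG(idx)  kvm_riscv_reg_id_u64(KVM_REG_RISCV_FP_D, idx)
    #define RISCV_TIMER_REG(name) \
        kvm_riscv_reg_id_u64(KVM_REG_RISCV_TIMER, KVM_REG_RISCV_TIMER_REG(name))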
Daniel Henrique Barboza
8d326cb88b target/riscv/cpu.c: fix machine IDs getters
mvendorid is a uint32 property; mimpid and marchid are uint64 properties.
But their getters return bools. The reason this went under the radar for
this long is that we have no code using the getters.

The problem can be seen via the 'qom-get' API though. Launching QEMU
with the 'veyron-v1' CPU, a model with:

VEYRON_V1_MVENDORID: 0x61f (1567)
VEYRON_V1_MIMPID: 0x111 (273)
VEYRON_V1_MARCHID: 0x8000000000010000 (9223372036854841344)

This is what the API returns when retrieving these properties:

(qemu) qom-get /machine/soc0/harts[0] mvendorid
true
(qemu) qom-get /machine/soc0/harts[0] mimpid
true
(qemu) qom-get /machine/soc0/harts[0] marchid
true

After this patch:

(qemu) qom-get /machine/soc0/harts[0] mvendorid
1567
(qemu) qom-get /machine/soc0/harts[0] mimpid
273
(qemu) qom-get /machine/soc0/harts[0] marchid
9223372036854841344

Fixes: 1e34150045 ("target/riscv/cpu.c: restrict 'mvendorid' value")
Fixes: a1863ad368 ("target/riscv/cpu.c: restrict 'mimpid' value")
Fixes: d6a427e2c0 ("target/riscv/cpu.c: restrict 'marchid' value")
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20231211170732.2541368-1-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-01-10 18:47:46 +10:00
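(A sketch of the getter shape being fixed, assuming the usual QOM
visitor-based property pattern; the function and field names here are
illustrative, not the actual patch.)

    /* Before (sketch): the value is funnelled through a bool, so qom-get
     * reports "true" instead of the number. */
    static void cpu_get_mvendorid(Object *obj, Visitor *v, const char *name,
                                  void *opaque, Error **errp)
    {
        bool value = RISCV_CPU(obj)->cfg.mvendorid;   /* wrong type */
        visit_type_bool(v, name, &value, errp);
    }

    /* After (sketch): use the property's real type. */
    static void cpu_get_mvendorid(Object *obj, Visitor *v, const char *name,
                                  void *opaque, Error **errp)
    {
        uint32_t value = RISCV_CPU(obj)->cfg.mvendorid;
        visit_type_uint32(v, name, &value, errp);
    }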
Ivan Klokov
6f5bb7d405 target/riscv/pmp: Use hwaddr instead of target_ulong for RV32
The Sv32 page-based virtual-memory scheme described in the RISC-V privileged
spec, Section 5.3, supports 34-bit physical addresses for RV32, so the
PMP scheme must support addresses wider than XLEN for RV32. However, the
PMP address register format is still 32 bits wide.

Signed-off-by: Ivan Klokov <ivan.klokov@syntacore.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20231123091214.20312-1-ivan.klokov@syntacore.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-01-10 18:47:46 +10:00
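(A rough illustration of why hwaddr is needed, not the actual diff: a
pmpaddr CSR holds physical-address bits [33:2] on RV32, so the decoded
address no longer fits in a 32-bit target_ulong.)

    /* Sketch: widen before shifting so the 34-bit address survives on RV32. */
    static hwaddr pmp_get_addr(target_ulong pmpaddr_csr)
    {
        return (hwaddr)pmpaddr_csr << 2;
    }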
LIU Zhiwei
7767f8b122 target/riscv: Not allow write mstatus_vs without RVV
If the CPU does not implement the Vector extension, mstatus.VS is usually
hardwired to zero, so we should not allow writing a non-zero value to
this field.

Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20231215023313.1708-1-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-01-10 18:47:46 +10:00
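(A minimal sketch of the write-mask idea, under the assumption that mstatus
writes go through a mask; the surrounding bits shown are illustrative.)

    /* Sketch: mstatus.VS is only writable when the V extension exists. */
    target_ulong mask = MSTATUS_SIE | MSTATUS_MIE | MSTATUS_FS /* ... */;
    if (riscv_has_ext(env, RVV)) {
        mask |= MSTATUS_VS;
    }
    env->mstatus = (env->mstatus & ~mask) | (val & mask);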
LIU Zhiwei
564a28bda1 target/riscv: Fix th.dcache.cval1 privilege check
According to the specification, th.dcache.cval1 can be executed
under all privileges.
The specification for xtheadcmo is located at:
https://github.com/T-head-Semi/thead-extension-spec/blob/master/xtheadcmo/dcache_cval1.adoc

Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Christoph Muellner <christoph.muellner@vrull.eu>
Message-ID: <20231208094315.177-1-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-01-10 18:47:46 +10:00
Max Chou
79fc6d38a8 target/riscv: The whole vector register move instructions depend on vsew
The RISC-V V spec section 16.6 says that the whole vector register move
instructions operate as if EEW=SEW, so they should depend on the vsew
field of the vtype register.

Signed-off-by: Max Chou <max.chou@sifive.com>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20231129170400.21251-3-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-01-10 18:47:46 +10:00
Max Chou
4eff52cd46 target/riscv: Add vill check for whole vector register move instructions
The ratified version of RISC-V V spec section 16.6 says that
`The instructions operate as if EEW=SEW`.

So the whole vector register move instructions depend on the vtype
register, which means they should raise an illegal-instruction exception
when vtype.vill=1.

Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20231129170400.21251-2-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-01-10 18:47:46 +10:00
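(A hedged sketch of the decode-time check the two patches above describe;
the helper shape is an assumption, not the actual translation code.)

    /* Sketch: check for the whole vector register move instructions. */
    static bool whole_reg_move_check(DisasContext *s)
    {
        if (s->vill) {
            return false;   /* vtype invalid: illegal-instruction exception */
        }
        /* EEW = SEW, so the element width is taken from vtype.vsew. */
        return true;
    }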
Peter Maydell
e2862554c2 target/arm: Add FEAT_NV2 to max, neoverse-n2, neoverse-v1 CPUs
Enable FEAT_NV2 on the 'max' CPU, and stop filtering it out for
the Neoverse N2 and Neoverse V1 CPUs.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:44:45 +00:00
Peter Maydell
3b32140e70 target/arm: Enhance CPU_LOG_INT to show SPSR on AArch64 exception-entry
We already print various lines of information when we take an
exception, including the ELR and (if relevant) the FAR. Now
that FEAT_NV means that we might report something other than
the old PSTATE to the guest as the SPSR, it's worth logging
this as well.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:44:45 +00:00
Peter Maydell
bde0e60be4 target/arm: Report HCR_EL2.{NV,NV1,NV2} in cpu dumps
When interpreting CPU dumps where FEAT_NV and FEAT_NV2 are in use,
it's helpful to include the values of HCR_EL2.{NV,NV1,NV2} in the CPU
dump format, as a way of distinguishing when we are in EL1 as part of
executing guest-EL2 and when we are just in normal EL1.

Add the bits to the end of the log line that shows PSTATE and similar
information:

PSTATE=000003c9 ---- EL2h  BTYPE=0 NV NV2

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:44:45 +00:00
Peter Maydell
f5bd261a61 target/arm: Mark up VNCR offsets (offsets >= 0x200, except GIC)
Mark up the cpreginfo structs to indicate offsets for system
registers from VNCR_EL2, as defined in table D8-66 in rule R_CSRPQ in
the Arm ARM.  This covers all the remaining offsets at 0x200 and
above, except for the GIC ICH_* registers.

(Note that because we don't implement FEAT_SPE, FEAT_TRF,
FEAT_MPAM, FEAT_BRBE or FEAT_AMUv1p1 we don't implement any
of the registers that use offsets at 0x800 and above.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:44:45 +00:00
Peter Maydell
46932cf26e target/arm: Mark up VNCR offsets (offsets 0x168..0x1f8)
Mark up the cpreginfo structs to indicate offsets for system
registers from VNCR_EL2, as defined in table D8-66 in rule R_CSRPQ in
the Arm ARM.  This commit covers offsets 0x168 to 0x1f8.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:44:45 +00:00
Peter Maydell
bb7b95b070 target/arm: Mark up VNCR offsets (offsets 0x100..0x160)
Mark up the cpreginfo structs to indicate offsets for system
registers from VNCR_EL2, as defined in table D8-66 in rule R_CSRPQ in
the Arm ARM.  This commit covers offsets 0x100 to 0x160.

Many (but not all) of the registers in this range have _EL12 aliases,
and the slot in memory is shared between the _EL12 version of the
register and the _EL1 version.  Where we programmatically generate
the regdef for the _EL12 register, arrange that its
nv2_redirect_offset is set up correctly to do this.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:44:45 +00:00
Peter Maydell
dfe8a9ee6a target/arm: Mark up VNCR offsets (offsets 0x0..0xff)
Mark up the cpreginfo structs to indicate offsets for system
registers from VNCR_EL2, as defined in table D8-66 in rule R_CSRPQ in
the Arm ARM. This commit covers offsets below 0x100; all of these
registers are redirected to memory regardless of the value of
HCR_EL2.NV1.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:44:45 +00:00
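(For illustration, a sketch of what "marking up" a cpreginfo might look
like; the field name nv2_redirect_offset comes from the commit text, while
the register chosen and the offset value are assumptions.)

    /* Sketch: tie a system register to its VNCR_EL2-relative memory slot. */
    static const ARMCPRegInfo actlr_el1_reginfo = {
        .name = "ACTLR_EL1", .state = ARM_CP_STATE_AA64,
        .opc0 = 3, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 1,
        .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0,
        .nv2_redirect_offset = 0x118,   /* offset per table D8-66 (assumed) */
    };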
Peter Maydell
674e534527 target/arm: Report VNCR_EL2 based faults correctly
If FEAT_NV2 redirects a system register access to a memory offset
from VNCR_EL2, that access might fault.  In this case we need to
report the correct syndrome information:
 * Data Abort, from same-EL
 * no ISS information
 * the VNCR bit (bit 13) is set

and the exception must be taken to EL2.

Save an appropriate syndrome template when generating code; we can
then use that to:
 * select the right target EL
 * reconstitute a correct final syndrome for the data abort
 * report the right syndrome if we take a FEAT_RME granule protection
   fault on the VNCR-based write

Note that because VNCR is bit 13, we must start keeping bit 13 in
template syndromes, by adjusting ARM_INSN_START_WORD2_SHIFT.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:44:45 +00:00
Peter Maydell
daf9b4a00f target/arm: Implement FEAT_NV2 redirection of sysregs to RAM
FEAT_NV2 requires that when HCR_EL2.{NV,NV2} == 0b11 then accesses by
EL1 to certain system registers are redirected to RAM.  The full list
of affected registers is in the table in rule R_CSRPQ in the Arm ARM.
The registers may be normally accessible at EL1 (like ACTLR_EL1), or
normally UNDEF at EL1 (like HCR_EL2).  Some registers redirect to RAM
only when HCR_EL2.NV1 is 0, and some only when HCR_EL2.NV1 is 1;
others trap in both cases.

Add the infrastructure for identifying which registers should be
redirected and turning them into memory accesses.

This code does not set the correct syndrome or arrange for the
exception to be taken to the correct target EL if the access via
VNCR_EL2 faults; we will do that in the next commit.

Subsequent commits will mark up the relevant regdefs to set their
nv2_redirect_offset, and if relevant one of the two flags which
indicates that the redirect happens only for a particular value of
HCR_EL2.NV1.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2024-01-09 14:43:53 +00:00
Peter Maydell
c35da11df4 target/arm: Handle FEAT_NV2 redirection of SPSR_EL2, ELR_EL2, ESR_EL2, FAR_EL2
Under FEAT_NV2, when HCR_EL2.{NV,NV2} == 0b11 at EL1, accesses to the
registers SPSR_EL2, ELR_EL2, ESR_EL2, FAR_EL2 and TFSR_EL2 (which
would UNDEF without FEAT_NV or FEAT_NV2) should instead access the
equivalent EL1 registers SPSR_EL1, ELR_EL1, ESR_EL1, FAR_EL1 and
TFSR_EL1.

Because there are only five registers involved and the encoding for
the EL1 register is identical to that of the EL2 register except
that opc1 is 0, we handle this by finding the EL1 register in the
hash table and using it instead.

Note that traps that apply to direct accesses to the EL1 register,
such as active fine-grained traps or other trap bits, do not trigger
when it is accessed via the EL2 encoding in this way.  However, some
traps that are defined by the EL2 register may apply.  We therefore
call the EL2 register's accessfn first.  The only one of the five
which has such traps is TFSR_EL2: make sure its accessfn correctly
handles both FEAT_NV (where we trap to EL2 without checking ATA bits)
and FEAT_NV2 (where we check ATA bits and then redirect to TFSR_EL1).

(We don't need the NV1 tbflag bit until the next patch, but we
introduce it here to avoid putting the NV, NV1, NV2 bits in an
odd order.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:53 +00:00
Peter Maydell
ef8a4a8816 target/arm: Handle FEAT_NV2 changes to when SPSR_EL1.M reports EL2
With FEAT_NV2, the condition for when SPSR_EL1.M should report that
an exception was taken from EL2 changes.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:52 +00:00
Peter Maydell
b5ba6c99a8 target/arm: Implement VNCR_EL2 register
For FEAT_NV2, a new system register VNCR_EL2 holds the base
address of the memory which nested-guest system register
accesses are redirected to. Implement this register.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:52 +00:00
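(A rough sketch of such a register definition; the field values follow the
architected encoding of VNCR_EL2 but are assumptions about the patch, not
the actual code.)

    /* Sketch: VNCR_EL2 holds the base address used for FEAT_NV2 redirection. */
    static const ARMCPRegInfo vncr_reginfo = {
        .name = "VNCR_EL2", .state = ARM_CP_STATE_AA64,
        .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 2, .opc2 = 0,
        .access = PL2_RW,
        .fieldoffset = offsetof(CPUARMState, cp15.vncr_el2),
    };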
Peter Maydell
a13cd25d9b target/arm: Handle HCR_EL2 accesses for FEAT_NV2 bits
FEAT_NV2 defines another new bit in HCR_EL2: NV2. When the
feature is enabled, allow this bit to be written in HCR_EL2.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:52 +00:00
Peter Maydell
1274a47fbd target/arm: Add FEAT_NV to max, neoverse-n2, neoverse-v1 CPUs
Enable FEAT_NV on the 'max' CPU, and stop filtering it out for the
Neoverse N2 and Neoverse V1 CPUs.  We continue to downgrade FEAT_NV2
support to FEAT_NV for the latter two CPU types.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:51 +00:00
Peter Maydell
dea9104a4f target/arm: Handle FEAT_NV page table attribute changes
FEAT_NV requires that when HCR_EL2.{NV,NV1} == {1,1} the handling
of some of the page table attribute bits changes for the EL1&0
translation regime:

 * for block and page descriptors:
  - bit [54] holds PXN, not UXN
  - bit [53] is RES0, and the effective value of UXN is 0
  - bit [6], AP[1], is treated as 0
 * for table descriptors, when hierarchical permissions are enabled:
  - bit [60] holds PXNTable, not UXNTable
  - bit [59] is RES0
  - bit [61], APTable[0] is treated as 0

Implement these changes to the page table attribute handling.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:51 +00:00
Peter Maydell
2e9b1e50bd target/arm: Treat LDTR* and STTR* as LDR/STR when NV, NV1 is 1, 1
FEAT_NV requires (per I_JKLJK) that when HCR_EL2.{NV,NV1} is {1,1} the
unprivileged-access instructions LDTR, STTR etc behave as normal
loads and stores. Implement the check that handles this.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:51 +00:00
Peter Maydell
f11440b426 target/arm: Don't honour PSTATE.PAN when HCR_EL2.{NV, NV1} == {1, 1}
For FEAT_NV, when HCR_EL2.{NV,NV1} is {1,1} PAN is always disabled
even when the PSTATE.PAN bit is set. Implement this by having
arm_pan_enabled() return false in this situation.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:50 +00:00
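(A minimal sketch of the check described above, assuming the usual QEMU
helpers; not the actual patch.)

    /* Sketch: PAN is effectively off when HCR_EL2.{NV,NV1} == {1,1}. */
    static bool arm_pan_enabled(CPUARMState *env)
    {
        if (is_a64(env)) {
            if ((arm_hcr_el2_eff(env) & (HCR_NV | HCR_NV1)) ==
                (HCR_NV | HCR_NV1)) {
                return false;
            }
            return env->pstate & PSTATE_PAN;
        } else {
            return env->uncached_cpsr & CPSR_PAN;
        }
    }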
Peter Maydell
7fda076357 target/arm: Always use arm_pan_enabled() when checking if PAN is enabled
Currently the code in target/arm/helper.c mostly checks the PAN bits
in env->pstate or env->uncached_cpsr directly when it wants to know
if PAN is enabled, because in most callsites we know whether we are
in AArch64 or AArch32. We do have an arm_pan_enabled() function, but
we only use it in a few places where the code might run in either an
AArch32 or AArch64 context.

For FEAT_NV, when HCR_EL2.{NV,NV1} is {1,1} PAN is always disabled
even when the PSTATE.PAN bit is set, so the "is PAN enabled" test
becomes more complicated. Make all places that check for PAN use
arm_pan_enabled(), so we have a place to put the FEAT_NV test.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:50 +00:00
Peter Maydell
ad4e2d4db1 target/arm: Trap registers when HCR_EL2.{NV, NV1} == {1, 1}
When HCR_EL2.{NV,NV1} is {1,1} we must trap five extra registers to
EL2: VBAR_EL1, ELR_EL1, SPSR_EL1, SCXTNUM_EL1 and TFSR_EL1.
Implement these traps.

This trap does not apply when FEAT_NV2 is implemented and enabled;
include the check that HCR_EL2.NV2 is 0 here, to save us having
to come back and add it later.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:49 +00:00
Peter Maydell
29eda9cd19 target/arm: Set SPSR_EL1.M correctly when nested virt is enabled
FEAT_NV requires that when HCR_EL2.{NV,NV1} == {1,0} and an exception
is taken from EL1 to EL1 then the reported EL in SPSR_EL1.M should be
EL2, not EL1.  Implement this behaviour.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:49 +00:00
Peter Maydell
b7ecc3da6c target/arm: Make NV reads of CurrentEL return EL2
FEAT_NV requires that when HCR_EL2.NV is set reads of the CurrentEL
register from EL1 always report EL2 rather than the real EL.
Implement this.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:48 +00:00
Peter Maydell
67d10fc473 target/arm: Trap sysreg accesses for FEAT_NV
For FEAT_NV, accesses to system registers and instructions from EL1
which would normally UNDEF there but which work in EL2 need to
instead be trapped to EL2. Detect this both for "we know this will
UNDEF at translate time" and "we found this UNDEFs at runtime", and
make the affected registers trap to EL2 instead.

The Arm ARM defines the set of registers that should trap in terms
of their names; for our implementation this would be both awkward
and inefficient as a test, so we instead trap based on the opc1
field of the sysreg. The regularity of the architectural choice
of encodings for sysregs means that in practice this captures
exactly the correct set of registers.

Regardless of how we try to define the registers this trapping
applies to, there's going to be a certain possibility of breakage
if new architectural features introduce new registers that don't
follow the current rules (FEAT_MEC is one example already visible
in the released sysreg XML, though not yet in the Arm ARM). This
approach seems to me to be straightforward and likely to require
a minimum of manual overrides.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:48 +00:00
Peter Maydell
44572fc984 target/arm: Move FPU/SVE/SME access checks up above ARM_CP_SPECIAL_MASK check
In handle_sys() we don't do the check for whether the register is
marked as needing an FPU/SVE/SME access check until after we've
handled the special cases covered by ARM_CP_SPECIAL_MASK.  This is
conceptually the wrong way around, because if for example we happen
to implement an FPU-access-checked register as ARM_CP_NOP, we should
do the access check first.

Move the access checks up so they are with all the other access
checks, not sandwiched between the special-case read/write handling
and the normal-case read/write handling. This doesn't change
behaviour at the moment, because we happen not to define any
cpregs with both ARM_CPU_{FPU,SVE,SME} and one of the cases
dealt with by ARM_CP_SPECIAL_MASK.

Moving this code also means we have the correct place to put the
FEAT_NV/FEAT_NV2 access handling, which should come after the access
checks and before we try to do any read/write action.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:48 +00:00
Peter Maydell
83aea11db0 target/arm: Make EL2 cpreg accessfns safe for FEAT_NV EL1 accesses
FEAT_NV and FEAT_NV2 will allow EL1 to attempt to access cpregs that
only exist at EL2. This means we're going to want to run their
accessfns when the CPU is at EL1. In almost all cases, the behaviour
we want is "the accessfn returns OK if at EL1".

Mostly the accessfn already does the right thing; in a few cases we
need to explicitly check that the EL is not 1 before applying various
trap controls, or split out an accessfn used both for an _EL1 and an
_EL2 register into two so we can handle the FEAT_NV case correctly
for the _EL2 register.

There are two registers where we want the accessfn to trap for
a FEAT_NV EL1 access: VSTTBR_EL2 and VSTCR_EL2 should UNDEF
an access from NonSecure EL1, not trap to EL2 under FEAT_NV.
The way we have written sel2_access() already results in this
behaviour.

We can identify the registers we care about here because they
all have opc1 == 4 or 5.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:47 +00:00
Peter Maydell
e730287cef target/arm: *_EL12 registers should UNDEF when HCR_EL2.E2H is 0
The alias registers like SCTLR_EL12 only exist when HCR_EL2.E2H
is 1; they should UNDEF otherwise. We weren't implementing this.
Add an intercept of the accessfn for these aliases, and implement
the UNDEF check.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:47 +00:00
Peter Maydell
6f53b1267b target/arm: Record correct opcode fields in cpreg for E2H aliases
For FEAT_VHE, we define a set of register aliases, so that for instance:
 * the SCTLR_EL1 either accesses the real SCTLR_EL1, or (if E2H is 1)
   SCTLR_EL2
 * a new SCTLR_EL12 register accesses SCTLR_EL1 if E2H is 1

However when we create the 'new_reg' cpreg struct for the SCTLR_EL12
register, we duplicate the information in the SCTLR_EL1 cpreg, which
means the opcode fields are those of SCTLR_EL1, not SCTLR_EL12.  This
is a problem for code which looks at the cpreg opcode fields to
determine behaviour (e.g.  in access_check_cp_reg()). In practice
the current checks we do there don't intersect with the *_EL12
registers, but for FEAT_NV this will become a problem.

Write the correct values from the encoding into the new_reg struct.
This restores the invariant that the cpreg that you get back
from the hashtable has opcode fields that match the key you used
to retrieve it.

When we call the readfn or writefn for the target register, we
pass it the cpreg struct for that target register, not the one
for the alias, in case the readfn/writefn want to look at the
opcode fields to determine behaviour. This means we need to
interpose custom read/writefns for the e12 aliases.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:46 +00:00
Peter Maydell
29a15a6167 target/arm: Allow use of upper 32 bits of TBFLAG_A64
The TBFLAG_A64 TB flag bits go in flags2, which for AArch64 guests
we know is 64 bits. However at the moment we use FIELD_EX32() and
FIELD_DP32() to read and write these bits, which only works for
bits 0 to 31. Since we're about to add a flag that uses bit 32,
switch to FIELD_EX64() and FIELD_DP64() so that this will work.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:46 +00:00
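(A small sketch of the accessor switch described above; the TBFLAG_A64
field used here is only an example.)

    /* Before: FIELD_EX32() can only reach bits 0..31 of flags2. */
    dc->nv = FIELD_EX32(tb_flags, TBFLAG_A64, NV);

    /* After: 64-bit accessors, so flags at bit 32 and above also work. */
    dc->nv = FIELD_EX64(tb_flags, TBFLAG_A64, NV);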
Peter Maydell
b9377d1c5f target/arm: Always honour HCR_EL2.TSC when HCR_EL2.NV is set
The HCR_EL2.TSC trap for trapping EL1 execution of SMC instructions
has a behaviour change for FEAT_NV when EL3 is not implemented:

 * in older architecture versions TSC was required to have no
   effect (i.e. the SMC insn UNDEFs)
 * with FEAT_NV, when HCR_EL2.NV == 1 the trap must apply
   (i.e. SMC traps to EL2, as it already does in all cases when
   EL3 is implemented)
 * in newer architecture versions, the behaviour either without
   FEAT_NV or with FEAT_NV and HCR_EL2.NV == 0 is relaxed to
   an IMPDEF choice between UNDEF and trap-to-EL2 (i.e. it is
   permitted to always honour HCR_EL2.TSC) for AArch64 only

Add the condition to honour the trap bit when HCR_EL2.NV == 1.  We
leave the HCR_EL2.NV == 0 case with the existing (UNDEF) behaviour,
as our IMPDEF choice (both because it avoids a behaviour change
for older CPU models and because we'd have to distinguish AArch32
from AArch64 if we opted to trap to EL2).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:46 +00:00
Peter Maydell
e37e98b7f9 target/arm: Enable trapping of ERET for FEAT_NV
When FEAT_NV is turned on via the HCR_EL2.NV bit, ERET instructions
are trapped, with the same syndrome information as for the existing
FEAT_FGT fine-grained trap (in the pseudocode this is handled in
AArch64.CheckForEretTrap()).

Rename the DisasContext and tbflag bits to reflect that they are
no longer exclusively for FGT traps, and set the tbflag bit when
FEAT_NV is enabled as well as when the FGT is enabled.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:45 +00:00
Peter Maydell
5725977915 target/arm: Implement HCR_EL2.AT handling
The FEAT_NV HCR_EL2.AT bit enables trapping of some address
translation instructions from EL1 to EL2.  Implement this behaviour.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:45 +00:00
Peter Maydell
67e55c73c3 target/arm: Handle HCR_EL2 accesses for bits introduced with FEAT_NV
FEAT_NV defines three new bits in HCR_EL2: NV, NV1 and AT.  When the
feature is enabled, allow these bits to be written, and flush the
TLBs for the bits which affect page table interpretation.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:44 +00:00
Peter Maydell
3d65b958c5 target/arm: Set CTR_EL0.{IDC,DIC} for the 'max' CPU
The CTR_EL0 register has some bits which allow the implementation to
tell the guest that it does not need to do cache maintenance for
data-to-instruction coherence and instruction-to-data coherence.
QEMU doesn't emulate caches and so our cache maintenance insns are
all NOPs.

We already have some models of specific CPUs where we set these bits
(e.g.  the Neoverse V1), but the 'max' CPU still uses the settings it
inherits from Cortex-A57.  Set the bits for 'max' as well, so the
guest doesn't need to do unnecessary work.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
2024-01-09 14:43:43 +00:00
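(A sketch of the change, with the assumption that the inherited Cortex-A57
CTR value is 0x8444c004; DIC is CTR_EL0 bit 29 and IDC is bit 28.)

    /* Sketch: advertise DIC and IDC so guests can skip cache maintenance. */
    cpu->ctr = 0x8444c004 | (1u << 29) | (1u << 28);    /* DIC | IDC */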
Stefan Hajnoczi
a4a411fbaf Replace "iothread lock" with "BQL" in comments
The term "iothread lock" is obsolete. The APIs use Big QEMU Lock (BQL)
in their names. Update the code comments to use "BQL" instead of
"iothread lock".

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Paul Durrant <paul@xen.org>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Message-id: 20240102153529.486531-5-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2024-01-08 10:45:43 -05:00
Stefan Hajnoczi
7c754c787e qemu/main-loop: rename qemu_cond_wait_iothread() to qemu_cond_wait_bql()
The name "iothread" is overloaded. Use the term Big QEMU Lock (BQL)
instead; it is already widely used and unambiguous.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Paul Durrant <paul@xen.org>
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-id: 20240102153529.486531-4-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2024-01-08 10:45:43 -05:00
Stefan Hajnoczi
32ead8e62f qemu/main-loop: rename QEMU_IOTHREAD_LOCK_GUARD to BQL_LOCK_GUARD
The name "iothread" is overloaded. Use the term Big QEMU Lock (BQL)
instead; it is already widely used and unambiguous.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Acked-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-id: 20240102153529.486531-3-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2024-01-08 10:45:43 -05:00
Stefan Hajnoczi
195801d700 system/cpus: rename qemu_mutex_lock_iothread() to bql_lock()
The Big QEMU Lock (BQL) has many names and they are confusing. The
actual QemuMutex variable is called qemu_global_mutex but it's commonly
referred to as the BQL in discussions and some code comments. The
locking APIs, however, are called qemu_mutex_lock_iothread() and
qemu_mutex_unlock_iothread().

The "iothread" name is historic and comes from when the main thread was
split into KVM vcpu threads and the "iothread" (now called the main
loop thread). I have contributed to the confusion myself by introducing
a separate --object iothread, a separate concept unrelated to the BQL.

The "iothread" name is no longer appropriate for the BQL. Rename the
locking APIs to:
- void bql_lock(void)
- void bql_unlock(void)
- bool bql_locked(void)

There are more APIs with "iothread" in their names. Subsequent patches
will rename them. There are also comments and documentation that will be
updated in later patches.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Acked-by: Fabiano Rosas <farosas@suse.de>
Acked-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Acked-by: Peter Xu <peterx@redhat.com>
Acked-by: Eric Farman <farman@linux.ibm.com>
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Acked-by: Hyman Huang <yong.huang@smartx.com>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-id: 20240102153529.486531-2-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2024-01-08 10:45:43 -05:00
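(For orientation, a minimal usage sketch of the renamed API; the
surrounding code is illustrative, not from the patch.)

    /* Sketch: code that previously used qemu_mutex_lock_iothread()/unlock(). */
    bql_lock();
    /* ... touch state protected by the Big QEMU Lock ... */
    assert(bql_locked());
    bql_unlock();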
Peter Maydell
33252ebde1 trivial patches for 2024-01-05
-----BEGIN PGP SIGNATURE-----
 
 iQFDBAABCAAtFiEEe3O61ovnosKJMUsicBtPaxppPlkFAmWYWJEPHG1qdEB0bHMu
 bXNrLnJ1AAoJEHAbT2saaT5Z4PEH/2vA3XIPf96IlrZilBFIOYfb8wkw6AGI7BG8
 R3xps+j4ih/RreQdJzswFzfCDaBZvdEPlHtu3YFsIKqfa/svLdVU6GKqjNiDq6XY
 FvoQAUZCSg6NaF8Xgd4AETcw7FedW0nodDzpE/jBj5WQjd1eJoD26uF4cYicVzIt
 gtb6tJJ3LtYc0pNIzxk2hPFTUrXTpfA5kdIADmd6Tg1sH87JJpWnmR49/a89Kpst
 mU/j2KtmqL94YFH93qbkNQ2jkcnQ6DimsOpgPBNVMmKdXSUA9eF3DHo54nzIbhnN
 rvWXiUp6d7EjyqTI0IquuajFnlRBRyn4VvtJPbxuzr78GH8XJ9o=
 =Iz+M
 -----END PGP SIGNATURE-----

Merge tag 'pull-trivial-patches' of https://gitlab.com/mjt0k/qemu into staging

trivial patches for 2024-01-05

# -----BEGIN PGP SIGNATURE-----
#
# iQFDBAABCAAtFiEEe3O61ovnosKJMUsicBtPaxppPlkFAmWYWJEPHG1qdEB0bHMu
# bXNrLnJ1AAoJEHAbT2saaT5Z4PEH/2vA3XIPf96IlrZilBFIOYfb8wkw6AGI7BG8
# R3xps+j4ih/RreQdJzswFzfCDaBZvdEPlHtu3YFsIKqfa/svLdVU6GKqjNiDq6XY
# FvoQAUZCSg6NaF8Xgd4AETcw7FedW0nodDzpE/jBj5WQjd1eJoD26uF4cYicVzIt
# gtb6tJJ3LtYc0pNIzxk2hPFTUrXTpfA5kdIADmd6Tg1sH87JJpWnmR49/a89Kpst
# mU/j2KtmqL94YFH93qbkNQ2jkcnQ6DimsOpgPBNVMmKdXSUA9eF3DHo54nzIbhnN
# rvWXiUp6d7EjyqTI0IquuajFnlRBRyn4VvtJPbxuzr78GH8XJ9o=
# =Iz+M
# -----END PGP SIGNATURE-----
# gpg: Signature made Fri 05 Jan 2024 19:29:21 GMT
# gpg:                using RSA key 7B73BAD68BE7A2C289314B22701B4F6B1A693E59
# gpg:                issuer "mjt@tls.msk.ru"
# gpg: Good signature from "Michael Tokarev <mjt@tls.msk.ru>" [full]
# gpg:                 aka "Michael Tokarev <mjt@corpit.ru>" [full]
# gpg:                 aka "Michael Tokarev <mjt@debian.org>" [full]
# Primary key fingerprint: 6EE1 95D1 886E 8FFB 810D  4324 457C E0A0 8044 65C5
#      Subkey fingerprint: 7B73 BAD6 8BE7 A2C2 8931  4B22 701B 4F6B 1A69 3E59

* tag 'pull-trivial-patches' of https://gitlab.com/mjt0k/qemu:
  docs: use "buses" rather than "busses"
  edu: fix DMA range upper bound check
  hw/net: cadence_gem: Fix MDIO_OP_xxx values
  audio/audio.c: remove trailing newline in error_setg
  chardev/char.c: fix "abstract device type" error message
  target/riscv: Fix mcycle/minstret increment behavior

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-01-08 10:28:26 +00:00
Song Gao
5c23704e47 target/loongarch: move translate modules to tcg/
Introduce the target/loongarch/tcg directory. Its purpose is to hold the TCG
code that is selected by CONFIG_TCG.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20240102020200.3462097-2-gaosong@loongson.cn>
2024-01-06 10:18:52 +08:00
Song Gao
beb60920a1 target/loongarch/meson: move gdbstub.c to loongarch.ss
gdbstub.c is not specific to TCG and can be used by
other accelerators, such as the KVM accelerator.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20240102020200.3462097-1-gaosong@loongson.cn>
2024-01-06 10:15:09 +08:00
Xu Lu
5cb0e7abe1 target/riscv: Fix mcycle/minstret increment behavior
The mcycle/minstret counter's stop flag is mistakenly updated on a copy
on the stack. Thus the counter increments even when the CY/IR bit in the
mcountinhibit register is set. This commit corrects its behavior.

Fixes: 3780e33732 (target/riscv: Support mcycle/minstret write operation)
Signed-off-by: Xu Lu <luxu.kernel@bytedance.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-01-05 22:28:54 +03:00
Clément Chigot
a318da6b3f target/sparc: Simplify qemu_irq_ack
This is a simple cleanup: since env is passed to qemu_irq_ack, it can be
accessed from inside qemu_irq_ack.  Just drop this parameter.

Co-developed-by: Frederic Konrad <konrad.frederic@yahoo.fr>
Signed-off-by: Clément Chigot <chigot@adacore.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20240105102421.163554-7-chigot@adacore.com>
2024-01-05 16:20:15 +01:00
Gavin Shan
4b26aa9f3a target: Use generic cpu_model_from_type()
Use generic cpu_model_from_type() when the CPU model name needs to
be extracted from the CPU type name.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20231114235628.534334-23-gshan@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-01-05 16:20:14 +01:00
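(A rough sketch of what a generic cpu_model_from_type() plausibly does:
strip the per-target CPU type suffix from the QOM type name. This is an
assumption about its behaviour, not the actual implementation.)

    /* Sketch: "cortex-a57-arm-cpu" -> "cortex-a57". */
    char *cpu_model_from_type(const char *typename)
    {
        const char *suffix = "-" CPU_RESOLVING_TYPE;

        if (g_str_has_suffix(typename, suffix)) {
            return g_strndup(typename, strlen(typename) - strlen(suffix));
        }
        return g_strdup(typename);
    }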
Gavin Shan
f08f4c8ea4 target/xtensa: Use generic cpu_list()
Before it's applied:

[gshan@gshan q]$ ./build/qemu-system-xtensa -cpu ?
Available CPUs:
  test_mmuhifi_c3
  sample_controller
  lx106
  dsp3400
  de233_fpu
  de212
  dc233c
  dc232b

After it's applied:

[gshan@gshan q]$ ./build/qemu-system-xtensa -cpu ?
Available CPUs:
  dc232b
  dc233c
  de212
  de233_fpu
  dsp3400
  lx106
  sample_controller
  test_mmuhifi_c3

Signed-off-by: Gavin Shan <gshan@redhat.com>
Message-ID: <20231114235628.534334-22-gshan@redhat.com>
[PMD: Split patch in 2, only include the "Use generic cpu_list" change]
Message-ID: <51ffd060-b2f8-405c-83e1-a0663c0183f5@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-01-05 16:20:14 +01:00
Gavin Shan
40b807e26c target/tricore: Use generic cpu_list()
No changes in the output from the following command.

[gshan@gshan q]$ ./build/qemu-system-tricore -cpu ?
Available CPUs:
  tc1796
  tc1797
  tc27x
  tc37x

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20231114235628.534334-21-gshan@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-01-05 16:20:14 +01:00
Gavin Shan
08536d1175 target/sh4: Use generic cpu_list()
Before it's applied:

[gshan@gshan q]$ ./build/qemu-system-sh4 -cpu ?
sh7750r
sh7751r
sh7785

After it's applied:

[gshan@gshan q]$ ./build/qemu-system-sh4 -cpu ?
Available CPUs:
  sh7750r
  sh7751r
  sh7785

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20231114235628.534334-20-gshan@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-01-05 16:20:14 +01:00
Gavin Shan
c16de0d9fd target/rx: Use generic cpu_list()
Before it's applied:

[gshan@gshan q]$ ./build/qemu-system-rx -cpu ?
Available CPUs:
  rx62n-rx-cpu

After it's applied:

[gshan@gshan q]$ ./build/qemu-system-rx -cpu ?
Available CPUs:
  rx62n

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20231114235628.534334-19-gshan@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-01-05 16:20:14 +01:00
Gavin Shan
3144fbc942 target/riscv: Use generic cpu_list()
Before it's applied:

[gshan@gshan q]$ ./build/qemu-system-riscv64 -cpu ?
any
max
rv64
shakti-c
sifive-e51
sifive-u54
thead-c906
veyron-v1
x-rv128

After it's applied:

[gshan@gshan q]$ ./build/qemu-system-riscv64 -cpu ?
Available CPUs:
  any
  max
  rv64
  shakti-c
  sifive-e51
  sifive-u54
  thead-c906
  veyron-v1
  x-rv128

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20231114235628.534334-18-gshan@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-01-05 16:20:14 +01:00
Gavin Shan
72b381f133 target/openrisc: Use generic cpu_list()
Before it's applied:

[gshan@gshan q]$ ./build/qemu-or1k -cpu ?
Available CPUs:
  or1200
  any

After it's applied:

[gshan@gshan q]$ ./build/qemu-or1k -cpu ?
Available CPUs:
  any
  or1200

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20231114235628.534334-17-gshan@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-01-05 16:20:14 +01:00
Gavin Shan
31c5147010 target/mips: Use generic cpu_list()
Before it's applied:

[gshan@gshan q]$ ./build/qemu-system-mips64 -cpu ?
MIPS '4Kc'
MIPS '4Km'
MIPS '4KEcR1'
MIPS 'XBurstR1'
MIPS '4KEmR1'
MIPS '4KEc'
MIPS '4KEm'
MIPS '24Kc'
MIPS '24KEc'
MIPS '24Kf'
MIPS '34Kf'
MIPS '74Kf'
MIPS 'XBurstR2'
MIPS 'M14K'
MIPS 'M14Kc'
MIPS 'P5600'
MIPS 'mips32r6-generic'
MIPS 'I7200'
MIPS 'R4000'
MIPS 'VR5432'
MIPS '5Kc'
MIPS '5Kf'
MIPS '20Kc'
MIPS 'MIPS64R2-generic'
MIPS '5KEc'
MIPS '5KEf'
MIPS 'I6400'
MIPS 'I6500'
MIPS 'Loongson-2E'
MIPS 'Loongson-2F'
MIPS 'Loongson-3A1000'
MIPS 'Loongson-3A4000'
MIPS 'mips64dspr2'
MIPS 'Octeon68XX'

After it's applied:

[gshan@gshan q]$ ./build/qemu-system-mips64 -cpu ?
Available CPUs:
  20Kc
  24Kc
  24KEc
  24Kf
  34Kf
  4Kc
  4KEc
  4KEcR1
  4KEm
  4KEmR1
  4Km
  5Kc
  5KEc
  5KEf
  5Kf
  74Kf
  I6400
  I6500
  I7200
  Loongson-2E
  Loongson-2F
  Loongson-3A1000
  Loongson-3A4000
  M14K
  M14Kc
  mips32r6-generic
  mips64dspr2
  MIPS64R2-generic
  Octeon68XX
  P5600
  R4000
  VR5432
  XBurstR1
  XBurstR2

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20231114235628.534334-16-gshan@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-01-05 16:20:14 +01:00
Gavin Shan
261f406db9 target/m68k: Use generic cpu_list()
Before it's applied:

[gshan@gshan q]$ ./build/qemu-system-m68k -cpu ?
cfv4e
m5206
m5208
m68000
m68010
m68020
m68030
m68040
m68060
any

After it's applied:

[gshan@gshan q]$ ./build/qemu-system-m68k -cpu ?
Available CPUs:
  any
  cfv4e
  m5206
  m5208
  m68000
  m68010
  m68020
  m68030
  m68040
  m68060

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20231114235628.534334-15-gshan@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-01-05 16:20:14 +01:00
Gavin Shan
979bf44af8 target/loongarch: Use generic cpu_list()
Before it's applied:

[gshan@gshan q]$ ./build/qemu-system-loongarch64 -cpu ?
la132-loongarch-cpu
la464-loongarch-cpu
max-loongarch-cpu

After it's applied:

[gshan@gshan q]$ ./build/qemu-system-loongarch64 -cpu ?
Available CPUs:
  la132
  la464
  max

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20231114235628.534334-14-gshan@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-01-05 16:20:14 +01:00
Gavin Shan
d33fc716dc target/hppa: Use generic cpu_list()
No changes in the output from the following command.

[gshan@gshan q]$ ./build/qemu-system-hppa -cpu ?
Available CPUs:
  hppa
  hppa64

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20231114235628.534334-13-gshan@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-01-05 16:20:14 +01:00
Gavin Shan
ee0b8ced56 target/hexagon: Use generic cpu_list()
No changes in the output from the following command.

[gshan@gshan q]$ ./build/qemu-hexagon -cpu ?
Available CPUs:
  v67
  v68
  v69
  v71
  v73

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20231114235628.534334-12-gshan@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-01-05 16:20:14 +01:00
Gavin Shan
dd447f0439 target/cris: Use generic cpu_list()
Before it's applied:

[gshan@gshan q]$ ./build/qemu-system-cris -cpu ?
Available CPUs:
  crisv8
  crisv9
  crisv10
  crisv11
  crisv17
  crisv32

After it's applied:

[gshan@gshan q]$ ./build/qemu-system-cris -cpu ?
Available CPUs:
  crisv10
  crisv11
  crisv17
  crisv32
  crisv8
  crisv9

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20231114235628.534334-11-gshan@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-01-05 16:20:14 +01:00
Gavin Shan
7db8f7e895 target/avr: Use generic cpu_list()
Before it's applied:

[gshan@gshan q]$ ./build/qemu-system-avr -cpu ?
avr5-avr-cpu
avr51-avr-cpu
avr6-avr-cpu

After it's applied:

[gshan@gshan q]$ ./build/qemu-system-avr -cpu ?
Available CPUs:
  avr5
  avr51
  avr6

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20231114235628.534334-10-gshan@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-01-05 16:20:14 +01:00
Gavin Shan
b5154a2d61 target/arm: Use generic cpu_list()
No changes in the output from the following command before and
after it's applied.

[gshan@gshan q]$ ./build/qemu-system-aarch64 -cpu ?
Available CPUs:
  a64fx
  arm1026
  arm1136
  arm1136-r2
  arm1176
  arm11mpcore
  arm926
  arm946
  cortex-a15
  cortex-a35
  cortex-a53
  cortex-a55
  cortex-a57
  cortex-a7
  cortex-a710
  cortex-a72
  cortex-a76
  cortex-a8
  cortex-a9
  cortex-m0
  cortex-m3
  cortex-m33
  cortex-m4
  cortex-m55
  cortex-m7
  cortex-r5
  cortex-r52
  cortex-r5f
  max
  neoverse-n1
  neoverse-n2
  neoverse-v1
  pxa250
  pxa255
  pxa260
  pxa261
  pxa262
  pxa270-a0
  pxa270-a1
  pxa270
  pxa270-b0
  pxa270-b1
  pxa270-c0
  pxa270-c5
  sa1100
  sa1110
  ti925t

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20231114235628.534334-9-gshan@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-01-05 16:20:14 +01:00
Gavin Shan
51d49bd1db target/alpha: Use generic cpu_list()
Before it's applied:

[gshan@gshan q]$ ./build/qemu-system-alpha -cpu ?
Available CPUs:
  ev4-alpha-cpu
  ev5-alpha-cpu
  ev56-alpha-cpu
  ev6-alpha-cpu
  ev67-alpha-cpu
  ev68-alpha-cpu
  pca56-alpha-cpu

After it's applied:

[gshan@gshan q]$ ./build/qemu-system-alpha -cpu ?
Available CPUs:
  ev4
  ev5
  ev56
  ev6
  ev67
  ev68
  pca56

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20231114235628.534334-8-gshan@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-01-05 16:20:14 +01:00
Philippe Mathieu-Daudé
d5be19f514 cpu: Call object_class_dynamic_cast() once in cpu_class_by_name()
For all targets, the CPU class returned from CPUClass::class_by_name()
and object_class_dynamic_cast(oc, CPU_RESOLVING_TYPE) need to be
compatible. Let's apply the check once in cpu_class_by_name(),
instead of having the check in CPUClass::class_by_name() for each
individual target.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Gavin Shan <gshan@redhat.com>
Message-ID: <20231114235628.534334-4-gshan@redhat.com>
2024-01-05 16:20:14 +01:00
Gavin Shan
b0b8fa1814 target/hppa: Remove object_class_is_abstract()
Since commit 3a9d0d7b64 ("hw/cpu: Call object_class_is_abstract()
once in cpu_class_by_name()"), there is no need to check if @oc is
abstract because it has been covered by cpu_class_by_name().

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20231114235628.534334-3-gshan@redhat.com>
[PMD: Mention commit 3a9d0d7b64]
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-01-05 16:20:14 +01:00
Gavin Shan
9c115f68e2 target/alpha: Remove fallback to ev67 cpu class
The 'ev67' CPU class will be returned to match everything, which makes
no sense, as mentioned in the comments. Remove the logic that falls
back to the 'ev67' CPU class to match everything.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20231114235628.534334-2-gshan@redhat.com>
[PMD: Reword subject, replace 'any' -> 'ev67' on linux-user]
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-01-05 16:20:14 +01:00
Peter Maydell
05470c3979 * configure: use a native non-cross compiler for linux-user
* meson: cleanups
 * target/i386: miscellaneous cleanups and optimizations
 * target/i386: implement CMPccXADD
 * target/i386: the sgx_epc_get_section stub is reachable
 * esp: check for NULL result from scsi_device_find()
 -----BEGIN PGP SIGNATURE-----
 
 iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmWRImYUHHBib256aW5p
 QHJlZGhhdC5jb20ACgkQv/vSX3jHroNd7AgAgcyJGiMfUkXqhefplpm06RDXQIa8
 FuoJqPb21lO75DQKfaFRAc4xGLagjJROMJGHMm9HvMu2VlwvOydkQlfFRspENxQ/
 5XzGdb/X0A7HA/mwUfnMB1AZx0Vs32VI5IBSc6acc9fmgeZ84XQEoM3KBQHUik7X
 mSkE4eltR9gJ+4IaGo4voZtK+YoVD8nEcuqmnKihSPWizev0FsZ49aNMtaYa9qC/
 Xs3kiQd/zPibHDHJu0ulFsNZgxtUcvlLHTCf8gO4dHWxCFLXGubMush83McpRtNB
 Qoh6cTLH+PBXfrxMR3zmTZMNvo8Euls3s07Y8TkNP4vdIIE/kMeMDW1wJw==
 =mq30
 -----END PGP SIGNATURE-----

Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging

* configure: use a native non-cross compiler for linux-user
* meson: cleanups
* target/i386: miscellaneous cleanups and optimizations
* target/i386: implement CMPccXADD
* target/i386: the sgx_epc_get_section stub is reachable
* esp: check for NULL result from scsi_device_find()

# -----BEGIN PGP SIGNATURE-----
#
# iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmWRImYUHHBib256aW5p
# QHJlZGhhdC5jb20ACgkQv/vSX3jHroNd7AgAgcyJGiMfUkXqhefplpm06RDXQIa8
# FuoJqPb21lO75DQKfaFRAc4xGLagjJROMJGHMm9HvMu2VlwvOydkQlfFRspENxQ/
# 5XzGdb/X0A7HA/mwUfnMB1AZx0Vs32VI5IBSc6acc9fmgeZ84XQEoM3KBQHUik7X
# mSkE4eltR9gJ+4IaGo4voZtK+YoVD8nEcuqmnKihSPWizev0FsZ49aNMtaYa9qC/
# Xs3kiQd/zPibHDHJu0ulFsNZgxtUcvlLHTCf8gO4dHWxCFLXGubMush83McpRtNB
# Qoh6cTLH+PBXfrxMR3zmTZMNvo8Euls3s07Y8TkNP4vdIIE/kMeMDW1wJw==
# =mq30
# -----END PGP SIGNATURE-----
# gpg: Signature made Sun 31 Dec 2023 08:12:22 GMT
# gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg:                issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83

* tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (46 commits)
  meson.build: report graphics backends separately
  configure, meson: rename targetos to host_os
  meson: rename config_all
  meson: remove CONFIG_ALL
  meson: remove config_targetos
  meson: remove CONFIG_POSIX and CONFIG_WIN32 from config_targetos
  meson: remove OS definitions from config_targetos
  meson: always probe u2f and canokey if the option is enabled
  meson: move subdirs to "Collect sources" section
  meson: move config-host.h definitions together
  meson: move CFI detection code with other compiler flags
  meson: keep subprojects together
  meson: move accelerator dependency checks together
  meson: move option validation together
  meson: move program checks together
  meson: add more sections to main meson.build
  configure: unify again the case arms in probe_target_compiler
  configure: remove unnecessary subshell
  Makefile: clean qemu-iotests output
  meson: use version_compare() to compare version
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-01-04 19:55:20 +00:00
Paolo Bonzini
cfc1a889e5 meson: rename config_all
config_all now lists only accelerators; rename it to indicate its actual
content.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-31 09:11:28 +01:00
Paolo Bonzini
405c7c0708 target/i386: implement CMPccXADD
The main difficulty here is that a page fault when writing to the destination
must not overwrite the flags.  Therefore, the flags computation must be
inlined instead of using gen_jcc1*.

For simplicity, I am using an unconditional cmpxchg operation, that becomes
a NOP if the comparison fails.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:04:40 +01:00
Paolo Bonzini
e7bbb7cb71 target/i386: introduce flags writeback mechanism
ALU instructions can write to both memory and flags.  If the CC_SRC*
and CC_DST locations have been written already when a memory access
causes a fault, the value in CC_SRC* and CC_DST might be interpreted
with the wrong CC_OP (the one that is in effect before the instruction.

Besides just using the wrong result for the flags, something like
subtracting -1 can have disastrous effects if the current CC_OP is
CC_OP_EFLAGS: this is because QEMU does not expect bits outside the ALU
flags to be set in CC_SRC, and env->eflags can end up set to all-ones.
In the case of the attached testcase, this sets IOPL to 3 and would
cause an assertion failure if SUB is moved to the new decoder.

This mechanism is not really needed for BMI instructions, which can
only write to a register, but put it to use anyway for cleanliness.
In the case of BZHI, the code has to be modified slightly to ensure
that decode->cc_src is written, otherwise the new assertions trigger.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:04:30 +01:00
Paolo Bonzini
4b2baf4a55 target/i386: adjust decoding of J operand
gen_jcc() has been changed to accept a relative offset since the
new decoder was written.  Adjust the J operand, which is meant
to be used with jump instructions such as gen_jcc(), to not
include the program counter and to not truncate the result, as
both operations are now performed by common code.

The result is that J is now the same as the I operand.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:04:30 +01:00
Paolo Bonzini
d4f611711a target/i386: move operand load and writeback out of gen_cmovcc1
Similar to gen_setcc1, make gen_cmovcc1 receive TCGv.  This is more friendly
to simultaneous implementation in the old and the new decoder.

A small wart is that s->T0 of CMOV is currently the *second* argument (which
would ordinarily be in T1).  Therefore, the condition has to be inverted in
order to overwrite s->T0 with cpu_regs[reg] if the MOV is not performed.

This only applies to the old decoder, and this code will go away soon.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:04:15 +01:00
Paolo Bonzini
3497f1646f target/i386: prepare for implementation of STOS/SCAS in new decoder
Do not use gen_op, and pull the load from the accumulator into
disas_insn.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:03:59 +01:00
Paolo Bonzini
9a5922d6bd target/i386: do not use s->tmp0 for jumps on ECX ==/!= 0
Create a new temporary, to ease the register allocator's work.

Creation of the temporary is pushed into gen_ext_tl, which
also allows NULL as the first parameter now.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:03:55 +01:00
Paolo Bonzini
1ec46bf237 target/i386: do not use s->tmp4 for push
Just create a temporary for the occasion.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:03:53 +01:00
Paolo Bonzini
80e55f54ac target/i386: split eflags computation out of gen_compute_eflags
The new x86 decoder wants the gen_* functions to compute EFLAGS before
writeback, which can be an issue for instructions with a memory
destination such as ARPL or shifts.

Extract code to compute the EFLAGS without clobbering CC_SRC, in case
the memory write causes a fault.  The flags writeback mechanism will
take care of copying the result to CC_SRC.
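
A sketch of what the extracted helper might look like (names assumed; the
real function may differ):

    static void gen_eflags_to_reg_sketch(DisasContext *s, TCGv reg)
    {
        gen_update_cc_op(s);   /* make the dynamic cc_op visible to the helper */
        gen_helper_cc_compute_all(reg, cpu_cc_dst, cpu_cc_src, cpu_cc_src2,
                                  cpu_cc_op);
        /* Unlike gen_compute_eflags(), CC_SRC and CC_OP are left alone here;
         * the flags writeback mechanism copies the result to CC_SRC later. */
    }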

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:03:50 +01:00
Paolo Bonzini
c0099cd40e target/i386: do not clobber T0 on string operations
The new decoder would rather have the operand in T0 when expanding SCAS than
use R_EAX directly, as gen_scas currently does.  This makes SCAS more similar
to CMP and SUB, in that CC_DST = T0 - T1.
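
An illustrative fragment of the T0-based flag setup (not the actual diff;
ot stands for the operand size the decoder already computed):

    gen_op_mov_v_reg(s, ot, s->T0, R_EAX);     /* accumulator into T0          */
    tcg_gen_mov_tl(cpu_cc_src, s->T1);         /* CC_SRC = subtrahend          */
    tcg_gen_sub_tl(cpu_cc_dst, s->T0, s->T1);  /* CC_DST = T0 - T1, as CMP/SUB */
    set_cc_op(s, CC_OP_SUBB + ot);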

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:03:24 +01:00
Paolo Bonzini
24c0573bb0 target/i386: do not clobber A0 in POP translation
The new decoder likes to compute the address in A0 very early, so the
gen_lea_v_seg in gen_pop_T0 would clobber the address of the memory
operand.  Instead use T0 since it is already available and will be
overwritten immediately after.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:03:21 +01:00
Paolo Bonzini
a71e0b246a target/i386: do not decode string source/destination into decode->mem
decode->mem is only used if one operand has has_ea == true.  String
operations will not use decode->mem and will load A0 on their own, because
they are the only case of two memory operands in a single instruction.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:03:18 +01:00
Paolo Bonzini
8a36bbcf6c target/i386: add X86_SPECIALs for MOVSX and MOVZX
Usually the registers are just moved into s->T0 without much care for
their operand size.  However, in some cases we can get more efficient
code if the operand fetching logic and the emission function agree on how
the operand should be extended.

All the current uses are mostly demonstrative and only reduce the code
in the emission functions, because the instructions do not support
memory operands.  However the logic is generic and applies to several
more instructions such as MOVSXD (aka movslq), one-byte shift
instructions, multiplications, XLAT, and indirect calls/jumps.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:03:15 +01:00
Paolo Bonzini
5baf5641cc target/i386: rename zext0/zext2 and make them closer to the manual
X86_SPECIAL_ZExtOp0 and X86_SPECIAL_ZExtOp2 are poorly named; they are a hack
that is needed by scalar insertion and extraction instructions, and not really
related to zero extension: for PEXTR the zero extension is done by the generation
functions, for PINSR the high bits are not used at all and in fact are *not*
filled with zeroes when loaded into s->T1.

Rename the values to match the effect described in the manual, and explain
better in the comments.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:03:11 +01:00
Paolo Bonzini
6dd2afed55 target/i386: avoid trunc and ext for MULX and RORX
Use _tl operations for 32-bit operands on 32-bit targets, and only go
through trunc and extu ops for 64-bit targets.  While the trunc/ext
ops should be pretty much free after optimization, the optimizer also
does not like having the same temporary used in multiple EBBs.
Therefore it is better not to use tmpN* unless necessary.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:03:08 +01:00
Paolo Bonzini
b609db9477 target/i386: reimplement check for validity of LOCK prefix
The previous check erroneously allowed CMP to be modified with LOCK.
Instead, tag explicitly the instructions that do support LOCK.
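
A hedged sketch of the idea, with invented names (X86_LOCK_OK and the field
layout are placeholders, not the real decoder structures):

    if (s->prefix & PREFIX_LOCK) {
        /* LOCK is only legal on tagged instructions with a memory destination. */
        if (!(decode.e.flags & X86_LOCK_OK) || !decode.op[0].has_ea) {
            goto illegal_op;
        }
    }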

Acked-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:03:08 +01:00
Paolo Bonzini
8147df44da target/i386: document more deviations from the manual
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:03:05 +01:00
Paolo Bonzini
2455e9cf5a target/i386: clean up cpu_cc_compute_all
cpu_cc_compute_all() has an argument that is always equal to CC_OP for historical
reasons (dating back to commit a7812ae412, "TCG variable type checking.", 2008-11-17,
which added the argument to helper_cc_compute_all).  It does not make sense for the
argument to have any other value, so remove it and clean up some lines that
are no longer too long.
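
For illustration, a typical call site changes roughly like this (exact
locations vary across target/i386):

    /* before: the second argument was always the live CC_OP value */
    eflags = cpu_cc_compute_all(env, CC_OP);

    /* after: the op is taken from env itself */
    eflags = cpu_cc_compute_all(env);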

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:03:02 +01:00
Paolo Bonzini
8cc746525c target/i386: remove unnecessary truncations
gen_lea_v_seg (called by gen_add_A0_ds_seg) already zeroes any
bits of s->A0 beyond s->aflag.  It does so before summing the
segment base and, if not in 64-bit mode, also after summing it.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:02:58 +01:00
Paolo Bonzini
83280f6a62 target/i386: remove unnecessary arguments from raise_interrupt
is_int is always 1, and error_code is always zero.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:02:55 +01:00
Paolo Bonzini
1e7dde8008 target/i386: speedup JO/SETO after MUL or IMUL
OF is equal to the carry flag, so use the same CCPrepare.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:02:52 +01:00
Paolo Bonzini
6032627f07 target/i386: optimize computation of JL and JLE from flags
Take advantage of the fact that there can be no 1 bits between SF and OF.
If OF were immediately to the left of SF, you could add the SF bit to the
flags and get a carry out of that position only if SF was set.  The OF bit
of the sum would then be the XOR of OF itself, the carry (which equals SF)
and 0 (the OF bit of the addend): exactly OF ^ SF.

Because OF and SF are not adjacent, just place more 1 bits to the
left so that the carry propagates, which means summing CC_O - CC_S.
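
The arithmetic can be checked in isolation with the actual x86 flag
positions (CC_S = 0x80, CC_O = 0x800); this standalone program asserts that
bit 11 of flags + (CC_O - CC_S) equals SF ^ OF:

    #include <assert.h>
    #include <stdint.h>

    #define CC_S 0x080    /* SF, bit 7  */
    #define CC_O 0x800    /* OF, bit 11 */

    int main(void)
    {
        for (int sf = 0; sf <= 1; sf++) {
            for (int of = 0; of <= 1; of++) {
                /* Only ALU flags live in CC_SRC, so bits 8..10 start as 0. */
                uint32_t flags = (sf ? CC_S : 0) | (of ? CC_O : 0);
                uint32_t sum = flags + (CC_O - CC_S);   /* addend = 0x780 */
                assert(((sum >> 11) & 1) == (uint32_t)(sf ^ of));
            }
        }
        return 0;
    }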

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-12-29 22:02:48 +01:00
Richard Henderson
dd9729b302 target/sparc: Constify VMState in machine.c
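
For readers unfamiliar with the series: "constify" means the VMState
descriptions become const-qualified data, roughly like this made-up example
(not the contents of target/sparc/machine.c):

    static const VMStateDescription vmstate_example = {
        .name = "cpu/example",
        .version_id = 1,
        .minimum_version_id = 1,
        .fields = (const VMStateField[]) {
            VMSTATE_END_OF_LIST()
        },
    };
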
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20231221031652.119827-18-richard.henderson@linaro.org>
2023-12-29 11:17:30 +11:00
Richard Henderson
5c04ea96e4 target/s390x: Constify VMState in machine.c
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20231221031652.119827-17-richard.henderson@linaro.org>
2023-12-29 11:17:30 +11:00
Richard Henderson
85b57d3d54 target/riscv: Constify VMState in machine.c
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20231221031652.119827-16-richard.henderson@linaro.org>
2023-12-29 11:17:30 +11:00
Richard Henderson
caae239633 target/ppc: Constify VMState in machine.c
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20231221031652.119827-15-richard.henderson@linaro.org>
2023-12-29 11:17:30 +11:00
Richard Henderson
c9e763b010 target/openrisc: Constify VMState in machine.c
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20231221031652.119827-14-richard.henderson@linaro.org>
2023-12-29 11:17:30 +11:00
Richard Henderson
6db6de6506 target/mips: Constify VMState in machine.c
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20231221031652.119827-13-richard.henderson@linaro.org>
2023-12-29 11:17:30 +11:00
Richard Henderson
61d5442a9a target/microblaze: Constify VMState in machine.c
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20231221031652.119827-12-richard.henderson@linaro.org>
2023-12-29 11:17:30 +11:00