Commit Graph

126 Commits

Author SHA1 Message Date
Richard Henderson
41e4c0a9ad tcg/riscv: Implement negsetcond_*
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-24 11:22:42 -07:00
Richard Henderson
3635502dd0 tcg: Introduce negsetcond opcodes
Introduce a new opcode for negative setcond.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-24 11:22:42 -07:00
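
In rough C (illustrative only; an unsigned less-than is used as the example condition), the difference from setcond is the shape of the result: 0/-1 instead of 0/1.

    #include <stdint.h>

    /* Illustrative only: setcond yields 0 or 1, negsetcond yields 0 or -1. */
    uint64_t setcond_ltu(uint64_t a, uint64_t b)    { return a < b ? 1 : 0; }
    uint64_t negsetcond_ltu(uint64_t a, uint64_t b) { return a < b ? -(uint64_t)1 : 0; }
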
Richard Henderson
13d885b0ad tcg: Unify TCG_TARGET_HAS_extr[lh]_i64_i32
Replace the separate defines with TCG_TARGET_HAS_extr_i64_i32,
so that the two parts of backend-specific type changing cannot
be out of sync.

Reported-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: <20230822175127.1173698-1-richard.henderson@linaro.org>
2023-08-24 11:22:42 -07:00
Richard Henderson
d46259c037 tcg: Split out tcg-target-reg-bits.h
Often, the only thing we need to know about the TCG host
is the register size.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-05 12:04:28 -07:00
Richard Henderson
d0a9bb5ecb tcg: Add tlb_fast_offset to TCGContext
Disconnect the layout of ArchCPU from TCG compilation.
Pass the relative offset of 'env' and 'neg.tlb.f' as a parameter.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-05 12:04:28 -07:00
Richard Henderson
238f43809a tcg: Widen CPUTLBEntry comparators to 64-bits
This makes CPUTLBEntry agnostic to the address size of the guest.
When 32-bit addresses are in effect, we can simply read the low
32 bits of the 64-bit field.  Similarly when we need to update
the field for setting TLB_NOTDIRTY.

For TCG backends that could in theory be big-endian, but in
practice are not (arm, loongarch, riscv), use QEMU_BUILD_BUG_ON
to document and ensure this is not accidentally missed.

For s390x, which is always big-endian, use HOST_BIG_ENDIAN anyway,
to document the reason for the adjustment.

For sparc64 and ppc64, always perform a 64-bit load, and rely on
the following 32-bit comparison to ignore the high bits.

Rearrange mips and ppc if ladders for clarity.

Reviewed-by: Anton Johansson <anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-05 12:04:28 -07:00
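
A minimal sketch of the endian adjustment described above (illustrative; the helper name is an assumption, and CPUTLBEntry/HOST_BIG_ENDIAN are taken as defined by QEMU, not redefined here):

    /* Illustrative only: the low 32 bits of a 64-bit comparator sit at
     * byte offset 0 on a little-endian host, but at offset 4 on a
     * big-endian host such as s390x. */
    static int cmp_offset(bool guest_32bit)
    {
        int off = offsetof(CPUTLBEntry, addr_read);
        if (HOST_BIG_ENDIAN && guest_32bit) {
            off += 4;   /* low half of the big-endian 64-bit field */
        }
        return off;
    }
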
Richard Henderson
8aefe1fb8a tcg/riscv: Remove TARGET_LONG_BITS, TCG_TYPE_TL
All uses replaced with TCGContext.addr_type.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-05 12:04:28 -07:00
Richard Henderson
194339461b tcg: Remove TCG_TARGET_TLB_DISPLACEMENT_BITS
The last use was removed by e77c89fb08.

Fixes: e77c89fb08 ("cputlb: Remove static tlb sizing")
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-30 09:51:51 -07:00
Richard Henderson
a30498fcea tcg/riscv: Support CTZ, CLZ from Zbb
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-25 15:29:36 +00:00
Richard Henderson
a18d783e64 tcg/riscv: Implement movcond
Implement with and without Zicond.  Without Zicond, we were letting
the middle-end expand to a 5-insn sequence; better to use a branch
over a single insn.

Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-25 15:29:36 +00:00
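
A hedged sketch of the Zicond form in C (czero.eqz rd, rs1, rs2 yields rs1 unless rs2 is zero; czero.nez yields rs1 unless rs2 is nonzero); the emitted instructions differ, but the branchless selection works like this:

    #include <stdint.h>

    /* Illustrative only: Zicond-style branchless select. */
    static uint64_t czero_eqz(uint64_t rs1, uint64_t rs2) { return rs2 != 0 ? rs1 : 0; }
    static uint64_t czero_nez(uint64_t rs1, uint64_t rs2) { return rs2 != 0 ? 0 : rs1; }

    static uint64_t movcond(uint64_t cond, uint64_t v1, uint64_t v2)
    {
        /* v1 when cond is nonzero, v2 otherwise */
        return czero_eqz(v1, cond) | czero_nez(v2, cond);
    }
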
Richard Henderson
f6453695f9 tcg/riscv: Improve setcond expansion
Split out a helper function, tcg_out_setcond_int, which does not
always produce the complete boolean result, but returns a set of
flags to do so.

Based on 21af161984, the same improvement for loongarch64.

Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-25 15:29:30 +00:00
Richard Henderson
0956ecda9f tcg/riscv: Support CPOP from Zbb
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-25 13:57:53 +00:00
Richard Henderson
7b4d527427 tcg/riscv: Support REV8 from Zbb
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-25 13:57:53 +00:00
Richard Henderson
19d016ad97 tcg/riscv: Support rotates from Zbb
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-25 13:57:53 +00:00
Richard Henderson
eda1515996 tcg/riscv: Use ADD.UW for guest address generation
The instruction is a combined zero-extend and add.
Use it for exactly that.

Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-25 13:57:52 +00:00
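
As a sketch of the point (illustrative C, not the emitted code): ADD.UW rd, rs1, rs2 computes rs2 + zext32(rs1), so zero-extending a 32-bit guest address and adding the host base become a single instruction.

    #include <stdint.h>

    /* Illustrative only: zero-extend-and-add, one ADD.UW on the host. */
    static uint64_t gen_host_addr(uint64_t base, uint64_t guest_addr)
    {
        return base + (uint32_t)guest_addr;
    }
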
Richard Henderson
d1c3f4e9ed tcg/riscv: Support ADD.UW, SEXT.B, SEXT.H, ZEXT.H from Zba+Zbb
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-25 13:57:52 +00:00
Richard Henderson
99f4ec6eab tcg/riscv: Support ANDN, ORN, XNOR from Zbb
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-25 13:57:52 +00:00
Richard Henderson
9e3e0bc6ac tcg/riscv: Probe for Zba, Zbb, Zicond extensions
Define a useful subset of the extensions.  Probe for them
via compiler pre-processor feature macros and SIGILL.

Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-25 13:57:52 +00:00
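
A rough sketch of the runtime half of such a probe, assuming the usual sigsetjmp pattern (function names and the choice of add.uw as the test instruction are illustrative, not the actual QEMU code):

    #include <setjmp.h>
    #include <signal.h>
    #include <stdbool.h>

    static sigjmp_buf probe_jmp;

    static void sigill_handler(int sig)
    {
        siglongjmp(probe_jmp, 1);
    }

    /* Illustrative only: execute one Zba instruction and see if it traps. */
    static bool probe_zba(void)
    {
        struct sigaction sa = { .sa_handler = sigill_handler };
        bool ok = false;

        sigaction(SIGILL, &sa, NULL);
        if (sigsetjmp(probe_jmp, 1) == 0) {
            /* add.uw x0, x0, x0 via .insn, so no -march=..._zba is needed */
            asm volatile(".insn r 0x3b, 0, 0x04, zero, zero, zero");
            ok = true;
        }
        return ok;
    }
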
Richard Henderson
aece72b76b tcg: Add page_bits and page_mask to TCGContext
Disconnect guest page size from TCG compilation.
While this could be done via exec/target_page.h, we want to cache
the value across multiple memory access operations, so we might
as well initialize this early.

The changes within tcg/ are entirely mechanical:

    sed -i s/TARGET_PAGE_BITS/s->page_bits/g
    sed -i s/TARGET_PAGE_MASK/s->page_mask/g

Reviewed-by: Anton Johansson <anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16 20:13:51 -07:00
Richard Henderson
fecccfcc54 tcg: Split INDEX_op_qemu_{ld,st}* for guest address size
For 32-bit hosts, we cannot simply rely on TCGContext.addr_bits,
as we need one or two host registers to represent the guest address.

Create the new opcodes and update all users.  Since we have not
yet eliminated TARGET_LONG_BITS, only one of the two opcodes will
ever be used, so we can get away with treating them the same in
the backends.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16 20:07:20 -07:00
Richard Henderson
37e523f04b tcg/riscv: Use atom_and_align_for_opc
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16 16:30:29 -07:00
Richard Henderson
12fde9bcdb tcg: Add INDEX_op_qemu_{ld,st}_i128
Add opcodes for backend support for 128-bit memory operations.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16 16:30:25 -07:00
Richard Henderson
7b88010719 tcg: Introduce tcg_target_has_memory_bswap
Replace the unparameterized TCG_TARGET_HAS_MEMORY_BSWAP macro
with a function with a memop argument.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16 15:21:39 -07:00
Richard Henderson
933b331b30 tcg/riscv: Support softmmu unaligned accesses
The system is required to emulate unaligned accesses, even if the
hardware does not support them.  The resulting trap may or may not
be more efficient than the qemu slow path.  There are Linux kernel
patches in flight to allow userspace to query hardware support;
we can re-evaluate whether to enable this by default after that.

In the meantime, softmmu now matches useronly, where we already
assumed that unaligned accesses are supported.

Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16 15:21:39 -07:00
Richard Henderson
9161e9ae2e tcg/riscv: Use full load/store helpers in user-only mode
Instead of using helper_unaligned_{ld,st}, use the full load/store helpers.
This will allow the fast path to increase alignment to implement atomicity
while not immediately raising an alignment exception.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16 15:21:39 -07:00
Richard Henderson
0cadc1eda1 tcg: Unify helper_{be,le}_{ld,st}*
With the current structure of cputlb.c, there is no difference
between the little-endian and big-endian entry points, aside
from the assert.  Unify the pairs of functions.

Hoist the qemu_{ld,st}_helpers arrays to tcg.c.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16 15:21:39 -07:00
Richard Henderson
f0f43534f7 tcg/riscv: Simplify constraints on qemu_ld/st
The softmmu tlb uses TCG_REG_TMP[0-2], not any of the normally available
registers.  Now that we handle overlap between inputs and helper arguments,
we can allow any allocatable reg.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-11 09:53:41 +01:00
Richard Henderson
61b6daafb4 tcg/riscv: Convert tcg_out_qemu_{ld,st}_slow_path
Use tcg_out_ld_helper_args, tcg_out_ld_helper_ret,
and tcg_out_st_helper_args.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-11 09:53:41 +01:00
Richard Henderson
001dddfe0e tcg/riscv: Introduce prepare_host_addr
Merge tcg_out_tlb_load, add_qemu_ldst_label, tcg_out_test_alignment,
and some code that lived in both tcg_out_qemu_ld and tcg_out_qemu_st
into one function that returns TCGReg and TCGLabelQemuLdst.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-11 09:53:41 +01:00
Richard Henderson
f7041977a6 tcg/riscv: Rationalize args to tcg_out_qemu_{ld,st}
Interpret the variable argument placement in the caller.  Pass data_type
instead of is64 -- there are several places where we already convert back
from bool to type.  Clean things up by using type throughout.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-05 17:21:03 +01:00
Richard Henderson
aeb6326ec5 tcg/riscv: Require TCG_TARGET_REG_BITS == 64
The port currently does not support "oversize" guests, which
means riscv32 can only target 32-bit guests.  We will soon be
building TCG once for all guests.  This implies that we can
only support riscv64.

Since all Linux distributions target riscv64 not riscv32,
this is not much of a restriction and simplifies the code.

The brcond2 and setcond2 opcodes are exclusive to 32-bit hosts,
so we can and should remove the stubs.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-05 17:21:03 +01:00
Richard Henderson
3ea9be3340 tcg/riscv: Conditionalize tcg_out_exts_i32_i64
Since TCG_TYPE_I32 values are kept sign-extended in registers, via "w"
instructions, we don't need to extend if the register matches.
This is already relied upon by comparisons.

Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 08:46:45 +01:00
Richard Henderson
767c250310 tcg: Introduce tcg_out_xchg
We will want a backend interface for register swapping.
This is only properly defined for x86; all others get a
stub version that always indicates failure.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 08:46:45 +01:00
Richard Henderson
b3dfd5fc18 tcg: Introduce tcg_out_movext
This is common code in most qemu_{ld,st} slow paths, extending the
input value for the store helper data argument or extending the
return value from the load helper.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 08:46:45 +01:00
Richard Henderson
b8b94ac675 tcg: Split out tcg_out_extrl_i64_i32
We will need a backend interface for type truncation.  For those backends
that did not enable TCG_TARGET_HAS_extrl_i64_i32, use tcg_out_mov.
Use it in tcg_reg_alloc_op in the meantime.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 08:46:45 +01:00
Richard Henderson
b9bfe000f9 tcg: Split out tcg_out_extu_i32_i64
We will need a backend interface for type extension with zero.
Use it in tcg_reg_alloc_op in the meantime.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 08:46:45 +01:00
Richard Henderson
9c6aa274a4 tcg: Split out tcg_out_exts_i32_i64
We will need a backend interface for type extension with sign.
Use it in tcg_reg_alloc_op in the meantime.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 08:24:07 +01:00
Richard Henderson
9ecf5f61b8 tcg: Split out tcg_out_ext32u
We will need a backend interface for performing 32-bit zero-extend.
Use it in tcg_reg_alloc_op in the meantime.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 08:23:59 +01:00
Richard Henderson
52bf3398c3 tcg: Split out tcg_out_ext32s
We will need a backend interface for performing 32-bit sign-extend.
Use it in tcg_reg_alloc_op in the meantime.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 08:23:49 +01:00
Richard Henderson
379afdff47 tcg: Split out tcg_out_ext16u
We will need a backend interface for performing 16-bit zero-extend.
Use it in tcg_reg_alloc_op in the meantime.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 08:21:30 +01:00
Richard Henderson
753e42eada tcg: Split out tcg_out_ext16s
We will need a backend interface for performing 16-bit sign-extend.
Use it in tcg_reg_alloc_op in the meantime.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 08:21:19 +01:00
Richard Henderson
d0e66c897f tcg: Split out tcg_out_ext8u
We will need a backend interface for performing 8-bit zero-extend.
Use it in tcg_reg_alloc_op in the meantime.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 08:18:04 +01:00
Richard Henderson
678155b2c5 tcg: Split out tcg_out_ext8s
We will need a backend interface for performing 8-bit sign-extend.
Use it in tcg_reg_alloc_op in the meantime.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 08:17:49 +01:00
Richard Henderson
5427a9a760 tcg: Add TCG_TARGET_CALL_{RET,ARG}_I128
Fill in the parameters for the host ABI for Int128 for
those backends which require no extra modification.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-02-04 06:19:42 -10:00
Richard Henderson
5e3d0c199f tcg: Introduce tcg_target_call_oarg_reg
Replace the flat array tcg_target_call_oarg_regs[] with
a function call including the TCGCallReturnKind.

Extend the set of registers for ARM to r0-r3 to match the ABI:
https://github.com/ARM-software/abi-aa/blob/main/aapcs32/aapcs32.rst#result-return

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-02-04 06:19:42 -10:00
Richard Henderson
6a6d772e30 tcg: Introduce tcg_out_addi_ptr
Implement the function for arm, i386, and s390x, which will use it.
Add stubs for all other backends.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-02-04 06:19:42 -10:00
Richard Henderson
9d9db41373 tcg/riscv: Use tcg_pcrel_diff in tcg_out_ldst
We failed to update this with the w^x split, so it misses the fact
that true pc-relative offsets are usually small.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230117230415.354239-1-richard.henderson@linaro.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2023-01-20 10:14:14 +10:00
Richard Henderson
493c9b19a7 tcg/riscv: Implement direct branch for goto_tb
Now that tcg can handle direct and indirect goto_tb simultaneously,
we can optimistically leave space for a direct branch and fall back
to loading the pointer from the TB for an indirect branch.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-01-17 22:36:17 +00:00
Richard Henderson
9ae958e4d7 tcg/riscv: Introduce OPC_NOP
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-01-17 10:44:23 -10:00
Richard Henderson
2fd2e78d1b tcg: Remove TCG_TARGET_HAS_direct_jump
We now have the option to generate direct or indirect
goto_tb depending on the dynamic displacement, thus
the define is no longer necessary or completely accurate.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-01-17 10:25:49 -10:00