qemu/tcg/aarch64
Richard Henderson 238f43809a tcg: Widen CPUTLBEntry comparators to 64-bits
This makes CPUTLBEntry agnostic to the address size of the guest.
When 32-bit addresses are in effect, we can simply read the low
32 bits of the 64-bit field.  The same applies when we need to
update the field to set TLB_NOTDIRTY.

For TCG backends that could in theory be big-endian, but in
practice are not (arm, loongarch, riscv), use QEMU_BUILD_BUG_ON
to document and ensure this is not accidentally missed.

For s390x, which is always big-endian, use HOST_BIG_ENDIAN anyway,
to document the reason for the adjustment.

For sparc64 and ppc64, always perform a 64-bit load, and rely on
the following 32-bit comparison to ignore the high bits.

Rearrange mips and ppc if ladders for clarity.

Reviewed-by: Anton Johansson <anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-05 12:04:28 -07:00
tcg-target-con-set.h tcg/aarch64: Support 128-bit load/store 2023-05-30 09:51:11 -07:00
tcg-target-con-str.h tcg/aarch64: Simplify constraints on qemu_ld/st 2023-05-30 09:51:11 -07:00
tcg-target.c.inc tcg: Widen CPUTLBEntry comparators to 64-bits 2023-06-05 12:04:28 -07:00
tcg-target.h tcg: Remove TCG_TARGET_TLB_DISPLACEMENT_BITS 2023-05-30 09:51:51 -07:00
tcg-target.opc.h tcg/aarch64: Implement INDEX_op_rotl{i,v}_vec 2020-06-02 08:42:37 -07:00