Commit Graph

400 Commits

Richard Henderson
95bf306e3a tcg/i386: Implement negsetcond_*
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-24 11:22:42 -07:00
Richard Henderson
e91f015b62 tcg/i386: Use shift in tcg_out_setcond
For LT/GE vs zero, shift down the sign bit.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-24 11:22:42 -07:00
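
As a minimal illustration of the identity this relies on (plain C, not the backend code): for a signed comparison against zero, the sign bit already is the 0/1 result for LT, and its complement gives GE.

    #include <assert.h>
    #include <stdint.h>

    /* Illustrative only: the arithmetic identity used by the commit above.
     * For LT vs zero the result is the sign bit; for GE vs zero it is the
     * inverted sign bit.  (For 64-bit values the shift count is 63.) */
    int main(void)
    {
        for (int32_t x = -3; x <= 3; x++) {
            uint32_t lt0 = (uint32_t)x >> 31;    /* shift the sign bit down */
            uint32_t ge0 = (uint32_t)~x >> 31;   /* complement, then shift  */
            assert(lt0 == (uint32_t)(x < 0));
            assert(ge0 == (uint32_t)(x >= 0));
        }
        return 0;
    }
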
Richard Henderson
96658acafd tcg/i386: Clear dest first in tcg_out_setcond if possible
Using XOR first is both smaller and more efficient,
though it cannot be applied if it would clobber an input.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-24 11:22:42 -07:00
Richard Henderson
6950f68b62 tcg/i386: Use CMP+SBB in tcg_out_setcond
Use the carry bit to optimize some forms of setcond.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-24 11:22:42 -07:00
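
A hedged sketch of the carry-flag identity behind this (illustrative C, not the emitted code): comparing a against b sets the borrow flag exactly when a < b unsigned, and SBB of a register with itself materializes that flag as 0 or -1.

    #include <assert.h>
    #include <stdint.h>

    /* Illustrative only: what CMP + SBB reg,reg computes.  Comparing a
     * against b (computing a - b) sets the carry/borrow flag iff a < b
     * unsigned; SBB of a register with itself then yields 0 - 0 - CF,
     * i.e. 0 or -1. */
    int main(void)
    {
        uint32_t tests[][2] = { {1, 2}, {2, 1}, {7, 7}, {0, UINT32_MAX} };
        for (unsigned i = 0; i < sizeof(tests) / sizeof(tests[0]); i++) {
            uint32_t a = tests[i][0], b = tests[i][1];
            uint32_t borrow = a < b;      /* CF after the compare */
            uint32_t sbb = 0u - borrow;   /* sbb r, r             */
            assert(sbb == (a < b ? UINT32_MAX : 0));
        }
        return 0;
    }
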
Richard Henderson
78ddf0dc75 tcg/i386: Merge tcg_out_movcond{32,64}
Pass a rexw parameter instead of duplicating the functions.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-24 11:22:42 -07:00
Richard Henderson
7ba99a1c76 tcg/i386: Merge tcg_out_setcond{32,64}
Pass a rexw parameter instead of duplicating the functions.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-24 11:22:42 -07:00
Richard Henderson
c359ce756d tcg/i386: Merge tcg_out_brcond{32,64}
Pass a rexw parameter instead of duplicating the functions.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-24 11:22:42 -07:00
Richard Henderson
3635502dd0 tcg: Introduce negsetcond opcodes
Introduce a new opcode for negative setcond.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-24 11:22:42 -07:00
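
A small sketch of the intended semantics, assuming the usual setcond/negsetcond relationship (negsetcond yields all-ones rather than one when the condition holds); illustrative C, not TCG code.

    #include <assert.h>
    #include <stdint.h>

    /* Sketch of the intended semantics: where setcond produces 0 or 1,
     * negsetcond produces 0 or -1 (all bits set), i.e. the negation of
     * the setcond result. */
    int main(void)
    {
        int32_t a = 3, b = 5;
        uint32_t setcond = (a < b);                 /* 0 or 1  */
        uint32_t negsetcond = -(uint32_t)(a < b);   /* 0 or ~0 */
        assert(negsetcond == (uint32_t)-(int32_t)setcond);
        return 0;
    }
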
Richard Henderson
13d885b0ad tcg: Unify TCG_TARGET_HAS_extr[lh]_i64_i32
Replace the separate defines with TCG_TARGET_HAS_extr_i64_i32,
so that the two parts of backend-specific type changing cannot
be out of sync.

Reported-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: <20230822175127.1173698-1-richard.henderson@linaro.org>
2023-08-24 11:22:42 -07:00
Richard Henderson
73f97f0aa3 tcg/i386: Allow immediate as input to deposit_*
We can use MOVB and MOVW with an immediate just as easily
as with a register input.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-24 11:22:42 -07:00
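
For illustration, a deposit of an 8- or 16-bit field at bit offset 0 is just a partial-register write, which is why an immediate source serves as well as a register one. The helper below (deposit_field) is a hypothetical stand-in for the generic deposit operation, not QEMU's deposit32.

    #include <assert.h>
    #include <stdint.h>

    /* Illustrative only: deposit(dst, val, pos=0, len=8) is the same as
     * overwriting the low byte, which MOVB can do with either a register
     * or an immediate source; likewise MOVW for a 16-bit field. */
    static uint32_t deposit_field(uint32_t dst, uint32_t val, int pos, int len)
    {
        uint32_t mask = (len == 32 ? ~0u : ((1u << len) - 1)) << pos;
        return (dst & ~mask) | ((val << pos) & mask);
    }

    int main(void)
    {
        uint32_t dst = 0xAABBCCDD;
        assert(deposit_field(dst, 0x42, 0, 8) == 0xAABBCC42);    /* movb imm  */
        assert(deposit_field(dst, 0x1234, 0, 16) == 0xAABB1234); /* movw imm  */
        return 0;
    }
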
Richard Henderson
36df88c040 tcg/i386: Drop BYTEH deposits for 64-bit
It is more useful to allow low-part deposits into all registers
than to restrict allocation for high-byte deposits.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-24 11:22:42 -07:00
Richard Henderson
d3b41127c2 tcg/i386: Output %gs prefix in tcg_out_vex_opc
Missing the segment prefix means that user-only mode fails
to add guest_base for some 128-bit loads/stores.

Fixes: 098d0fc10d ("tcg/i386: Support 128-bit load/store")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1763
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-12 08:51:12 -07:00
Ilya Leoshkevich
22d2e5351a tcg/{i386, s390x}: Add earlyclobber to the op_add2's first output
i386 and s390x implementations of op_add2 require an earlyclobber,
which is currently missing. This breaks VCKSM in s390x guests. E.g., on
x86_64 the following op:

    add2_i32 tmp2,tmp3,tmp2,tmp3,tmp3,tmp2   dead: 0 2 3 4 5  pref=none,0xffff

is translated to:

    addl     %ebx, %r12d
    adcl     %r12d, %ebx

Introduce a new C_N1_O1_I4 constraint, and make sure that earlyclobber
of aliased outputs is honored.

Cc: qemu-stable@nongnu.org
Fixes: 82790a8709 ("tcg: Add markup for output requires new register")
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230719221310.1968845-7-iii@linux.ibm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-07-23 17:58:19 +01:00
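
A minimal C sketch of the hazard described above, using the operand names from the op (illustration only, not QEMU code): the low output shares a register with the second operand's high input, so computing the low half first destroys a value the high half still needs.

    #include <stdint.h>

    /* Illustration only.  For add2_i32 tmp2,tmp3,tmp2,tmp3,tmp3,tmp2 the
     * outputs are {lo = tmp2, hi = tmp3} and the inputs alias them:
     * al = tmp2, ah = tmp3, bl = tmp3, bh = tmp2. */
    void add2_lo_first(uint32_t *tmp2, uint32_t *tmp3)
    {
        uint32_t al = *tmp2, ah = *tmp3;
        uint32_t bl = *tmp3, bh = *tmp2;
        (void)bh;

        *tmp2 = al + bl;              /* addl %ebx, %r12d -- overwrites bh */
        uint32_t carry = *tmp2 < al;
        *tmp3 = ah + *tmp2 + carry;   /* adcl %r12d, %ebx -- should be
                                         ah + bh + carry, but bh's register
                                         now holds the low result */
    }

With earlyclobber honored (the new C_N1_O1_I4 constraint), the low result is written to a fresh register, so the second addition still sees the original high input.
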
Richard Henderson
d46259c037 tcg: Split out tcg-target-reg-bits.h
Often, the only thing we need to know about the TCG host
is the register size.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-05 12:04:28 -07:00
Richard Henderson
d0a9bb5ecb tcg: Add tlb_fast_offset to TCGContext
Disconnect the layout of ArchCPU from TCG compilation.
Pass the relative offset of 'env' and 'neg.tlb.f' as a parameter.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-06-05 12:04:28 -07:00
Richard Henderson
194339461b tcg: Remove TCG_TARGET_TLB_DISPLACEMENT_BITS
The last use was removed by e77c89fb08.

Fixes: e77c89fb08 ("cputlb: Remove static tlb sizing")
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-30 09:51:51 -07:00
Richard Henderson
098d0fc10d tcg/i386: Support 128-bit load/store
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-30 09:51:11 -07:00
Richard Henderson
dbedadbaad tcg/i386: Use host/cpuinfo.h
Use the CPUINFO_* bits instead of the individual boolean
variables that we had been using.  Remove all of the init
code that was moved over to cpuinfo-i386.c.

Note that have_avx512* check both AVX512{F,VL}, as we had
previously done during tcg_target_init.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-23 16:49:33 -07:00
Richard Henderson
a66efde188 tcg: Add tlb_dyn_max_bits to TCGContext
Disconnect guest tlb parameters from TCG compilation.

Reviewed-by: Anton Johansson <anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16 20:13:51 -07:00
Richard Henderson
aece72b76b tcg: Add page_bits and page_mask to TCGContext
Disconnect guest page size from TCG compilation.
While this could be done via exec/target_page.h, we want to cache
the value across multiple memory access operations, so we might
as well initialize this early.

The changes within tcg/ are entirely mechanical:

    sed -i s/TARGET_PAGE_BITS/s->page_bits/g
    sed -i s/TARGET_PAGE_MASK/s->page_mask/g

Reviewed-by: Anton Johansson <anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16 20:13:51 -07:00
Richard Henderson
63f4da91f9 tcg/i386: Remove TARGET_LONG_BITS, TCG_TYPE_TL
All uses can be inferred from the INDEX_op_qemu_*_a{32,64}_* opcode
being used.  Add a field to TCGLabelQemuLdst to record the usage.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16 20:13:51 -07:00
Richard Henderson
c60ad6e3b9 tcg/i386: Adjust type of tlb_mask
Because of its use on tgen_arithi, this value must be a signed
32-bit quantity, as that is what may be encoded in the insn.
The truncation of the value to unsigned for 32-bit guests is
done through the REX bit, as selected by 'trexw'.

Removes the only uses of target_ulong from this tcg backend.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16 20:13:51 -07:00
Richard Henderson
b2485530d8 tcg/i386: Conditionalize tcg_out_extu_i32_i64
Since TCG_TYPE_I32 values are kept zero-extended in registers, via
omission of the REXW bit, we need not extend if the register matches.
This is already relied upon by qemu_{ld,st}.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16 20:13:51 -07:00
Richard Henderson
7a9ccb869c tcg/i386: Always enable TCG_TARGET_HAS_extr[lh]_i64_i32
Keep all 32-bit values zero extended in the register, not solely when
addresses are 32 bits.  This eliminates a dependency on TARGET_LONG_BITS.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16 20:13:51 -07:00
Richard Henderson
fecccfcc54 tcg: Split INDEX_op_qemu_{ld,st}* for guest address size
For 32-bit hosts, we cannot simply rely on TCGContext.addr_bits,
as we need one or two host registers to represent the guest address.

Create the new opcodes and update all users.  Since we have not
yet eliminated TARGET_LONG_BITS, only one of the two opcodes will
ever be used, so we can get away with treating them the same in
the backends.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16 20:07:20 -07:00
Richard Henderson
1c5322d90c tcg/i386: Use atom_and_align_for_opc
No change to the ultimate load/store routines yet, so some atomicity
conditions are not yet honored, but this plumbs the alignment change
through the relevant functions.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16 16:30:29 -07:00
Richard Henderson
12fde9bcdb tcg: Add INDEX_op_qemu_{ld,st}_i128
Add opcodes for backend support for 128-bit memory operations.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16 16:30:25 -07:00
Richard Henderson
7b88010719 tcg: Introduce tcg_target_has_memory_bswap
Replace the unparameterized TCG_TARGET_HAS_MEMORY_BSWAP macro
with a function with a memop argument.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16 15:21:39 -07:00
Richard Henderson
30cc7a7e91 tcg/i386: Use full load/store helpers in user-only mode
Instead of using helper_unaligned_{ld,st}, use the full load/store helpers.
This will allow the fast path to increase alignment to implement atomicity
while not immediately raising an alignment exception.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16 15:21:39 -07:00
Richard Henderson
6d3f2e3c64 tcg/i386: Add have_atomic16
Notice when Intel or AMD have guaranteed that vmovdqa is atomic.
The new variable will also be used in generated code.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16 15:21:39 -07:00
Richard Henderson
0cadc1eda1 tcg: Unify helper_{be,le}_{ld,st}*
With the current structure of cputlb.c, there is no difference
between the little-endian and big-endian entry points, aside
from the assert.  Unify the pairs of functions.

Hoist the qemu_{ld,st}_helpers arrays to tcg.c.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16 15:21:39 -07:00
Richard Henderson
988998503b tcg/i386: Set P_REXW in tcg_out_addi_ptr
The REXW bit must be set to produce a 64-bit pointer result; the
bit is disabled in 32-bit mode, so we can do this unconditionally.

Fixes: 7d9e1ee424 ("tcg/i386: Adjust assert in tcg_out_addi_ptr")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1592
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1642
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16 15:21:38 -07:00
Richard Henderson
0036e54e7a tcg/i386: Convert tcg_out_qemu_st_slow_path
Use tcg_out_st_helper_args.  This eliminates the use of a tail call to
the store helper.  This may or may not be an improvement, depending on
the call/return branch prediction of the host microarchitecture.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-11 09:53:41 +01:00
Richard Henderson
da8ab70ad1 tcg/i386: Convert tcg_out_qemu_ld_slow_path
Use tcg_out_ld_helper_args and tcg_out_ld_helper_ret.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-11 09:53:41 +01:00
Richard Henderson
1fac4648fe tcg/i386: Use indexed addressing for softmmu fast path
Since tcg_out_{ld,st}_helper_args, the slow path no longer requires
the address argument to be set up by the tlb load sequence.  Use a
plain load for the addend and indexed addressing with the original
input address register.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-11 09:53:41 +01:00
Richard Henderson
530074c6c1 tcg/i386: Introduce prepare_host_addr
Merge tcg_out_tlb_load, add_qemu_ldst_label,
tcg_out_test_alignment, and some code that lived in both
tcg_out_qemu_ld and tcg_out_qemu_st into one function
that returns HostAddress and TCGLabelQemuLdst structures.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-11 09:53:41 +01:00
Richard Henderson
a48f1c7415 tcg/i386: Introduce tcg_out_testi
Split out a helper for choosing testb vs testl.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-05 17:10:59 +01:00
Richard Henderson
3c2c35e23e tcg/i386: Drop r0+r1 local variables from tcg_out_tlb_load
Use TCG_REG_L[01] constants directly.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-05 17:10:59 +01:00
Richard Henderson
61713c29a9 tcg/i386: Introduce HostAddress
Collect the 4 potential parts of the host address into a struct.
Reorg tcg_out_qemu_{ld,st}_direct to use it.
Reorg guest_base handling to use it.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-05 17:10:59 +01:00
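
A hedged sketch of what such a struct could look like; the field names and types below are assumptions drawn from the description (base, index, displacement, segment), not the actual QEMU definition.

    #include <stdint.h>

    /* Assumed field names, for illustration only: the four components of
     * an x86 addressing mode that the direct load/store paths need. */
    typedef struct HostAddressSketch {
        int base;        /* base register, or -1 for none   */
        int index;       /* index register, or -1 for none  */
        intptr_t ofs;    /* displacement                    */
        int seg;         /* segment prefix (e.g. %gs), or 0 */
    } HostAddressSketch;
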
Richard Henderson
3174941fe0 tcg/i386: Generalize multi-part load overlap test
Test for both base and index; use datahi as a temporary, overwritten
by the final load.  Always perform the loads in ascending order, so
that any (user-only) fault sees the correct address.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-05 17:10:59 +01:00
Richard Henderson
bf12e2240d tcg/i386: Rationalize args to tcg_out_qemu_{ld,st}
Interpret the variable argument placement in the caller.  Pass data_type
instead of is64 -- there are several places where we already convert back
from bool to type.  Clean things up by using type throughout.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-05 17:10:59 +01:00
Richard Henderson
129f1f9ee7 tcg: Introduce tcg_out_movext2
This is common code in most qemu_{ld,st} slow paths, moving two
registers when there may be overlap between sources and destinations.
At present, this is only used by 32-bit hosts for 64-bit data,
but will shortly be used for more than that.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-02 13:05:45 -07:00
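
The underlying problem is a two-element parallel move. Below is a generic, hedged sketch of the overlap handling this implies, with plain integers standing in for registers (not the QEMU implementation): if the first destination is the second source, do the moves in the other order, and fall back to a scratch when both orders conflict.

    #include <assert.h>

    /* Illustration only: move src1->dst1 and src2->dst2 "simultaneously"
     * when destinations may overlap sources.  regs[] stands in for the
     * register file; indices stand in for register numbers. */
    static void movext2_sketch(int *regs, int dst1, int src1,
                               int dst2, int src2, int scratch)
    {
        if (dst1 != src2) {
            regs[dst1] = regs[src1];      /* dst1 is not a pending source */
            regs[dst2] = regs[src2];
        } else if (dst2 != src1) {
            regs[dst2] = regs[src2];      /* do the moves in the other order */
            regs[dst1] = regs[src1];
        } else {                          /* dst1==src2 and dst2==src1: a swap */
            regs[scratch] = regs[src1];
            regs[dst2] = regs[src2];
            regs[dst1] = regs[scratch];
        }
    }

    int main(void)
    {
        int regs[4] = { 10, 20, 30, 0 };
        movext2_sketch(regs, 0, 1, 1, 0, 3);  /* swap r0 and r1 via scratch r3 */
        assert(regs[0] == 20 && regs[1] == 10);
        return 0;
    }
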
Richard Henderson
767c250310 tcg: Introduce tcg_out_xchg
We will want a backend interface for register swapping.
This is only properly defined for x86; all others get a
stub version that always indicates failure.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 08:46:45 +01:00
Richard Henderson
b3dfd5fc18 tcg: Introduce tcg_out_movext
This is common code in most qemu_{ld,st} slow paths, extending the
input value for the store helper data argument or extending the
return value from the load helper.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 08:46:45 +01:00
Richard Henderson
b8b94ac675 tcg: Split out tcg_out_extrl_i64_i32
We will need a backend interface for type truncation.  For those backends
that did not enable TCG_TARGET_HAS_extrl_i64_i32, use tcg_out_mov.
Use it in tcg_reg_alloc_op in the meantime.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 08:46:45 +01:00
Richard Henderson
b9bfe000f9 tcg: Split out tcg_out_extu_i32_i64
We will need a backend interface for type extension with zero.
Use it in tcg_reg_alloc_op in the meantime.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 08:46:45 +01:00
Richard Henderson
9c6aa274a4 tcg: Split out tcg_out_exts_i32_i64
We will need a backend interface for type extension with sign.
Use it in tcg_reg_alloc_op in the meantime.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 08:24:07 +01:00
Richard Henderson
9ecf5f61b8 tcg: Split out tcg_out_ext32u
We will need a backend interface for performing 32-bit zero-extend.
Use it in tcg_reg_alloc_op in the meantime.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 08:23:59 +01:00
Richard Henderson
52bf3398c3 tcg: Split out tcg_out_ext32s
We will need a backend interface for performing 32-bit sign-extend.
Use it in tcg_reg_alloc_op in the meantime.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 08:23:49 +01:00
Richard Henderson
379afdff47 tcg: Split out tcg_out_ext16u
We will need a backend interface for performing 16-bit zero-extend.
Use it in tcg_reg_alloc_op in the meantime.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23 08:21:30 +01:00