Commit Graph

2505 Commits

Peter Maydell
ad768e6f2a include: Move qemu_[id]cache_* declarations to new qemu/cacheinfo.h
The qemu_icache_linesize, qemu_icache_linesize_log,
qemu_dcache_linesize, and qemu_dcache_linesize_log variables are
used in only a few files.  Move them out of osdep.h to a new
qemu/cacheinfo.h, and document them.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220208200856.3558249-5-peter.maydell@linaro.org
2022-02-21 13:30:20 +00:00
Peter Maydell
f2241d16ea include: Move qemu_mprotect_*() to new qemu/mprotect.h
The qemu_mprotect_*() family of functions are used in very few files;
move them from osdep.h to a new qemu/mprotect.h.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220208200856.3558249-3-peter.maydell@linaro.org
2022-02-21 13:30:20 +00:00
Peter Maydell
b85ea5fa2f include: Move qemu_madvise() and related #defines to new qemu/madvise.h
The function qemu_madvise() and the QEMU_MADV_* constants associated
with it are used in only 10 files.  Move them out of osdep.h to a new
qemu/madvise.h header that is included where it is needed.
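
A minimal usage sketch of the new header (illustrative only, not taken
from the patch); qemu_madvise() and QEMU_MADV_DONTNEED keep their
existing osdep.h semantics:

    #include "qemu/osdep.h"    /* first include, as usual in QEMU .c files */
    #include "qemu/madvise.h"

    /* Hypothetical caller: hint that a buffer's pages are no longer needed.
     * qemu_madvise() is best-effort, so the return value may be ignored. */
    static void discard_buffer(void *buf, size_t len)
    {
        qemu_madvise(buf, len, QEMU_MADV_DONTNEED);
    }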

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220208200856.3558249-2-peter.maydell@linaro.org
2022-02-21 13:30:20 +00:00
Peter Maydell
50a75ff680 Fix safe_syscall_base for sparc64.
Fix host signal handling for sparc64-linux.
 Speedups for jump cache and work list probing.
 Fix for exception replays.
 Raise guest SIGBUS for user-only misaligned accesses.
 -----BEGIN PGP SIGNATURE-----
 
 iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAmIFu3QdHHJpY2hhcmQu
 aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV9tHwf+KFk9Pa9M+vhnHTZJ
 ZvIRs9BaSzoDYqxLlHSKmXN3w3G5PbIcbHVHXTty2o28bT0jk05T9zQn3TzMfcbl
 O+Yx8rygUJmbzlEQ+GaSI69pppFj8ahlS/ylfwd5MABZun2mawexEU9sqXqGCKR9
 kJY8IpkZ6vqEDONcS1ZMQ+HFsNvw6LYBd567SY8g9ZsyPLWtQSqwdcuPqAJDFWCv
 zNe6b07IRoFVOsbtQix9Dl/ntMxk5jto+UvdEVuW2FJOeRZJRshLWF5cGHNavSgQ
 Culb5ALOzoxSlcZ4xfVfWtBGoFr/BNu9D0omTSmbosvXAd4HmPVxD6kV17wXV3+g
 G/cvew==
 =D+x7
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/rth-gitlab/tags/pull-tcg-20220211' into staging

Fix safe_syscall_base for sparc64.
Fix host signal handling for sparc64-linux.
Speedups for jump cache and work list probing.
Fix for exception replays.
Raise guest SIGBUS for user-only misaligned accesses.

# gpg: Signature made Fri 11 Feb 2022 01:27:16 GMT
# gpg:                using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg:                issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A  05C0 64DF 38E8 AF7E 215F

* remotes/rth-gitlab/tags/pull-tcg-20220211: (34 commits)
  tests/tcg/multiarch: Add sigbus.c
  tcg/sparc: Support unaligned access for user-only
  tcg/sparc: Add tcg_out_jmpl_const for better tail calls
  tcg/sparc: Use the constant pool for 64-bit constants
  tcg/sparc: Convert patch_reloc to return bool
  tcg/sparc: Improve code gen for shifted 32-bit constants
  tcg/sparc: Add scratch argument to tcg_out_movi_int
  tcg/sparc: Split out tcg_out_movi_imm32
  tcg/sparc: Use tcg_out_movi_imm13 in tcg_out_addsub2_i64
  tcg/mips: Support unaligned access for softmmu
  tcg/mips: Support unaligned access for user-only
  tcg/arm: Support raising sigbus for user-only
  tcg/arm: Reserve a register for guest_base
  tcg/arm: Support unaligned access for softmmu
  tcg/arm: Check alignment for ldrd and strd
  tcg/arm: Remove use_armv6_instructions
  tcg/arm: Remove use_armv5t_instructions
  tcg/arm: Drop support for armv4 and armv5 hosts
  tcg/loongarch64: Support raising sigbus for user-only
  tcg/tci: Support raising sigbus for user-only
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-02-14 15:24:26 +00:00
Alex Bennée
c51e51005b tracing: remove TCG memory access tracing
If you really want to trace all memory operations, TCG plugins give
you a more flexible interface for doing so.
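
For reference, a minimal plugin sketch in the style of the in-tree
contrib/plugins examples (the registration calls follow the public
qemu-plugin.h API; the simple access counter itself is hypothetical):

    #include <inttypes.h>
    #include <stdio.h>
    #include <qemu-plugin.h>

    QEMU_PLUGIN_EXPORT int qemu_plugin_version = QEMU_PLUGIN_VERSION;

    static uint64_t mem_count;

    /* Called for every instrumented guest memory access. */
    static void vcpu_mem(unsigned int cpu_index, qemu_plugin_meminfo_t info,
                         uint64_t vaddr, void *udata)
    {
        __atomic_fetch_add(&mem_count, 1, __ATOMIC_RELAXED);
    }

    /* Hook every instruction of each translated block for loads and stores. */
    static void vcpu_tb_trans(qemu_plugin_id_t id, struct qemu_plugin_tb *tb)
    {
        size_t n = qemu_plugin_tb_n_insns(tb);
        for (size_t i = 0; i < n; i++) {
            struct qemu_plugin_insn *insn = qemu_plugin_tb_get_insn(tb, i);
            qemu_plugin_register_vcpu_mem_cb(insn, vcpu_mem,
                                             QEMU_PLUGIN_CB_NO_REGS,
                                             QEMU_PLUGIN_MEM_RW, NULL);
        }
    }

    static void plugin_exit(qemu_plugin_id_t id, void *p)
    {
        char buf[64];
        snprintf(buf, sizeof(buf), "memory accesses: %" PRIu64 "\n", mem_count);
        qemu_plugin_outs(buf);
    }

    QEMU_PLUGIN_EXPORT int qemu_plugin_install(qemu_plugin_id_t id,
                                               const qemu_info_t *info,
                                               int argc, char **argv)
    {
        qemu_plugin_register_vcpu_tb_trans_cb(id, vcpu_tb_trans);
        qemu_plugin_register_atexit_cb(id, plugin_exit, NULL);
        return 0;
    }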

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Cc: Luis Vilanova <vilanova@imperial.ac.uk>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20220204204335.1689602-19-alex.bennee@linaro.org>
2022-02-09 12:08:42 +00:00
Richard Henderson
321dbde33a tcg/sparc: Support unaligned access for user-only
This is more or less the opposite of the other tcg hosts, where
we get (normal) alignment checks for free with host SIGBUS and
need to add code to support unaligned accesses.

This inline code expansion is somewhat large, but it takes quite
a few instructions to make a function call to a helper anyway.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-02-09 09:00:00 +11:00
Richard Henderson
e01d60f211 tcg/sparc: Add tcg_out_jmpl_const for better tail calls
Due to mapping changes, we now rarely place the code_gen_buffer
near the main executable, which means that direct calls will now
rarely be in range.

So, always use indirect calls for tail calls, which allows us to
avoid clobbering %o7, and therefore we need not save and restore it.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-02-09 09:00:00 +11:00
Richard Henderson
c834b8d81b tcg/sparc: Use the constant pool for 64-bit constants
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-02-09 09:00:00 +11:00
Richard Henderson
6a6bfa3c60 tcg/sparc: Convert patch_reloc to return bool
Since 7ecd02a06f, if patch_reloc fails we restart translation
with a smaller TB.  SPARC had its function signature changed,
but not the logic.  Replace assert with return false.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-02-09 08:59:06 +11:00
Richard Henderson
684db2a0b0 tcg/sparc: Improve code gen for shifted 32-bit constants
We had code to check for 13-bit and 21-bit shifted constants,
but we can do better and allow 32-bit shifted constants.
This is still 2 insns shorter than the full 64-bit sequence.
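
As an illustration of the idea (a hypothetical helper, not the actual
tcg/sparc code): a value qualifies once its trailing zero bits are
dropped and the remainder fits in 32 bits:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical check: can val be synthesized as a 32-bit constant
     * shifted left by some amount? */
    static bool is_shifted_u32(uint64_t val)
    {
        int shift = val ? __builtin_ctzll(val) : 0;
        return (val >> shift) <= UINT32_MAX;
    }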

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-02-09 08:59:06 +11:00
Richard Henderson
92840d06fa tcg/sparc: Add scratch argument to tcg_out_movi_int
This will allow us to control exactly what scratch register is
used for loading the constant.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-02-09 08:58:44 +11:00
Richard Henderson
c71929c345 tcg/sparc: Split out tcg_out_movi_imm32
Handle 32-bit constants with a separate function, so that
tcg_out_movi_int does not need to recurse.  This slightly
rearranges the order of tests for small constants, but
produces the same output.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-02-09 08:57:42 +11:00
Richard Henderson
414399b6b8 tcg/sparc: Use tcg_out_movi_imm13 in tcg_out_addsub2_i64
When BH is constant, it is constrained to 11 bits for use in MOVCC.
For the cases in which we must load the constant BH into a register,
we do not need the full logic of tcg_out_movi; we can use the simpler
function for emitting a 13-bit constant.

This eliminates the only case in which TCG_REG_T2 was passed to
tcg_out_movi, which will shortly become invalid.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-02-09 08:56:50 +11:00
Richard Henderson
d9e5283465 tcg/mips: Support unaligned access for softmmu
We can use the routines just added for user-only to emit
unaligned accesses in softmmu mode too.

Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-02-09 08:55:02 +11:00
Richard Henderson
23a79c113e tcg/mips: Support unaligned access for user-only
This is more or less the opposite of the other tcg hosts, where
we get (normal) alignment checks for free with host SIGBUS and
need to add code to support unaligned accesses.

Fortunately, the ISA contains pairs of instructions that are
used to implement unaligned memory accesses.  Use them.

Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-02-09 08:55:02 +11:00
Richard Henderson
0c90fa5dce tcg/arm: Support raising sigbus for user-only
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-02-09 08:55:02 +11:00
Richard Henderson
4bb802073f tcg/arm: Reserve a register for guest_base
Reserve a register for the guest_base, using aarch64 as a reference.
By doing so, we do not have to recompute it for every memory load.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-02-09 08:55:02 +11:00
Richard Henderson
8821ec2323 tcg/arm: Support unaligned access for softmmu
From armv6, the architecture supports unaligned accesses.
All we need to do is perform the correct alignment check
in tcg_out_tlb_read.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-02-09 08:55:02 +11:00
Richard Henderson
367d43d85b tcg/arm: Check alignment for ldrd and strd
We will shortly allow the use of unaligned memory accesses,
and these require proper alignment.  Use get_alignment_bits
to verify and remove USING_SOFTMMU.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-02-09 08:55:02 +11:00
Richard Henderson
bde2cdb59b tcg/arm: Remove use_armv6_instructions
This is now always true, since we require armv6.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-02-09 08:55:02 +11:00
Richard Henderson
6cef13940c tcg/arm: Remove use_armv5t_instructions
This is now always true, since we require armv6.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-02-09 08:55:02 +11:00
Richard Henderson
01dfc0ed7f tcg/arm: Drop support for armv4 and armv5 hosts
Support for unaligned accesses is difficult for pre-v6 hosts.
While Debian still builds for armv4, we cannot use a compile-time
test, so test the architecture at runtime and error out.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-02-09 08:55:02 +11:00
WANG Xuerui
6f78c7b082 tcg/loongarch64: Support raising sigbus for user-only
Signed-off-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220106134238.3936163-1-git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-02-09 08:55:02 +11:00
Richard Henderson
fe1bee3a0a tcg/tci: Support raising sigbus for user-only
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-02-09 08:55:02 +11:00
Richard Henderson
1cd49868d4 tcg/s390x: Support raising sigbus for user-only
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-02-09 08:55:02 +11:00
Richard Henderson
a3fb7c99c0 tcg/riscv: Support raising sigbus for user-only
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-02-09 08:55:02 +11:00
Richard Henderson
8605cbcdee tcg/ppc: Support raising sigbus for user-only
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-02-09 08:55:02 +11:00
Richard Henderson
f85ab3d2e5 tcg/aarch64: Support raising sigbus for user-only
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-02-09 08:55:02 +11:00
Richard Henderson
b1ee3c6725 tcg/i386: Support raising sigbus for user-only
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-02-09 08:55:02 +11:00
WANG Xuerui
7b17a47540 tcg/loongarch64: Fix fallout from recent MO_Q renaming
Apparently we were left behind; just renaming MO_Q to MO_UQ is enough.

Fixes: fc313c6434 ("exec/memop: Adding signedness to quad definitions")
Signed-off-by: WANG Xuerui <git@xen0n.name>
Message-Id: <20220206162106.1092364-1-i.qemu@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-02-09 08:55:02 +11:00
Michael S. Tsirkin
2a728de1ff cpuid: use unsigned for max cpuid
__get_cpuid_max returns an unsigned value.
For consistency, store the result in an unsigned variable.
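
A minimal illustration (hypothetical caller, not the patched QEMU code):

    #include <cpuid.h>

    static unsigned int max_basic_leaf(void)
    {
        /* __get_cpuid_max() returns unsigned int; keep the result unsigned. */
        return __get_cpuid_max(0, NULL);
    }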

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
2022-02-04 09:07:43 -05:00
Frédéric Pétrot
fc313c6434 exec/memop: Adding signedness to quad definitions
Renaming defines for quad in their various forms so that their signedness is
now explicit.
Done using git grep as suggested by Philippe, with a bit of hand editing to
keep assignments aligned.

Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20220106210108.138226-2-frederic.petrot@univ-grenoble-alpes.fr
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2022-01-08 15:46:10 +10:00
Richard Henderson
c578ff1858 tcg/optimize: Fix folding of vector ops
Bitwise operations are easy to fold, because the operation is
identical regardless of element size.  But add and sub need
extra element size info that is not currently propagated.
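
A scalar sketch of why element size matters (illustration only, not the
QEMU folding code): per-lane addition must keep carries from crossing
lane boundaries, so the same 64-bit inputs fold differently for
different element sizes:

    #include <stdint.h>

    /* Hypothetical constant-folding helper for a vector add held in a
     * 64-bit value, split into elem_bits-wide lanes. */
    static uint64_t fold_add_lanes(uint64_t x, uint64_t y, int elem_bits)
    {
        uint64_t mask = elem_bits == 64 ? ~0ull : (1ull << elem_bits) - 1;
        uint64_t r = 0;

        for (int i = 0; i < 64; i += elem_bits) {
            uint64_t a = (x >> i) & mask;
            uint64_t b = (y >> i) & mask;
            r |= ((a + b) & mask) << i;
        }
        return r;
    }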

Fixes: 2f9f08ba43
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/799
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-01-04 15:14:42 -08:00
WANG Xuerui
a9ae47486a tcg/loongarch64: Register the JIT
Signed-off-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211221054105.178795-28-git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-12-21 13:17:06 -08:00
WANG Xuerui
8df89cf0ae tcg/loongarch64: Implement tcg_target_init
Signed-off-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211221054105.178795-27-git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-12-21 13:17:06 -08:00
WANG Xuerui
30d420e4d3 tcg/loongarch64: Implement exit_tb/goto_tb
Signed-off-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211221054105.178795-26-git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-12-21 13:17:06 -08:00
WANG Xuerui
697a598059 tcg/loongarch64: Implement tcg_target_qemu_prologue
Signed-off-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211221054105.178795-25-git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-12-21 13:17:06 -08:00
WANG Xuerui
d3a1727c19 tcg/loongarch64: Add softmmu load/store helpers, implement qemu_ld/qemu_st ops
Signed-off-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211221054105.178795-24-git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-12-21 13:17:06 -08:00
WANG Xuerui
251ebcd812 tcg/loongarch64: Implement simple load/store ops
Signed-off-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20211221054105.178795-23-git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-12-21 13:17:06 -08:00
WANG Xuerui
a26d99d72f tcg/loongarch64: Implement tcg_out_call
Signed-off-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211221054105.178795-22-git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-12-21 13:17:06 -08:00
WANG Xuerui
9ee775cf29 tcg/loongarch64: Implement setcond ops
Signed-off-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211221054105.178795-21-git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-12-21 13:17:06 -08:00
WANG Xuerui
94505c02f4 tcg/loongarch64: Implement br/brcond ops
Signed-off-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20211221054105.178795-20-git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-12-21 13:17:06 -08:00
WANG Xuerui
ff13c19689 tcg/loongarch64: Implement mul/mulsh/muluh/div/divu/rem/remu ops
Signed-off-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20211221054105.178795-19-git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-12-21 13:17:06 -08:00
WANG Xuerui
39f54ce5c4 tcg/loongarch64: Implement add/sub ops
The neg_i{32,64} ops are fully expressible with sub, so they are
omitted for simplicity.

Signed-off-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20211221054105.178795-18-git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-12-21 13:17:06 -08:00
WANG Xuerui
a164010b05 tcg/loongarch64: Implement shl/shr/sar/rotl/rotr ops
Signed-off-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20211221054105.178795-17-git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-12-21 13:17:06 -08:00
WANG Xuerui
fde6930160 tcg/loongarch64: Implement clz/ctz ops
Signed-off-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211221054105.178795-16-git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-12-21 13:17:06 -08:00
WANG Xuerui
4ab2aff0db tcg/loongarch64: Implement bswap{16,32,64} ops
Signed-off-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20211221054105.178795-15-git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-12-21 13:17:06 -08:00
WANG Xuerui
7257809f62 tcg/loongarch64: Implement deposit/extract ops
Signed-off-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20211221054105.178795-14-git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-12-21 13:17:06 -08:00
WANG Xuerui
97b2fafbf7 tcg/loongarch64: Implement not/and/or/xor/nor/andc/orc ops
Signed-off-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20211221054105.178795-13-git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-12-21 13:17:06 -08:00
WANG Xuerui
6be08fcfc3 tcg/loongarch64: Implement sign-/zero-extension ops
Signed-off-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20211221054105.178795-12-git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-12-21 13:17:06 -08:00
WANG Xuerui
e3b15766b9 tcg/loongarch64: Implement goto_ptr
Signed-off-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20211221054105.178795-11-git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-12-21 13:17:06 -08:00
WANG Xuerui
dacc51720d tcg/loongarch64: Implement tcg_out_mov and tcg_out_movi
Signed-off-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20211221054105.178795-10-git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-12-21 13:17:06 -08:00
WANG Xuerui
fae2361dc9 tcg/loongarch64: Implement the memory barrier op
Signed-off-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20211221054105.178795-9-git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-12-21 13:17:06 -08:00
WANG Xuerui
bf8c1c8140 tcg/loongarch64: Implement necessary relocation operations
Signed-off-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20211221054105.178795-8-git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-12-21 13:17:06 -08:00
WANG Xuerui
ba0cdd8040 tcg/loongarch64: Define the operand constraints
Signed-off-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20211221054105.178795-7-git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-12-21 13:17:06 -08:00
WANG Xuerui
1bcfbf03df tcg/loongarch64: Add register names, allocation order and input/output sets
Signed-off-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211221054105.178795-6-git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-12-21 13:17:06 -08:00
WANG Xuerui
71bb0283f5 tcg/loongarch64: Add generated instruction opcodes and encoding helpers
Signed-off-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211221054105.178795-5-git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-12-21 13:17:06 -08:00
WANG Xuerui
6cb14e4de2 tcg/loongarch64: Add the tcg-target.h file
Support for all optional TCG ops is initially marked disabled; the bits
are to be set in individual commits later.

Signed-off-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20211221054105.178795-4-git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-12-21 13:17:06 -08:00
Richard Henderson
b9537d5904 tcg/arm: Reduce vector alignment requirement for NEON
With arm32, the ABI gives us 8-byte alignment for the stack.
While it's possible to realign the stack to provide 16-byte alignment,
it's far easier to simply not encode 16-byte alignment in the
VLD1 and VST1 instructions that we emit.

Remove the assertion in temp_allocate_frame, limit natural alignment
to the provided stack alignment, and add a comment.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1999878
Reported-by: Richard W.M. Jones <rjones@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210912174925.200132-1-richard.henderson@linaro.org>
Message-Id: <20211206191335.230683-2-richard.henderson@linaro.org>
2021-12-07 06:32:09 -08:00
Miroslav Rezanina
d58f01733b tcg/s390x: Fix tcg_out_vec_op argument type
The newly defined tcg_out_vec_op for s390x (34ef767609, "tcg/s390x: Add
host vector framework") declares its arguments as pointers.
This fails on gcc 11, as the original declaration uses array arguments:

In file included from ../tcg/tcg.c:430:
/builddir/build/BUILD/qemu-6.1.50/tcg/s390x/tcg-target.c.inc:2702:42: error: argument 5 of type 'const TCGArg *' {aka 'const long unsigned int *'} declared as a pointer [-Werror=array-parameter=]
 2702 |                            const TCGArg *args, const int *const_args)
      |                            ~~~~~~~~~~~~~~^~~~
../tcg/tcg.c:121:41: note: previously declared as an array 'const TCGArg[16]' {aka 'const long unsigned int[16]'}
  121 |                            const TCGArg args[TCG_MAX_OP_ARGS],
      |                            ~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~
In file included from ../tcg/tcg.c:430:
/builddir/build/BUILD/qemu-6.1.50/tcg/s390x/tcg-target.c.inc:2702:59: error: argument 6 of type 'const int *' declared as a pointer [-Werror=array-parameter=]
 2702 |                            const TCGArg *args, const int *const_args)
      |                                                ~~~~~~~~~~~^~~~~~~~~~
../tcg/tcg.c:122:38: note: previously declared as an array 'const int[16]'
  122 |                            const int const_args[TCG_MAX_OP_ARGS]);
      |                            ~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~

Fix the argument types so the build passes.
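
A minimal reproduction of this class of warning (illustration only, not
the QEMU sources): gcc 11's -Warray-parameter fires when a declaration
and its definition spell the same parameter as an array and as a pointer:

    /* Declaration uses an array parameter... */
    void emit(const int const_args[16]);

    /* ...definition uses a pointer: -Warray-parameter on gcc 11. */
    void emit(const int *const_args) { (void)const_args; }

    /* Fix: spell the parameter identically in both places. */
    void emit_fixed(const int const_args[16]);
    void emit_fixed(const int const_args[16]) { (void)const_args; }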

Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Acked-by: David Hildenbrand <david@redhat.com>
Message-Id: <20211027085629.240704-1-mrezanin@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-11-11 11:47:58 +01:00
Richard Henderson
8d30f0473e tcg: Document ctpop opcodes
Fixes: a768e4e992
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/658
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-11-11 11:47:01 +01:00
Richard Henderson
225bec0c0e tcg/optimize: Add an extra cast to fold_extract2
There is no bug, but silence a warning about a computation
in int32_t being assigned to a uint64_t.

Reported-by: Coverity CID 1465220
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-11-11 11:47:01 +01:00
Daniel P. Berrangé
b6a7f3e0d2 qapi: introduce x-query-opcount QMP command
This is a counterpart to the HMP "info opcount" command. It is being
added with an "x-" prefix because this QMP command is intended as an
ad hoc debugging tool and will thus not be modelled in QAPI as fully
structured data, nor will it have long term guaranteed stability.
The existing HMP command is rewritten to call the QMP command.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
2021-11-02 15:57:20 +00:00
Daniel P. Berrangé
3a841ab53f qapi: introduce x-query-jit QMP command
This is a counterpart to the HMP "info jit" command. It is being
added with an "x-" prefix because this QMP command is intended as an
ad hoc debugging tool and will thus not be modelled in QAPI as fully
structured data, nor will it have long term guaranteed stability.
The existing HMP command is rewritten to call the QMP command.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
2021-11-02 15:57:20 +00:00
Richard Henderson
93a967fbb5 tcg/optimize: Propagate sign info for shifting
For constant shifts, we can simply shift the s_mask.

For variable shifts, we know that sar does not reduce
the s_mask, which helps for sequences like

    ext32s_i64  t, in
    sar_i64     t, t, v
    ext32s_i64  out, t

allowing the final extend to be eliminated.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-28 20:55:07 -07:00
Richard Henderson
2b9d0c59ed tcg/optimize: Propagate sign info for bit counting
The results are generally 6-bit unsigned values, though
the count-leading and count-trailing-bits ops may produce any value
for a zero input.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-28 20:55:07 -07:00
Richard Henderson
275d7d8e70 tcg/optimize: Propagate sign info for setcond
The result is either 0 or 1, which means that we have
a 2-bit signed result, and thus 62 bits of sign.
For clarity, use the smask_from_zmask function.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-28 20:55:07 -07:00
Richard Henderson
3f2b1f8376 tcg/optimize: Propagate sign info for logical operations
Sign repetitions are perforce all identical, whether they are 1 or 0.
Bitwise operations preserve the relative quantity of the repetitions.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-28 20:55:07 -07:00
Richard Henderson
57fe5c6df2 tcg/optimize: Optimize sign extensions
Certain targets, like riscv, produce signed 32-bit results.
This can lead to lots of redundant extensions as values are
manipulated.

Begin by tracking only the obvious sign-extensions, and
converting them to simple copies when possible.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-28 20:55:07 -07:00
Richard Henderson
267c17e825 tcg/optimize: Use fold_xx_to_i for rem
Recognize the constant function for remainder.

Suggested-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-28 20:55:07 -07:00
Richard Henderson
2f9d9a3422 tcg/optimize: Use fold_xi_to_x for div
Recognize the identity function for division.

Suggested-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-28 20:55:07 -07:00
Richard Henderson
5b5cf47983 tcg/optimize: Use fold_xi_to_x for mul
Recognize the identity function for low-part multiply.

Suggested-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-28 20:55:07 -07:00
Richard Henderson
4e858d96aa tcg/optimize: Use fold_xx_to_i for orc
Recognize the constant function for or-complement.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-28 20:55:07 -07:00
Richard Henderson
faa2e10045 tcg/optimize: Stop forcing z_mask to "garbage" for 32-bit values
This "garbage" setting pre-dates the addition of the type
changing opcodes INDEX_op_ext_i32_i64, INDEX_op_extu_i32_i64,
and INDEX_op_extr{l,h}_i64_i32.

So now we have definitive points at which to adjust z_mask
to eliminate such bits from the 32-bit operands.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-28 20:55:07 -07:00
Richard Henderson
18cf3d07a2 tcg: Extend call args using the correct opcodes
Pretending that the source is i64 when it is in fact i32 is
incorrect; we have type-changing opcodes that must be used.
This bug trips up the subsequent change to the optimizer.

Fixes: 4f2331e5b6
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-28 20:54:52 -07:00
Richard Henderson
7a2f708452 tcg/optimize: Sink commutative operand swapping into fold functions
Most of these are handled by creating a fold_const2_commutative
to handle all of the binary operators.  The rest were already
handled on a case-by-case basis in the switch, and have their
own fold function in which to place the call.

We now have only one major switch on TCGOpcode.

Introduce NO_DEST and a block comment for swap_commutative in
order to make the handling of brcond and movcond opcodes cleaner.

Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:23 -07:00
Richard Henderson
9531c078ff tcg/optimize: Expand fold_addsub2_i32 to 64-bit ops
Rename to fold_addsub2.
Use Int128 to implement the wider operation.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
407112b03d tcg/optimize: Expand fold_mulu2_i32 to all 4-arg multiplies
Rename to fold_multiply2, and handle muls2_i32, mulu2_i64,
and muls2_i64.

Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
fae450ba47 tcg/optimize: Split out fold_masks
Move all of the known-zero optimizations into the per-opcode
functions.  Use fold_masks when there is a possibility of the
result being determined, and simply set ctx->z_mask otherwise.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
da48e27202 tcg/optimize: Split out fold_ix_to_i
Pull the "op r, 0, b => movi r, 0" optimization into a function,
and use it in fold_shift.

Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
a63ce0e9cb tcg/optimize: Split out fold_xi_to_x
Pull the "op r, a, i => mov r, a" optimization into a function,
and use them in the outer-most logical operations.

Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
9caca88a76 tcg/optimize: Split out fold_sub_to_neg
Even though there is only one user, place this more complex
conversion into its own helper.

Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
0e0a32bacb tcg/optimize: Split out fold_to_not
Split out the conditional conversion from a more complex logical
operation to a simple NOT.  Create a couple more helpers to make
this easy for the outer-most logical operations.

Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
67f84c9621 tcg/optimize: Add type to OptContext
Compute the type of the operation early.

There are at least 4 places that used a def->flags ladder
to determine the type of the operation being optimized.

There were two places that assumed !TCG_OPF_64BIT means
TCG_TYPE_I32, and so could potentially compute incorrect
results for vector operations.

Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
e8679955ec tcg/optimize: Split out fold_xi_to_i
Pull the "op r, a, 0 => movi r, 0" optimization into a function,
and use it in the outer opcode fold functions.

Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
ca7bb049a0 tcg/optimize: Split out fold_xx_to_x
Pull the "op r, a, a => mov r, a" optimization into a function,
and use it in the outer opcode fold functions.

Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
cbe42fb2f2 tcg/optimize: Split out fold_xx_to_i
Pull the "op r, a, a => movi r, 0" optimization into a function,
and use it in the outer opcode fold functions.

Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
2cfac7fa48 tcg/optimize: Split out fold_mov
This is the final entry in the main switch that was in a
different form.  After this, we have the option to convert
the switch into a function dispatch table.

Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
8cdb3fcb8e tcg/optimize: Split out fold_dup, fold_dup2
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
09bacdc263 tcg/optimize: Split out fold_bswap
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
30dd0bfeb5 tcg/optimize: Split out fold_count_zeros
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
1b1907b846 tcg/optimize: Split out fold_deposit
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
b6617c8821 tcg/optimize: Split out fold_extract, fold_sextract
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
dcd08996c9 tcg/optimize: Split out fold_extract2
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
0c310a3005 tcg/optimize: Split out fold_movcond
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
e3f7dc2167 tcg/optimize: Split out fold_addsub2_i32
Add two additional helpers, fold_add2_i32 and fold_sub2_i32,
which will not be simple wrappers forever.

Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
6b8ac0d149 tcg/optimize: Split out fold_mulu2_i32
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
c63ff55cc5 tcg/optimize: Split out fold_setcond
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
079b08040e tcg/optimize: Split out fold_brcond
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
764d2aba08 tcg/optimize: Split out fold_brcond2
Reduce some code duplication by folding the NE and EQ cases.

Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
bc47b1aa5b tcg/optimize: Split out fold_setcond2
Reduce some code duplication by folding the NE and EQ cases.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
2f9f08ba43 tcg/optimize: Split out fold_const{1,2}
Split out a whole bunch of placeholder functions, which are
currently identical.  That won't last as more code gets moved.

Use CASE_32_64_VEC for some logical operators that previously
missed the addition of vectors.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
3eefdf2b58 tcg/optimize: Split out fold_mb, fold_qemu_{ld,st}
This puts the separate mb optimization into the same framework
as the others.  While fold_qemu_{ld,st} are currently identical,
that won't last as more code gets moved.

Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
404a148d89 tcg/optimize: Use a boolean to avoid a mass of continues
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
137f1f4429 tcg/optimize: Split out finish_folding
Copy z_mask into OptContext, for writeback to the
first output within the new function.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
6b99d5bf38 tcg/optimize: Return true from tcg_opt_gen_{mov,movi}
This will allow callers to tail call to these functions
and return true indicating processing complete.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
8d57bf1e82 tcg/optimize: Change fail return for do_constant_folding_cond*
Return -1 instead of 2 for failure, so that we can
use comparisons against 0 for all cases.

Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
ec5d4cbeef tcg/optimize: Drop nb_oargs, nb_iargs locals
Rather than try to keep these up-to-date across folding,
re-read nb_oargs at the end, after re-reading the opcode.

A couple of asserts need dropping, but that will take care
of itself as we split the function further.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
5cf32be7d8 tcg/optimize: Split out fold_call
Calls are special in that they have a variable number
of arguments, and need to be able to clobber globals.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
8774dded02 tcg/optimize: Split out copy_propagate
Continue splitting tcg_optimize.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
e2577ea24f tcg/optimize: Split out init_arguments
There was no real reason for calls to have separate code here.
Unify init for calls vs non-calls using the call path, which
handles TCG_CALL_DUMMY_ARG.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
d0ed5151b1 tcg/optimize: Move prev_mb into OptContext
This will expose the variable to subroutines that
will be broken out of tcg_optimize.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
dc84988a5f tcg/optimize: Change tcg_opt_gen_{mov,movi} interface
Adjust the interface to take the OptContext parameter instead
of TCGContext or both.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
b10f38339b tcg/optimize: Remove do_default label
Break the final cleanup clause out of the main switch
statement.  When fully folding an opcode to mov/movi,
use "continue" to process the next opcode, else break
to fall into the final cleanup.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
3b3f847d75 tcg/optimize: Split out OptContext
Provide what will become a larger context for splitting
the very large tcg_optimize function.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
b1fde411d0 tcg/optimize: Rename "mask" to "z_mask"
Prepare for tracking different masks by renaming this one.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-27 17:11:22 -07:00
Richard Henderson
76e366e728 tcg: Canonicalize alignment flags in MemOp
Having observed e.g. al8+leq in dumps, canonicalize to al+leq.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-13 09:14:35 -07:00
Richard Henderson
d2ba802657 tcg: Move helper_*_mmu decls to tcg/tcg-ldst.h
These functions have been replaced by cpu_*_mmu as the
most proper interface to use from target code.

Hide these declarations from code that should not use them.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-13 08:46:42 -07:00
Richard Henderson
ea3f2af8f1 tcg/s390x: Implement TCG_TARGET_HAS_cmpsel_vec
This is via expansion; don't actually set TCG_TARGET_HAS_cmpsel_vec.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-05 16:53:17 -07:00
Richard Henderson
9bca986df8 tcg/s390x: Implement TCG_TARGET_HAS_bitsel_vec
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-05 16:53:17 -07:00
Richard Henderson
4223c9c1c6 tcg/s390x: Implement TCG_TARGET_HAS_sat_vec
The unsigned saturations are handled via generic code
using min/max.  The signed saturations are expanded using
double-sized arithmetic and a saturating pack.

Since all operations are done via expansion, do not
actually set TCG_TARGET_HAS_sat_vec.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-05 16:53:17 -07:00
Richard Henderson
220db7a6c4 tcg/s390x: Implement TCG_TARGET_HAS_minmax_vec
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-05 16:53:17 -07:00
Richard Henderson
22cb37b417 tcg/s390x: Implement vector shift operations
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-05 16:53:17 -07:00
Richard Henderson
479b61cbfa tcg/s390x: Implement TCG_TARGET_HAS_mul_vec
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-05 16:53:17 -07:00
Richard Henderson
ae77bbe574 tcg/s390x: Implement andc, orc, abs, neg, not vector operations
These logical and arithmetic operations are optional but trivial.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-05 16:53:17 -07:00
Richard Henderson
a429ee2978 tcg/s390x: Implement minimal vector operations
Implementing add, sub, and, or, xor as the minimal set.
This allows us to actually enable vectors in query_s390_facilities.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-05 16:53:17 -07:00
Richard Henderson
79cada8693 tcg/s390x: Implement tcg_out_dup*_vec
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-05 16:53:17 -07:00
Richard Henderson
b33ce7251c tcg/s390x: Implement tcg_out_mov for vector types
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-05 16:53:17 -07:00
Richard Henderson
2dabf74252 tcg/s390x: Implement tcg_out_ld/st for vector types
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-05 16:53:17 -07:00
Richard Henderson
34ef767609 tcg/s390x: Add host vector framework
Add registers and function stubs.  The functionality
is disabled via squashing s390_facilities[2] to 0.

We must still include results for the mandatory opcodes in
tcg_target_op_def, as all opcodes are checked during tcg init.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-05 16:53:17 -07:00
Richard Henderson
eee6251b48 tcg/s390x: Merge TCG_AREG0 and TCG_REG_CALL_STACK into TCGReg
They are rightly values in the same enumeration.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-05 16:53:17 -07:00
Richard Henderson
748b7f3ef7 tcg/s390x: Change FACILITY representation
We will shortly need to be able to check facilities beyond the
first 64.  Instead of explicitly masking against s390_facilities,
create a HAVE_FACILITY macro that indexes an array.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
v2: Change name to HAVE_FACILITY (david)
2021-10-05 16:53:17 -07:00
Richard Henderson
3704993f54 tcg/s390x: Rename from tcg/s390
This emphasizes that we don't support s390, only 64-bit s390x hosts.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-05 16:53:17 -07:00
Richard Henderson
2552d60ebd tcg: Expand usadd/ussub with umin/umax
For usadd, we only have to consider overflow.  Since ~B + B == -1,
the maximum value for A that saturates is ~B.

For ussub, we only have to consider underflow.  The minimum value
that saturates to 0 from A - B is B.
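
A scalar sketch of the identities (illustration only; the commit expands
the vector ops in terms of umin/umax):

    #include <stdint.h>

    /* Unsigned saturating add: clamp a to ~b first, so the add lands
     * exactly on the maximum whenever it would have overflowed. */
    static uint8_t usadd8(uint8_t a, uint8_t b)
    {
        uint8_t limit = (uint8_t)~b;
        uint8_t t = a < limit ? a : limit;   /* umin(a, ~b) */
        return t + b;
    }

    /* Unsigned saturating sub: raise a to b first, so the sub lands
     * exactly on zero whenever it would have underflowed. */
    static uint8_t ussub8(uint8_t a, uint8_t b)
    {
        uint8_t t = a > b ? a : b;           /* umax(a, b) */
        return t - b;
    }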

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-05 16:53:17 -07:00
Richard Henderson
0583f775d2 trace: Split guest_mem_before
There is no point in encoding load/store within a bit of
the memory trace info operand.  Represent atomic operations
as a single read-modify-write tracepoint.  Use MemOpIdx
instead of inventing a form specifically for traces.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-05 16:53:17 -07:00
Richard Henderson
37aff08726 plugins: Reorg arguments to qemu_plugin_vcpu_mem_cb
Use the MemOpIdx directly, rather than the rearrangement
of the same bits currently done by the trace infrastructure.
Pass in enum qemu_plugin_mem_rw so that we are able to treat
read-modify-write operations as a single operation.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-05 16:53:17 -07:00
Richard Henderson
b0702c91c6 trace/mem: Pass MemOpIdx to trace_mem_get_info
We (will) often have the complete MemOpIdx handy, so use that.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-05 16:53:17 -07:00
Richard Henderson
9002ffcb72 tcg: Rename TCGMemOpIdx to MemOpIdx
We're about to move this out of tcg.h, so rename it
as we did when moving MemOp.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-05 16:53:17 -07:00
Richard Henderson
4b473e0c60 tcg: Expand MO_SIZE to 3 bits
We have lacked expressive support for memory sizes larger
than 64 bits for a while.  Fixing that requires adjustment
to several points where we used this for array indexing,
and two places that develop -Wswitch warnings after the change.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-10-05 16:53:17 -07:00
Richard Henderson
81c65ee223 tcg/riscv: Remove add with zero on user-only memory access
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-21 19:36:44 -07:00
Richard Henderson
220b2da7f3 tcg/sparc: Introduce tcg_out_mov_delay
This version of tcg_out_mov emits a nop to fill the
delay slot if the move is not required.

The only current use, for INDEX_op_goto_ptr, will always
require the move but properly documents the delay slot.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-21 19:36:44 -07:00
Richard Henderson
897fd616fd tcg/sparc: Drop inline markers
Let the compiler decide about inlining.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-21 19:36:44 -07:00
Richard Henderson
3d1e8ed011 tcg/mips: Drop special alignment for code_gen_buffer
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-21 19:36:44 -07:00
Richard Henderson
5a8f0a5dd2 tcg/mips: Unset TCG_TARGET_HAS_direct_jump
Only use indirect jumps.  Finish weaning away from the
unique alignment requirements for code_gen_buffer.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-21 19:36:44 -07:00
Richard Henderson
d7fc9f48c3 tcg/mips: Allow JAL to be out of range in tcg_out_bswap_subr
As we wean off the unique alignment requirements, allow JAL
to be out of range of the target.  TCG_TMP1 is always available for
use as a scratch because it is clobbered by the subroutine
being called.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-21 19:36:44 -07:00
Richard Henderson
10d4af5810 tcg/mips: Drop inline markers
Let the compiler decide about inlining.
Remove tcg_out_ext8s and tcg_out_ext16s as unused.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-21 19:36:44 -07:00
Richard Henderson
e028eada62 tcg/arm: More use of the TCGReg enum
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14 12:00:21 -07:00
Richard Henderson
142fb62fd0 tcg/arm: More use of the ARMInsn enum
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14 12:00:21 -07:00
Richard Henderson
1446600f7f tcg/arm: Give enum arm_cond_code_e a typedef and use it
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14 12:00:21 -07:00
Richard Henderson
5f726ebce1 tcg/arm: Drop inline markers
Let the compiler decide about inlining.
Remove tcg_out_nop as unused.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14 12:00:21 -07:00
Richard Henderson
90606715dc tcg/arm: Simplify usage of encode_imm
We have already computed the rotated value of the imm8
portion of the complete imm12 encoding.  There is no sense in leaving
the combination of the rotation amount and the rotated value to the caller.

Create an encode_imm12_nofail helper that performs an assert.

This removes the final use of the local "rotl" function,
which duplicated our generic "rol32" function.
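
For reference, a self-contained sketch of the rotated-immediate test
(encode_imm12_sketch and the local rol32 are illustrations, not the
functions in tcg/arm): an A32 operand2 immediate is an 8-bit value
rotated right by an even amount.

    #include <stdint.h>

    static uint32_t rol32(uint32_t w, unsigned n)
    {
        return (w << (n & 31)) | (w >> (-n & 31));
    }

    /* Return the 12-bit rot:imm8 encoding of v, or -1 if not encodable. */
    static int encode_imm12_sketch(uint32_t v)
    {
        for (unsigned rot = 0; rot < 32; rot += 2) {
            uint32_t imm8 = rol32(v, rot);  /* undo a rotate-right by rot */
            if (imm8 <= 0xff) {
                return ((rot / 2) << 8) | imm8;
            }
        }
        return -1;
    }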

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14 12:00:21 -07:00
Richard Henderson
31d160adc9 tcg/arm: Split out tcg_out_ldstm
Expand these hard-coded instructions symbolically.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14 12:00:21 -07:00
Richard Henderson
b87c1add03 tcg/arm: Support armv4t in tcg_out_goto and tcg_out_call
ARMv4T has BX as its only interworking instruction.  In order
to support testing of different architecture revisions with a
qemu binary that may have been built for, say, ARMv6T2, fill in
the blank required to make calls to helpers in thumb mode.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14 12:00:21 -07:00
Richard Henderson
4ae82ca7eb tcg/arm: Simplify use_armv5t_instructions
According to the Arm ARM DDI 0406C, section A1.3, the valid variants
are ARMv5T, ARMv5TE, ARMv5TEJ -- there is no ARMv5 without Thumb.
Therefore simplify the test from preprocessor ifdefs to base
architecture revision.  Retain the "t" in the name to minimize churn.
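
A sketch of the simplified form (the runtime variable name here is an
assumption, shown only to illustrate the shape of the test):

    /* v5T support follows from the base architecture revision,
     * known either at compile time or from runtime detection. */
    extern int arm_arch;
    #define use_armv5t_instructions  (__ARM_ARCH >= 5 || arm_arch >= 5)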

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14 12:00:21 -07:00
Richard Henderson
326b9669b0 tcg/arm: Standardize on tcg_out_<branch>_{reg,imm}
Some of the functions specified _reg, some _imm, and some
left it blank.  Make it clearer to which we are referring.

Split tcg_out_b_reg from tcg_out_bx_reg, to indicate when
we do not actually require BX semantics.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14 12:00:21 -07:00
Richard Henderson
e0e1ad61f6 tcg/arm: Remove fallback definition of __ARM_ARCH
GCC since 4.8 provides the definition and we now require 7.5.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14 12:00:21 -07:00
Jose R. Ziviani
421519d82c tcg/arm: Fix tcg_out_vec_op function signature
Commit 5e8892db93 fixed several function signatures, but missed
tcg_out_vec_op for arm.  This causes a build error on armv6 and armv7:

tcg-target.c.inc:2718:42: error: argument 5 of type 'const TCGArg *'
{aka 'const unsigned int *'} declared as a pointer [-Werror=array-parameter=]
   const TCGArg *args, const int *const_args)
  ~~~~~~~~~~~~~~^~~~
../tcg/tcg.c:120:41: note: previously declared as an array 'const TCGArg[16]'
{aka 'const unsigned int[16]'}
   const TCGArg args[TCG_MAX_OP_ARGS],
  ~~~~~~~~~~~~~~^~~~

Signed-off-by: Jose R. Ziviani <jziviani@suse.de>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210908185338.7927-1-jziviani@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14 12:00:20 -07:00
Richard Henderson
d216898563 tcg/ppc: Ensure _CALL_SYSV is set for 32-bit ELF
Clang only sets _CALL_ELF for ppc64, and sets nothing at all to specify
the ABI for ppc32.  Make a good guess based on other symbols.
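
The shape of the guess, as a hedged sketch (the exact conditions used
in tcg/ppc are not reproduced here):

    #if !defined(_CALL_SYSV) && !defined(_CALL_DARWIN) && \
        !defined(_CALL_AIX) && !defined(_CALL_ELF)
    # if defined(__APPLE__)
    #  define _CALL_DARWIN
    # elif defined(__ELF__) && !defined(__powerpc64__)
    #  define _CALL_SYSV
    # endif
    #endif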

Reported-by: Brad Smith <brad@comstyle.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14 12:00:20 -07:00
Richard Henderson
2fa169ba61 tcg/ppc: Replace TCG_TARGET_CALL_DARWIN with _CALL_DARWIN
If __APPLE__, ensure that _CALL_DARWIN is set, then remove
our local TCG_TARGET_CALL_DARWIN.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14 12:00:20 -07:00
Richard Henderson
fc88a52318 tcg/i386: Split P_VEXW from P_REXW
We need to be able to represent VEX.W on a 32-bit host, where REX.W
will always be zero.  Fixes the encoding for VPSLLVQ and VPSRLVQ.

Fixes: a2ce146a06 ("tcg/i386: Support vector variable shift opcodes")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/385
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-14 12:00:20 -07:00
Richard Henderson
84f1561629 accel/tcg: Add CF_NO_GOTO_TB and CF_NO_GOTO_PTR
Move the -d nochain check to bits on tb->cflags.
These will be used for more than -d nochain shortly.

Set bits during curr_cflags, test them in translator_use_goto_tb,
assert we're not doing anything odd in tcg_gen_goto_tb.  The test
in tcg_gen_exit_tb is redundant with the assert for goto_tb_issue_mask.

Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20210717221851.2124573-4-richard.henderson@linaro.org>
2021-07-21 07:47:04 -10:00
Richard Henderson
e28a866438 accel/tcg: Standardize atomic helpers on softmmu api
Reduce the amount of code duplication by always passing
the TCGMemOpIdx argument to helper_atomic_*.  This is not
currently used for user-only, but it's easy to ignore.

Tested-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-21 07:45:38 -10:00
Peter Maydell
bd38ae26ce Add translator_use_goto_tb.
Cleanups in prep of breakpoint fixes.
 Misc fixes.
 -----BEGIN PGP SIGNATURE-----
 
 iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAmDpvModHHJpY2hhcmQu
 aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV/1jgf+J1JMsPfxlSCwbbdc
 WEuWEcuKdcDFqhsePa6LaPYHTKuEEwavTG0kPbLIVZW2f6BTBeSYxAC6EWhq7pWo
 MGMhIOZM3fF0Yj+azuoybu9qxQ/K/aLM3GYt/OU00mvzturBezz+ka8MvWCrUwta
 XlhxhwnKsSP7lDWPBBjcdIIGiFJyxIRoU43giWaXrsvsc8ORJbmy7rgZfTKAit+w
 AvtQlc7TBi5nImz6f/KmEoy8mHEOhMf7czzo+v0u97lTiNK717/AHEwMfX9J585O
 GjlA9XmUUsNAciuLy48F1rHkgJxYAwo0G2shklpqPaOP5FctKm1reCSb8VEfAGaX
 Xq3UVA==
 =E9i/
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/rth-gitlab/tags/pull-tcg-20210710' into staging

Add translator_use_goto_tb.
Cleanups in prep of breakpoint fixes.
Misc fixes.

# gpg: Signature made Sat 10 Jul 2021 16:29:14 BST
# gpg:                using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg:                issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A  05C0 64DF 38E8 AF7E 215F

* remotes/rth-gitlab/tags/pull-tcg-20210710: (41 commits)
  cpu: Add breakpoint tracepoints
  tcg: Remove TCG_TARGET_HAS_goto_ptr
  accel/tcg: Log tb->cflags with -d exec
  accel/tcg: Split out log_cpu_exec
  accel/tcg: Move tb_lookup to cpu-exec.c
  accel/tcg: Move helper_lookup_tb_ptr to cpu-exec.c
  target/i386: Use cpu_breakpoint_test in breakpoint_handler
  tcg: Fix prologue disassembly
  target/xtensa: Use translator_use_goto_tb
  target/tricore: Use tcg_gen_lookup_and_goto_ptr
  target/tricore: Use translator_use_goto_tb
  target/sparc: Use translator_use_goto_tb
  target/sh4: Use translator_use_goto_tb
  target/s390x: Remove use_exit_tb
  target/s390x: Use translator_use_goto_tb
  target/rx: Use translator_use_goto_tb
  target/riscv: Use translator_use_goto_tb
  target/ppc: Use translator_use_goto_tb
  target/openrisc: Use translator_use_goto_tb
  target/nios2: Use translator_use_goto_tb
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-07-12 11:02:39 +01:00
Richard Henderson
f4e01e3021 tcg: Remove TCG_TARGET_HAS_goto_ptr
Since 6eea04347e, all tcg backends support goto_ptr.
Remove the conditional, making support mandatory.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-09 20:23:38 -07:00
Richard Henderson
d1c74ab3a1 tcg: Fix prologue disassembly
In tcg_region_prologue_set, we reset TCGContext.code_gen_ptr.
So do that after we've used it to dump the prologue contents.

Fixes: b0a0794a0f
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-09 19:45:42 -07:00
Philippe Mathieu-Daudé
a476123243 misc: Fix "havn't" typo
Fix "havn't (make)" -> "haven't (made)" typo.

Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20210629051400.2573253-1-f4bug@amsat.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
2021-07-09 18:42:46 +02:00
Richard Henderson
a4390647f7 tcg: Move tb_phys_invalidate_count to tb_ctx
We can call do_tb_phys_invalidate from an iocontext, which has
no per-thread tcg_ctx.  Move this to tb_ctx, which is global.
The actual update still takes place with a lock held, so only
an atomic set is required, not an atomic increment.
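
The point about "atomic set, not atomic increment", as a standalone C11
sketch (struct and field names are placeholders, not QEMU's):

    #include <stdatomic.h>
    #include <stddef.h>

    struct tb_ctx_sketch {
        _Atomic size_t tb_phys_invalidate_count;
    };

    /* Caller holds the lock that serializes all writers, so the
     * read-modify-write need not be atomic; only the store must be,
     * so lock-free readers never see a torn value. */
    static void bump_count_locked(struct tb_ctx_sketch *ctx)
    {
        size_t n = atomic_load_explicit(&ctx->tb_phys_invalidate_count,
                                        memory_order_relaxed);
        atomic_store_explicit(&ctx->tb_phys_invalidate_count, n + 1,
                              memory_order_relaxed);
    }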

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/457
Tested-by: Viktor Ashirov <vashirov@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-09 09:38:33 -07:00
Liren Wei
834361efd9 tcg: Bake tb_destroy() into tcg_region_tree
The function is called only from tcg_gen_code(), when duplicated TBs
are translated by different threads, and when the tcg_region_tree
is reset.  Bake it into the underlying GTree as its value destroy
function to unify these situations.
Also remove tcg_region_tree_traverse(), which is now unused.
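
The GLib mechanism relied on, as a minimal sketch (only the GLib calls
are real; the helpers are illustrative): a value-destroy callback passed
at creation time is invoked both when a key's value is replaced and when
the tree is destroyed.

    #include <glib.h>

    static gint cmp_keys(gconstpointer a, gconstpointer b, gpointer data)
    {
        return GPOINTER_TO_INT(a) - GPOINTER_TO_INT(b);
    }

    static void value_destroy(gpointer value)
    {
        g_free(value);   /* called for replaced values and on g_tree_destroy() */
    }

    static GTree *make_tree(void)
    {
        return g_tree_new_full(cmp_keys, NULL, NULL, value_destroy);
    }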

Signed-off-by: Liren Wei <lrwei@bupt.edu.cn>
Message-Id: <8dc352f08d038c4e7a1f5f56962398cdc700c3aa.1625404483.git.lrwei@bupt.edu.cn>
[rth: Name the new tb_tc_cmp parameter correctly.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-09 09:38:33 -07:00
Richard Henderson
8973fe43bb tcg: Add separator in INDEX_op_call dump
We lost the ',' following the called function name.

Fixes: 3e92aa3443
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-07-09 09:38:18 -07:00
Richard Henderson
c86bd2dc4c tcg/riscv: Remove MO_BSWAP handling
TCG_TARGET_HAS_MEMORY_BSWAP is already unset for this backend,
which means that MO_BSWAP will be handled by the middle-end and
will never be seen by the backend.  Thus the indexes used with
qemu_{ld,st}_helpers will always be zero.

Tidy the comments and asserts in tcg_out_qemu_{ld,st}_direct.
It is not that we do not handle bswap "yet"; we never will.

Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
Richard Henderson
51c559c761 tcg/aarch64: Unset TCG_TARGET_HAS_MEMORY_BSWAP
The memory bswap support in the aarch64 backend merely dates from
a time when it was required.  There is nothing special about the
backend support that could not have been provided by the middle-end
even prior to the introduction of the bswap flags.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
Richard Henderson
843b82424f tcg/arm: Unset TCG_TARGET_HAS_MEMORY_BSWAP
Now that the middle-end can replicate the same tricks as tcg/arm
used for optimizing bswap for signed loads and for stores, do not
pretend to have these memory ops in the backend.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
Richard Henderson
b53357acb4 tcg: Make use of bswap flags in tcg_gen_qemu_st_*
By removing TCG_BSWAP_IZ we indicate that the input is
not zero-extended, and thus can remove an explicit extend.
By removing TCG_BSWAP_OZ, we allow the implementation to
leave high bits set, which will be ignored by the store.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
Richard Henderson
359feba534 tcg: Make use of bswap flags in tcg_gen_qemu_ld_*
We can perform any required sign-extension via TCG_BSWAP_OS.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
Richard Henderson
2b836c2ac1 tcg: Add flags argument to tcg_gen_bswap16_*, tcg_gen_bswap32_i64
Implement the new semantics in the fallback expansion.
Change all callers to supply the flags that keep the
semantics unchanged locally.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
Richard Henderson
0b76ff8f1b tcg: Handle new bswap flags during optimize
Notice when the input is known to be zero-extended and force
the TCG_BSWAP_IZ flag on.  Honor the TCG_BSWAP_OS bit during
constant folding.  Propagate the input to the output mask.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
Richard Henderson
0d57d36af5 tcg/tci: Support bswap flags
The existing interpreter zero-extends, ignoring high bits.
Simply add a separate sign-extension opcode if required.
Ensure that the interpreter supports ext16s when bswap16 is enabled.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
Richard Henderson
1fce653440 tcg/mips: Support bswap flags in tcg_out_bswap32
Merge tcg_out_bswap32 and tcg_out_bswap32s.
Use the flags in the internal uses for loads and stores.

For mips32r2 bswap32 with zero-extension, standardize on
WSBH+ROTR+DEXT.  This is the same number of insns as the
previous DSBH+DSHD+DSRL but fits in better with the flags check.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
Richard Henderson
27362b7b2c tcg/mips: Support bswap flags in tcg_out_bswap16
Merge tcg_out_bswap16 and tcg_out_bswap16s.  Use the flags
in the internal uses for loads and stores.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
Richard Henderson
1619ee9e93 tcg/s390: Support bswap flags
For INDEX_op_bswap16_i64, use 64-bit instructions so that we can
easily provide the extension to 64-bits.  Drop the special case,
previously used, where the input is already zero-extended -- the
minor code size savings is not worth the complication.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
Richard Henderson
780b573fce tcg/ppc: Use power10 byte-reverse instructions
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
Richard Henderson
26ce70051b tcg/ppc: Support bswap flags
For INDEX_op_bswap32_i32, pass 0 for flags: input not zero-extended,
output does not need extension within the host 64-bit register.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
Richard Henderson
674ba58803 tcg/ppc: Split out tcg_out_bswap64
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
Richard Henderson
8a611d8640 tcg/ppc: Split out tcg_out_bswap32
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
Richard Henderson
783d3ecdda tcg/ppc: Split out tcg_out_bswap16
With the use of a suitable temporary, we can use the same
algorithm when src overlaps dst.  The result is the same
number of instructions either way.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
Richard Henderson
05dd01fa5a tcg/ppc: Split out tcg_out_sari{32,64}
We will shortly require sari in other contexts;
split out both for the sake of cleanliness.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
Richard Henderson
f4bf14f401 tcg/ppc: Split out tcg_out_ext{8,16,32}s
We will shortly require these in other contexts;
make the expansion as clear as possible.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
Richard Henderson
2ec89a78a5 tcg/arm: Support bswap flags
Combine the three bswap16 routines, and differentiate via the flags.
Use the correct flags combination from the load/store routines, and
pass along the constant parameter from tcg_out_op.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
Richard Henderson
8fcfc6bff6 tcg/aarch64: Support bswap flags
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
Richard Henderson
dfa24dfa09 tcg/aarch64: Merge tcg_out_rev{16,32,64}
Pass in the input and output size.  We currently use 3 of the 5
possible combinations; the others may be used by new tcg opcodes.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
Richard Henderson
7335a3d69f tcg/i386: Support bswap flags
Retain the current rorw bswap16 expansion for the zero-in/zero-out case.
Otherwise, perform a wider bswap plus a right-shift or extend.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
Richard Henderson
587195bd59 tcg: Add flags argument to bswap opcodes
This will eventually simplify front-end usage, and will allow
backends to unset TCG_TARGET_HAS_MEMORY_BSWAP without loss of
optimization.

The argument is added during expansion, not currently exposed to the
front end translators.  The backends currently only support a flags
value of either TCG_BSWAP_IZ, or (TCG_BSWAP_IZ | TCG_BSWAP_OZ),
since they all require zero top bytes and leave them that way.
At the existing call sites we pass in (TCG_BSWAP_IZ | TCG_BSWAP_OZ),
except for the flags-ignored cases of a 32-bit swap of a 32-bit
value or a 64-bit swap of a 64-bit value, where we pass 0.
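
The flag semantics, written out as a plain C model of a 16-bit swap held
in a 32-bit register (illustration only, not the TCG implementation):

    #include <stdint.h>
    #include <stdbool.h>

    static uint32_t bswap16_in_i32(uint32_t x, bool iz, bool oz, bool os)
    {
        if (!iz) {
            x &= 0xffff;    /* without TCG_BSWAP_IZ, ignore junk above bit 15 */
        }
        uint32_t r = ((x & 0xff) << 8) | (x >> 8);
        if (os) {
            return (uint32_t)(int32_t)(int16_t)r;   /* TCG_BSWAP_OS: sign-extend */
        }
        return r;           /* TCG_BSWAP_OZ (or don't-care): high bits are zero here */
    }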

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
LIU Zhiwei
950ee59026 tcg: Add tcg_gen_vec_shl{shr}{sar}8i_i32
Implement tcg_gen_vec_shl{shr}{sar}8i_tl by adding corresponding i32 OP.

Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Message-Id: <20210624105023.3852-5-zhiwei_liu@c-sky.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
LIU Zhiwei
04f2a8bbc0 tcg: Add tcg_gen_vec_shl{shr}{sar}16i_i32
Implement tcg_gen_vec_shl{shr}{sar}16i_tl by adding corresponding i32 OP.

Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Message-Id: <20210624105023.3852-4-zhiwei_liu@c-sky.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
LIU Zhiwei
448e7aa28c tcg: Add tcg_gen_vec_add{sub}8_i32
Implement tcg_gen_vec_add{sub}8_tl by adding corresponding i32 OP.

Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Message-Id: <20210624105023.3852-3-zhiwei_liu@c-sky.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
LIU Zhiwei
3d066e5d80 tcg: Add tcg_gen_vec_add{sub}16_i32
Implement tcg_gen_vec_add{sub}16_tl by adding corresponding i32 OP.
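
The underlying trick, sketched in plain C (not QEMU's expansion): do the
lane arithmetic inside one 32-bit value while keeping carries and borrows
from crossing the 16-bit lane boundary.

    #include <stdint.h>

    static uint32_t vec_add16_i32_sketch(uint32_t a, uint32_t b)
    {
        uint32_t lo = (a + b) & 0x0000ffffu;                  /* carry into bit 16 discarded */
        uint32_t hi = (a & 0xffff0000u) + (b & 0xffff0000u);  /* carry out of bit 31 discarded */
        return hi | lo;
    }

    static uint32_t vec_sub16_i32_sketch(uint32_t a, uint32_t b)
    {
        uint32_t lo = (a - b) & 0x0000ffffu;
        uint32_t hi = (a & 0xffff0000u) - (b & 0xffff0000u);  /* low 16 bits stay zero */
        return hi | lo;
    }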

Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Message-Id: <20210624105023.3852-2-zhiwei_liu@c-sky.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:56 -07:00
Peter Maydell
ecba223da6 target-arm queue:
* Don't require 'virt' board to be compiled in for ACPI GHES code
  * docs: Document which architecture extensions we emulate
  * Fix bugs in M-profile FPCXT_NS accesses
  * First slice of MVE patches
  * Implement MTE3
  * docs/system: arm: Add nRF boards description
 -----BEGIN PGP SIGNATURE-----
 
 iQJNBAABCAA3FiEE4aXFk81BneKOgxXPPCUl7RQ2DN4FAmDUj7QZHHBldGVyLm1h
 eWRlbGxAbGluYXJvLm9yZwAKCRA8JSXtFDYM3jffEACAqLxGeZ9ybE9JOr6Nryxf
 fXCO6h2/3OR9jlixhMXMgksbjC82Z02kE/ywBUQ1OpgxlGiRcRBWNrhFBdiATXTS
 KhFiD+KZwiguTXgSm4VrFFWru7UOyj+kQIiNwEHYRs6iG/zZYamdQilK9gvWqR+M
 smXf6/tj+U5s9y53ZFSdCnZMOsdNcrwEN8VgMUwxDlB2/HsM9bg2eymSs5C4lJXF
 /H3ZjjmHUzeYUma5NXlDORu9ri2OxRYdXxLHeHwEmw1MZE8J8kwbnbGYdpC490o4
 nCIqJJGNq9K+jw6oFWKitzjOlvZBzx4+vbX0g0BCRd3g+oviBCfKazOSBrQM1AuI
 iGSsdfuaNMcv07O+pAE/WPrqtR2hvTVVXX4j9f9rTDNyqNjja7t3hnsyW9+KyQrZ
 Rl3Ha5YBH+Upe1TF6MV7gE4z07vjjD6Xem5HNHBcOP91WnK/sw1yOtFfl6cQlLcr
 ukUhHu+Il5FErSityZfgx25hI2Cin2oBgnleAbe5DKaWFt5cMPwGpb0GnIjOeilr
 O7KzC8LejTPssBRYndpvxvhgfFTXsws4bxMal/RBLTFuLAg7D/hrTBTUzxSnq+hw
 b9Keewj646vfEY+g/B/B02kT8NVnial5MOqkqL1I87r2BNOjbY6R7T5UBDYl8kP8
 Ph4lpz+ECQL5N04t9MhwhA==
 =6zaW
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20210624' into staging

target-arm queue:
 * Don't require 'virt' board to be compiled in for ACPI GHES code
 * docs: Document which architecture extensions we emulate
 * Fix bugs in M-profile FPCXT_NS accesses
 * First slice of MVE patches
 * Implement MTE3
 * docs/system: arm: Add nRF boards description

# gpg: Signature made Thu 24 Jun 2021 14:59:16 BST
# gpg:                using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg:                issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [ultimate]
# gpg:                 aka "Peter Maydell <pmaydell@gmail.com>" [ultimate]
# gpg:                 aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [ultimate]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83  15CF 3C25 25ED 1436 0CDE

* remotes/pmaydell/tags/pull-target-arm-20210624: (57 commits)
  docs/system: arm: Add nRF boards description
  target/arm: Implement MTE3
  target/arm: Make VMOV scalar <-> gpreg beatwise for MVE
  target/arm: Implement MVE VADDV
  target/arm: Implement MVE VHCADD
  target/arm: Implement MVE VCADD
  target/arm: Implement MVE VADC, VSBC
  target/arm: Implement MVE VRHADD
  target/arm: Implement MVE VQDMULL (vector)
  target/arm: Implement MVE VQDMLSDH and VQRDMLSDH
  target/arm: Implement MVE VQDMLADH and VQRDMLADH
  target/arm: Implement MVE VRSHL
  target/arm: Implement MVE VSHL insn
  target/arm: Implement MVE VQRSHL
  target/arm: Implement MVE VQSHL (vector)
  target/arm: Implement MVE VQADD, VQSUB (vector)
  target/arm: Implement MVE VQDMULH, VQRDMULH (vector)
  target/arm: Implement MVE VQDMULL scalar
  target/arm: Implement MVE VQDMULH and VQRDMULH (scalar)
  target/arm: Implement MVE VQADD and VQSUB
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-06-24 15:00:34 +01:00
Peter Maydell
614dd4f3ba tcg: Make gen_dup_i32/i64() public as tcg_gen_dup_i32/i64
The Arm MVE VDUP implementation would like to be able to emit code to
duplicate a byte or halfword value into an i32.  We have code to do
this already in tcg-op-gvec.c, so all we need to do is make the
functions global.

For consistency with other functions made available to the frontends:
 * we rename to tcg_gen_dup_*
 * we expose both the _i32 and _i64 forms
 * we provide the #define for a _tl form
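
The generated value, in a plain C sketch (function names here are ad hoc,
not the new API): replicating a byte or halfword across a 32-bit value is
a multiply by a repeating-ones constant, e.g. dup8_i32(0x5a) == 0x5a5a5a5a.

    #include <stdint.h>

    static uint32_t dup8_i32(uint8_t b)   { return b * 0x01010101u; }
    static uint32_t dup16_i32(uint16_t h) { return h * 0x00010001u; }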

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210617121628.20116-10-peter.maydell@linaro.org
2021-06-21 17:12:50 +01:00
Richard Henderson
732d58979c tcg: Restart when exhausting the stack frame
Assume that we'll have fewer temps allocated after
restarting with fewer instructions.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-19 14:51:51 -07:00
Richard Henderson
c1c091948a tcg: Allocate sufficient storage in temp_allocate_frame
This function should have been updated for vector types
when they were introduced.

Fixes: d2fd745fe8
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/367
Cc: qemu-stable@nongnu.org
Tested-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-19 14:51:46 -07:00