Commit Graph

285 Commits

Aurelien Jarno
4f2331e5b6 tcg: implement real ext_i32_i64 and extu_i32_i64 ops
Implement real ext_i32_i64 and extu_i32_i64 ops. They ensure that a
32-bit value is always converted to a 64-bit value and not propagated
through the register allocator or the optimizer.

Cc: Andrzej Zaborowski <balrogg@gmail.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Blue Swirl <blauwirbel@gmail.com>
Cc: Stefan Weil <sw@weilnetz.de>
Acked-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24 11:10:54 -07:00
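A minimal C sketch of the guarantee these ops provide (the function names are illustrative, not the TCG generator API): the high half of the 64-bit result is always a well-defined extension of the 32-bit input.

    #include <stdint.h>

    /* ext_i32_i64: the 64-bit result is the sign-extension of the 32-bit input. */
    static int64_t ext_i32_i64_sketch(int32_t lo)
    {
        return (int64_t)lo;             /* bits 63..32 replicate bit 31 */
    }

    /* extu_i32_i64: the 64-bit result is the zero-extension of the 32-bit input. */
    static uint64_t extu_i32_i64_sketch(uint32_t lo)
    {
        return (uint64_t)lo;            /* bits 63..32 are guaranteed zero */
    }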
Aurelien Jarno
0632e555fc tcg: rename trunc_shr_i32 into trunc_shr_i64_i32
The op is sometimes named trunc_shr_i32 and sometimes trunc_shr_i64_i32,
and the name in the README doesn't match the name offered to the
frontends.

Always use the long name to make it clear it is a size changing op.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24 11:10:54 -07:00
Richard Henderson
80adb8fcad tcg/aarch64: use 32-bit offset for 32-bit softmmu emulation
Similar to the same fix for user-mode, except this instance
occurs on the softmmu path.  Again, the tlb addend must be
the base register, while the guest address is the index.

Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-07-23 20:19:44 -07:00
Paolo Bonzini
ffc6372851 tcg/aarch64: use 32-bit offset for 32-bit user-mode emulation
Thanks to the previous patch, it is now easy for tcg_out_qemu_ld and
tcg_out_qemu_st to use a 32-bit zero extended offset.  However, the
guest base register x28 must be the base and addr_reg must be the
index.

Reported-by: Leon Alrae <leon.alrae@imgtec.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1436974021-28978-3-git-send-email-pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-07-23 15:09:12 -07:00
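In C terms, the address the emitted load/store must form looks like the sketch below (guest_base and vaddr are illustrative names). Only the index operand of the addressing mode gets extended, which is why the guest base has to be the base register and the guest address the index.

    #include <stdint.h>

    /* Host address for a 32-bit guest in user mode: a 64-bit base plus a
       zero-extended 32-bit guest address, i.e. base + uxtw(index).  */
    static void *host_addr_sketch(uintptr_t guest_base, uint32_t vaddr)
    {
        return (void *)(guest_base + (uint64_t)vaddr);
    }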
Paolo Bonzini
6c0f0c0f12 tcg/aarch64: add ext argument to tcg_out_insn_3310
The new argument lets you pick uxtw or uxtx mode for the offset
register.  For now, all callers pass TCG_TYPE_I64 so that uxtx
is generated.  The bits for uxtx are removed from I3312_TO_I3310.

Reported-by: Leon Alrae <leon.alrae@imgtec.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1436974021-28978-2-git-send-email-pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-07-23 15:09:04 -07:00
Richard Henderson
2b7ec66f02 tcg: Mask TCGMemOp appropriately for indexing
The addition of MO_AMASK means that places that used inverted masks
need to be changed to use positive masks, and places that failed to
mask the intended bits need updating.

Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Tested-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-06-09 06:35:29 -07:00
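A hedged sketch of the indexing pattern this change enforces: once MO_AMASK adds alignment bits to TCGMemOp, opcode tables have to be indexed with an explicit positive mask of just the bits they care about. The table below is hypothetical; the MO_SIZE and MO_BSWAP values mirror the tcg.h field masks.

    /* Hypothetical table and helper; MO_SIZE/MO_BSWAP values mirror tcg.h. */
    enum { MO_SIZE = 3, MO_BSWAP = 8 };

    static const unsigned qemu_ld_opc[16] = { 0 };   /* would hold host load opcodes */

    static unsigned pick_ld_opc(unsigned memop)      /* memop is a TCGMemOp */
    {
        /* Index only on the byte-swap and size bits; the alignment bits
           added by MO_AMASK must not leak into the table index.  */
        return qemu_ld_opc[memop & (MO_BSWAP | MO_SIZE)];
    }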
Paolo Bonzini
006f8638c6 tcg: add TCG_TARGET_TLB_DISPLACEMENT_BITS
This will be used to size the TLB when more than 8 MMU modes are
used by the target.  Limitations come from the limited size of
the immediate fields (which sometimes, as in the case of Aarch64,
extend to instructions that shift the immediate).

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1424436345-37924-2-git-send-email-pbonzini@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2015-06-03 23:56:56 +02:00
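A rough illustration of the constraint (the entry size and names below are assumptions, not QEMU's): the byte offset of the last TLB entry of the last MMU mode must stay reachable within the backend's displacement.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative check: with nb_mmu_modes TLBs of 2^tlb_index_bits entries
       of entry_bytes each, does the whole TLB fit within a displacement of
       disp_bits bits (e.g. TCG_TARGET_TLB_DISPLACEMENT_BITS)?  */
    static bool tlb_fits_displacement(int nb_mmu_modes, int tlb_index_bits,
                                      int entry_bytes, int disp_bits)
    {
        uint64_t tlb_bytes = (uint64_t)nb_mmu_modes
                           * ((uint64_t)1 << tlb_index_bits) * entry_bytes;
        return tlb_bytes <= ((uint64_t)1 << disp_bits);
    }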
Richard Henderson
3972ef6f83 tcg: Push merged memop+mmu_idx parameter to softmmu routines
The extra information is not yet used but it is now available.
This requires minor changes through all of the tcg backends.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-05-14 12:15:14 -07:00
Richard Henderson
59227d5d45 tcg: Merge memop and mmu_idx parameters to qemu_ld/st
At the tcg opcode level, not at the tcg-op.h generator level.
This requires minor changes through all of the tcg backends,
but none of the cpu translators.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-05-14 12:14:55 -07:00
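A hedged sketch of the packing this makes possible (the field layout and names are assumptions for illustration, not necessarily QEMU's): the memop and the MMU index travel as one integer and are split back out where needed.

    #include <stdint.h>

    typedef uint32_t MemOpIdxSketch;                 /* hypothetical packed type */

    static MemOpIdxSketch make_memop_idx_sketch(unsigned memop, unsigned mmu_idx)
    {
        return (memop << 4) | (mmu_idx & 0xf);       /* assumed field layout */
    }

    static unsigned get_memop_sketch(MemOpIdxSketch oi)  { return oi >> 4; }
    static unsigned get_mmuidx_sketch(MemOpIdxSketch oi) { return oi & 0xf; }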
Richard Henderson
bec1631100 tcg: Change generator-side labels to a pointer
This is less about improved type checking than enabling a
subsequent change to the representation of labels.

Acked-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
Cc: Andrzej Zaborowski <balrogg@gmail.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Blue Swirl <blauwirbel@gmail.com>
Cc: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-03-13 12:28:18 -07:00
Richard Henderson
9c53889ba3 tcg-aarch64: Use 32-bit loads for qemu_ld_i32
The "old" qemu_ld opcode did not specify the size of the result,
and so we had to assume full register width.  With the new opcodes,
we can narrow the result.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-09-29 14:55:28 -04:00
Richard Henderson
3d1b2ff62c tcg: Remove TCG_TARGET_HAS_new_ldst
Since all backends have been converted, remove the compatibility code.

Acked-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-04 14:10:26 -07:00
Richard Henderson
3d9bddb30b tcg-aarch64: Make debug_frame const
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-28 09:33:55 -07:00
Richard Henderson
96d0ee7f09 tcg: Remove unreachable code in tcg_out_op and op_defs
The INDEX_op_call case has just been obsoleted; the mov and movi
cases have not been reachable for years.  Attempt to document this
both in each tcg_out_op switch, and via TCG_OPF_NOT_PRESENT.

Because of the TCG_OPF_NOT_PRESENT change, this must be done for
all targets in a single commit.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 11:13:13 -07:00
Richard Henderson
8587c30c3e tcg-aarch64: Define TCG_TARGET_INSN_UNIT_SIZE
And use tcg pointer differencing functions as appropriate.

Acked-by: Claudio Fontana <claudio.fontana@huawei.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:06:52 -07:00
Richard Henderson
4bb7a41ed6 tcg: Add INDEX_op_trunc_shr_i32
Let the backend do something special for truncation.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-28 11:06:34 -07:00
Richard Henderson
02eb19d0ec tcg: Use HOST_WORDS_BIGENDIAN
Instead of rolling a local TCG_TARGET_WORDS_BIGENDIAN.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:37 -07:00
Richard Henderson
170bf9315b tcg-aarch64: Remove w constraint
Now redundant with the type parameter to tcg_target_const_match.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:36 -07:00
Richard Henderson
f6c6afc1d4 tcg: Add TCGType parameter to tcg_target_const_match
Most 64-bit targets need to be able to ignore the high bits
of a TCG_TYPE_I32 value.

Suggested-by: Stuart Brady <sdb@zubnet.me.uk>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:36 -07:00
Stefan Weil
ad5171dbd4 tcg: Fix warning (1 bit signed bitfield entry) and replace int by bool
Static code analyzers complain about signed bitfields with only a single
bit. is_ld is used as a boolean value, so make it bool.

ppc64 already used bool for the 2nd argument is_ld of the local function
add_qemu_ldst_label.  Modify all other TCG targets to follow this
example.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:36 -07:00
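The issue in a nutshell, as a generic sketch (not the actual TCGLabelQemuLdst definition): a 1-bit signed bitfield can only represent 0 and -1, so storing 1 in it is implementation-defined, which is what analyzers flag; a plain bool states the intent.

    #include <stdbool.h>

    struct before_sketch {
        int is_ld : 1;      /* range is -1..0; 'x.is_ld = 1' typically stores -1 */
    };

    struct after_sketch {
        bool is_ld;         /* a real boolean, no bitfield surprises */
    };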
Richard Henderson
b825025f08 tcg-aarch64: Use tcg_out_mov in preference to tcg_out_movr
It's the more canonical interface.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:02 -04:00
Richard Henderson
a056c9faa4 tcg-aarch64: Prefer unsigned offsets before signed offsets for ldst
The assembler seems to prefer them, perhaps we should too.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:01 -04:00
Richard Henderson
3d4299f425 tcg-aarch64: Introduce tcg_out_insn_3312, _3310, _3313
Replace aarch64_ldst_op_data with AArch64LdstType, as it wasn't encoded
with the proper shift for its field and was confusing.

Merge aarch64_ldst_op_data, AArch64LdstType, and a few stray opcode bits
into a single I3312_* argument, eliminating some magic numbers from the
helper functions.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:01 -04:00
Richard Henderson
dc73dfd4bc tcg-aarch64: Merge aarch64_ldst_get_data/type into tcg_out_op
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:01 -04:00
Richard Henderson
edd8824cd4 tcg-aarch64: Introduce tcg_out_insn_3507
Cleaning up the implementation of REV and REV16 at the same time.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:01 -04:00
Richard Henderson
e81864a109 tcg-aarch64: Support stores of zero
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:01 -04:00
Richard Henderson
de61d14fa7 tcg-aarch64: Implement TCG_TARGET_HAS_new_ldst
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:01 -04:00
Richard Henderson
667b1cdd4e tcg-aarch64: Pass qemu_ld/st arguments directly
Instead of passing them the "args" array.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:00 -04:00
Richard Henderson
9e4177ad6d tcg-aarch64: Use TCGMemOp in qemu_ld/st
Making the bswap conditional on the memop instead of a compile-time test.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:00 -04:00
Richard Henderson
dc0c8aaf2c tcg-aarch64: Use ADR to pass the return address to the ld/st helpers
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:00 -04:00
Richard Henderson
ae7ab46aa8 tcg-aarch64: Use tcg_out_call for qemu_ld/st
In some cases, a direct branch will be in range.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:00 -04:00
Richard Henderson
6f4724672c tcg-aarch64: Avoid add with zero in tlb load
Some guest env structures are small enough that the TLB can be reached with only a 12-bit addition.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:00 -04:00
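A hedged sketch of the test (illustrative, not the backend's actual code): if the TLB's offset within env fits the 12-bit unsigned immediate of an AArch64 ADD, a single add reaches it; otherwise the offset has to be materialized first.

    #include <stdbool.h>
    #include <stdint.h>

    /* Can tlb_offset be folded into a single ADD (immediate)?  The plain
       immediate field is 12 bits wide (ignoring the LSL #12 variant).  */
    static bool tlb_offset_fits_add_imm(intptr_t tlb_offset)
    {
        return tlb_offset >= 0 && tlb_offset <= 0xfff;
    }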
Richard Henderson
38d195aa05 tcg-aarch64: Implement tcg_register_jit
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:13:00 -04:00
Richard Henderson
95f72aa90a tcg-aarch64: Introduce tcg_out_insn_3314
Combines 4 other inline functions and tidies the prologue.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:59 -04:00
Richard Henderson
d82b78e48b tcg-aarch64: Reuse LR in translated code
It's obviously call-clobbered, but is otherwise unused.
Repurpose it as the TCG temporary.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:59 -04:00
Richard Henderson
3d9e69a238 tcg-aarch64: Use CBZ and CBNZ
A compare and branch against zero happens at the start of
every single TB.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:59 -04:00
Richard Henderson
cae1f6f3e6 tcg-aarch64: Create tcg_out_brcond
Rearrange code to put the compare and branch in the same place.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:59 -04:00
Richard Henderson
81d8a5ee19 tcg-aarch64: Use symbolic names for branches
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:59 -04:00
Richard Henderson
c6e310d938 tcg-aarch64: Use adrp in tcg_out_movi
Loading a QEMU pointer as an immediate happens often.  E.g.

- exit_tb $0x7fa8140013
+ exit_tb $0x7f81ee0013
...
- :  d2800260        mov     x0, #0x13
- :  f2b50280        movk    x0, #0xa814, lsl #16
- :  f2c00fe0        movk    x0, #0x7f, lsl #32
+ :  90ff1000        adrp    x0, 0x7f81ee0000
+ :  91004c00        add     x0, x0, #0x13

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:58 -04:00
Richard Henderson
d8918df577 tcg-aarch64: Special case small constants in tcg_out_movi
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:58 -04:00
Richard Henderson
4ec4f0bd56 tcg-aarch64: Use ORRI in tcg_out_movi
The subset of logical immediates that we support is quite quick to test,
and such constants are quite common to want to load.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:58 -04:00
Richard Henderson
dfeb5fe770 tcg-aarch64: Use MOVN in tcg_out_movi
When profitable, initialize the register with MOVN instead of MOVZ,
before setting the remaining lanes with MOVK.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:58 -04:00
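A hedged sketch of the profitability test (illustrative, not the actual tcg_out_movi): count the 16-bit lanes that are all-ones versus all-zeros and start from MOVN when inverting first leaves fewer lanes to patch with MOVK.

    #include <stdbool.h>
    #include <stdint.h>

    /* Prefer MOVN when more 16-bit lanes are 0xffff than 0x0000: every lane
       already matching the starting value needs no MOVK afterwards.  */
    static bool prefer_movn(uint64_t value)
    {
        int zero_lanes = 0, ones_lanes = 0;
        for (int i = 0; i < 64; i += 16) {
            uint16_t lane = value >> i;
            zero_lanes += (lane == 0x0000);
            ones_lanes += (lane == 0xffff);
        }
        return ones_lanes > zero_lanes;
    }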
Richard Henderson
929f8b5550 tcg-aarch64: Use TCGType and TCGMemOp constants
Rather than raw constants that could mean anything.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:58 -04:00
Richard Henderson
8bf56493f1 tcg-aarch64: Use intptr_t appropriately
As opposed to tcg_target_long.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16 12:12:58 -04:00
Richard Henderson
582ab779c5 tcg-aarch64: Introduce tcg_out_insn_3405
Cleaning up the implementation of tcg_out_movi at the same time.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 11:00:15 -07:00
Richard Henderson
8678b71ce6 tcg-aarch64: Support div, rem
Clean up multiply at the same time.

For remainder, generic code will produce mul+sub,
whereas we can implement with msub.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 11:00:10 -07:00
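The arithmetic both paths implement, as a C sketch: the generic expansion computes the quotient, multiplies it back and subtracts, and on AArch64 the multiply-and-subtract step folds into a single MSUB.

    #include <stdint.h>

    /* rem = a - (a / b) * b, assuming b != 0; MSUB computes a - q * b in one
       instruction once q = a / b is available.  */
    static int64_t rem_via_msub_sketch(int64_t a, int64_t b)
    {
        int64_t q = a / b;
        return a - q * b;
    }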
Richard Henderson
1fcc9ddfb3 tcg-aarch64: Support muluh, mulsh
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 11:00:07 -07:00
Richard Henderson
c6e929e784 tcg-aarch64: Support add2, sub2
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 11:00:04 -07:00
Richard Henderson
b3c56df769 tcg-aarch64: Support deposit
Also tidy the implementation of ubfm, sbfm, extr in order to share code.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 11:00:01 -07:00
Richard Henderson
ed7a0aa8bc tcg-aarch64: Use tcg_out_insn for setcond
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 10:59:58 -07:00
Richard Henderson
04ce397b33 tcg-aarch64: Support movcond
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 10:59:55 -07:00
Richard Henderson
14b155ddc4 tcg-aarch64: Support andc, orc, eqv, not, neg
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 10:59:52 -07:00
Richard Henderson
e029f29385 tcg-aarch64: Handle constant operands to and, or, xor
Handle a simplified set of logical immediates for the moment.

The way gcc and binutils do it, with 52k worth of tables, and
a binary search depth of log2(5334) = 13, seems slow for the
most common cases.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 10:59:47 -07:00
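A hedged sketch of a "simplified set" in the same spirit (not the backend's actual is_limm): accept only values whose set bits, or whose clear bits, form one contiguous run. That is cheap to test, covers the constants that come up most often, and every such value is a valid AArch64 logical immediate.

    #include <stdbool.h>
    #include <stdint.h>

    /* True if the set bits of v form a single contiguous run: adding the
       lowest set bit clears the run, leaving a power of two (or zero when
       the run reaches bit 63).  */
    static bool is_ones_run(uint64_t v)
    {
        uint64_t t = v + (v & -v);
        return (t & (t - 1)) == 0;
    }

    /* Simplified test: one run of ones, or one run of zeros (the complement
       is a run of ones).  All-zero and all-one values are not encodable.  */
    static bool is_simple_logical_imm(uint64_t v)
    {
        if (v == 0 || v == ~(uint64_t)0) {
            return false;
        }
        return is_ones_run(v) || is_ones_run(~v);
    }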
Richard Henderson
90f1cd9138 tcg-aarch64: Handle constant operands to add, sub, and compare
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 10:59:44 -07:00
Richard Henderson
7d11fc7c2b tcg-aarch64: Implement mov with tcg_out_insn
Avoid the magic numbers in the current implementation.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 10:59:41 -07:00
Richard Henderson
096c46c0ff tcg-aarch64: Introduce tcg_out_insn_3401
This merges the implementation of tcg_out_addi and tcg_out_subi.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 10:59:38 -07:00
Richard Henderson
df9351e372 tcg-aarch64: Convert shift insns to tcg_out_insn
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 10:59:35 -07:00
Richard Henderson
50573c66eb tcg-aarch64: Introduce tcg_out_insn
Converting the add/sub (3.5.2) and logical shifted (3.5.10) instruction
groups to the new scheme.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14 10:59:13 -07:00
Richard Henderson
f8e2484389 tcg-aarch64: Remove nop from qemu_st slow path
Commit 023261ef85 failed to remove a
nop that's no longer required.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08 21:23:25 -08:00
Richard Henderson
523fdc08cc tcg-aarch64: Simplify tcg_out_ldst_9 encoding
At first glance the code appears to be using one's complement encoding,
a la AArch32.  Except that the constant is "off", creating a complicated
split-field two's complement encoding.

Much clearer to just use a normal mask and shift.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08 21:23:25 -08:00
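A hedged sketch of the "normal mask and shift" approach for the signed 9-bit offset (the imm9 field of the AArch64 unscaled load/store encodings sits at bits 20..12; the helper name is illustrative):

    #include <stdint.h>

    /* Two's complement offset, masked to 9 bits and shifted into imm9.  */
    static uint32_t encode_imm9_sketch(uint32_t insn, int32_t offset)
    {
        return insn | (((uint32_t)offset & 0x1ff) << 12);
    }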
Richard Henderson
017a86f7ad tcg-aarch64: Use intptr_t appropriately
As opposed to tcg_target_long.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08 21:23:25 -08:00
Richard Henderson
2e796c7621 tcg-aarch64: Remove the shift_imm parameter from tcg_out_cmp
It was unused.  Let's not overcomplicate things before we need them.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08 21:23:25 -08:00
Richard Henderson
8d8db193f2 tcg-aarch64: Hoist common argument loads in tcg_out_op
This reduces the code size of the function significantly.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08 21:23:16 -08:00
Richard Henderson
a51a6b6ad5 tcg-aarch64: Don't handle mov/movi in tcg_out_op
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08 21:23:15 -08:00
Richard Henderson
f029341494 tcg-aarch64: Set ext based on TCG_OPF_64BIT
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08 21:23:09 -08:00
Richard Henderson
7763ffa017 tcg-aarch64: Change all ext variables to TCGType
We assert that the values for _I32 and _I64 are 0 and 1 respectively.
This will make a couple of functions declared by tcg.c cleaner.

Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08 21:23:09 -08:00
Richard Henderson
3353d0dcc3 tcg-aarch64: Remove redundant CPU_TLB_ENTRY_BITS check
Removed from other targets in 56bbc2f967.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08 21:23:02 -08:00
Richard Henderson
f713d6ad7b tcg: Add qemu_ld_st_i32/64
Step two in the transition, adding the new ldst opcodes.  Keep the old
opcodes around until all backends support the new opcodes.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-10 13:19:21 -07:00
Richard Henderson
9ecefc84dd tcg: Add tcg-be-ldst.h
Move TCGLabelQemuLdst and related stuff out of tcg.h.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-10 11:44:26 -07:00
Richard Henderson
023261ef85 tcg-aarch64: Update to helper_ret_*_mmu routines
A minimal update to use the new helpers with the return address argument.

Tested-by: Claudio Fontana <claudio.fontana@linaro.org>
Reviewed-by: Claudio Fontana <claudio.fontana@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-10 11:44:25 -07:00
Richard Henderson
e58eb53413 exec: Split softmmu_defs.h
The _cmmu helpers can be moved to exec-all.h.  The helpers that are
used from TCG will shortly need access to tcg_target_long so move
their declarations into tcg.h.

This requires minor include adjustments to all TCG backends.

Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-09-02 09:08:30 -07:00
Richard Henderson
a05b5b9be0 tcg: Change tcg_out_ld/st offset to intptr_t
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-09-02 09:08:30 -07:00
Richard Henderson
2ba7fae29e tcg: Change relocation offsets to intptr_t
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-09-02 09:08:29 -07:00
Richard Henderson
b93949ef6a tcg: Change flush_icache_range arguments to uintptr_t
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-09-02 09:08:29 -07:00
Richard Henderson
03271524b6 tcg: Add muluh and mulsh opcodes
Use them in places where mulu2 and muls2 are used.
Optimize mulx2 with dead low part to mulxh.

Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-09-02 09:08:29 -07:00
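What the new opcodes compute, as a C sketch using a 128-bit intermediate (the helper names are illustrative; __int128 is a GCC/Clang extension):

    #include <stdint.h>

    /* muluh: upper 64 bits of the unsigned 64x64 -> 128-bit product.  */
    static uint64_t muluh_sketch(uint64_t a, uint64_t b)
    {
        return (uint64_t)(((unsigned __int128)a * b) >> 64);
    }

    /* mulsh: upper 64 bits of the signed 64x64 -> 128-bit product.  */
    static int64_t mulsh_sketch(int64_t a, int64_t b)
    {
        return (int64_t)(((__int128)a * b) >> 64);
    }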
Richard Henderson
f290e4988d Merge git://github.com/hw-claudio/qemu-aarch64-queue into tcg-next 2013-07-15 13:21:10 -07:00
Jani Kokkonen
c6d8ed24b4 tcg/aarch64: Implement tlb lookup fast path
Supports CONFIG_QEMU_LDST_OPTIMIZATION

Signed-off-by: Jani Kokkonen <jani.kokkonen@huawei.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
2013-07-15 13:13:46 +02:00
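A hedged C sketch of what the emitted fast path checks (the structure and names are simplified assumptions, not the actual softmmu definitions; the backend emits this comparison inline as AArch64 instructions and branches to the slow-path helper on a miss):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        uint64_t addr_read;              /* tag for loads (simplified) */
        uintptr_t addend;                /* host minus guest address delta */
    } TlbEntrySketch;

    /* Index the per-MMU-mode TLB with the page bits of the guest address and
       compare the stored tag; on a hit, add the addend for the host address. */
    static bool tlb_hit_sketch(const TlbEntrySketch *tlb, int tlb_bits,
                               int page_bits, uint64_t vaddr, void **host)
    {
        size_t idx = (vaddr >> page_bits) & (((size_t)1 << tlb_bits) - 1);
        uint64_t tag = vaddr & ~(((uint64_t)1 << page_bits) - 1);

        if (tlb[idx].addr_read != tag) {
            return false;                /* miss: take the slow path */
        }
        *host = (void *)(uintptr_t)(vaddr + tlb[idx].addend);
        return true;
    }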
Richard Henderson
ca675f46e6 tcg: Split rem requirement from div requirement
There are several hosts with only a "div" insn.  Remainder is computed
manually from the quotient and inputs.  We can do this generically.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-07-09 07:14:09 -07:00
Claudio Fontana
b1f6dc0d2a tcg/aarch64: implement ldst 12bit scaled uimm offset
Implement the 12-bit scaled unsigned immediate offset variant of
LDR/STR.  This improves code size by avoiding the movi + ldst_r
sequence for naturally aligned offsets in range.

Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2013-07-03 14:43:11 +02:00
Jani Kokkonen
6a91c7c978 tcg/aarch64: implement user mode qemu ld/st
also put aarch64 in the list of archs that do not need an ldscript.

Signed-off-by: Jani Kokkonen <jani.kokkonen@huawei.com>
Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 51AF40EE.1000104@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2013-06-12 16:20:23 +01:00
Claudio Fontana
31f1275b90 tcg/aarch64: implement sign/zero extend operations
implement the optional sign/zero extend operations with the dedicated
aarch64 instructions.

Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 51AC9A58.40502@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2013-06-12 16:20:23 +01:00
Claudio Fontana
9c4a059df3 tcg/aarch64: implement byte swap operations
implement the optional byte swap operations with the dedicated
aarch64 instructions.

Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 51AC9A33.9050003@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2013-06-12 16:20:23 +01:00
Claudio Fontana
7deea126b2 tcg/aarch64: implement AND/TEST immediate pattern
add functions to AND/TEST registers with immediate patterns.

Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 51AC9A0C.3090303@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2013-06-12 16:20:22 +01:00
Claudio Fontana
36fac14a64 tcg/aarch64: improve arith shifted regs operations
for arith operations, add SUBS, ANDS, ADDS and add a shift parameter
so that all arith instructions can make use of shifted registers.

Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 51AC998B.7070506@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2013-06-12 16:20:22 +01:00
Claudio Fontana
4a136e0a6b tcg/aarch64: implement new TCG target for aarch64
add preliminary support for TCG target aarch64.

Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 51A5C596.3090108@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2013-06-12 16:20:22 +01:00