Good enough to run some instructions before things go awry.
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Now passes tcg_add_target_add_op_defs assertions, but
not complete enough to function.
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Just enough to compile, assuming you edit config-host.mak manually.
It will still abort at runtime, due to missing brcond2, setcond2, mulu2.
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The new ELFv2 ABI, used by default on powerpc64le-linux hosts,
introduced some changes that are incompatible with code currently
generated by the ppc64 TCG target. In particular, we no longer
use function descriptors.
This patch adds support for the ELFv2 ABI in the ppc64 TCG
function call and function prologue sequences.
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Ulrich Weigand <ulrich.weigand@de.ibm.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The correct test uses the _CALL_AIX macro, not a host-specific macro.
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
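A minimal sketch of what that distinction governs, with the descriptor layout
taken from the ELFv1 ABI and the variable names assumed for illustration:

    /* Sketch: choose the call scheme from the ABI macro, not the host OS. */
    #ifdef _CALL_AIX
        /* ELFv1/AIX: "fn" points at a function descriptor whose first
           doubleword is the entry point and whose second is the TOC. */
        uintptr_t entry = ((uintptr_t *)fn)[0];
        uintptr_t toc   = ((uintptr_t *)fn)[1];
    #else
        /* ELFv2: no descriptors; "fn" is the entry point itself. */
        uintptr_t entry = (uintptr_t)fn;
    #endif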
The calling convention reserves space for the 8 register parameters on
the stack, so using only 6*8=48 as the offset was wrong. We never saw
this bug because we don't have any helpers with more than 5 parameters.
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
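For illustration, a hedged sketch of the corrected offset under that layout
(the macro name is an assumption, not taken from the patch):

    /* 48-byte linkage area (6 doublewords) plus the 64-byte parameter save
       area reserved for the 8 register parameters; stack arguments only
       begin beyond both. */
    #define TCG_TARGET_CALL_STACK_OFFSET  (6 * 8 + 8 * 8)   /* == 112 */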
These values are private to tcg.c; we don't need to expose
this nonsense to the translators.
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Rather than using tcg_out32 and opcodes directly. This allows us
to remove LD_ADDR and CMP_L macros.
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
In order to be able to use tcg_out_ld/st sensibly with scratch
registers, assert only when we'd incorrectly clobber a scratch.
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Code movement only. This will allow us to make use of the
other tcg_out_* functions in tidying their implementations.
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
With the "old" ldst ops we didn't know the real width of the
result of the load, but with the "new" ldst ops we do.
Signed-off-by: Richard Henderson <rth@twiddle.net>
* remotes/bonzini/softmmu-smap: (33 commits)
target-i386: cleanup x86_cpu_get_phys_page_debug
target-i386: fix protection bits in the TLB for SMEP
target-i386: support long addresses for 4MB pages (PSE-36)
target-i386: raise page fault for reserved bits in large pages
target-i386: unify reserved bits and NX bit check
target-i386: simplify pte/vaddr calculation
target-i386: raise page fault for reserved physical address bits
target-i386: test reserved PS bit on PML4Es
target-i386: set correct error code for reserved bit access
target-i386: introduce support for 1 GB pages
target-i386: introduce do_check_protect label
target-i386: tweak handling of PG_NX_MASK
target-i386: commonize checks for PAE and non-PAE
target-i386: commonize checks for 4MB and 4KB pages
target-i386: commonize checks for 2MB and 4KB pages
target-i386: fix coding standards in x86_cpu_handle_mmu_fault
target-i386: simplify SMAP handling in MMU_KSMAP_IDX
target-i386: fix kernel accesses with SMAP and CPL = 3
target-i386: move check_io helpers to seg_helper.c
target-i386: rename KSMAP to KNOSMAP
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Unify pieces of cpu-all.h, exec-all.h, softmmu_exec.h and tcg/tcg.h
into a single new header file with all helpers.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We expose a generic helper "tcg_gen_extr_i64_tl" for 64bit targets, but the
same function for 32bit targets is a misnomer and refers to an invalid function
name.
Fix up the definition to point to the correct internal helper names instead.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Since all backends have been converted, remove the compatibility code.
Acked-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The first non-register argument isn't placed at offset 0.
Cc: qemu-stable@nongnu.org
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
For a 64-bit host, the high bits of a register after a 32-bit operation
are undefined. Adjust the temps mask for all 32-bit ops to reflect that.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Adjust the FDE to point to the code_buffer after we've copied it
to the image, rather than requiring that the backend set it prior.
This allows the backend to use read-only storage for its data.
Signed-off-by: Richard Henderson <rth@twiddle.net>
This will let us find all the info from the hash table.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Rather than special casing them, use the standard mechanisms
for tcg helper generation.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Rather than include helper.h with N values of GEN_HELPER, include a
secondary file that sets up the macros to include helper.h. This
minimizes the files that must be rebuilt when changing the macros
for file N.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
If either the high or low pair can be resolved, we can
simplify to either a constant or to a 32-bit comparison.
Signed-off-by: Richard Henderson <rth@twiddle.net>
* remotes/rth/tcg-mips: (24 commits)
tcg-mips: Enable direct chaining of TBs
tcg-mips: Simplify movcond
tcg-mips: Simplify brcond2
tcg-mips: Improve setcond eq/ne vs zeros
tcg-mips: Simplify setcond2
tcg-mips: Simplify brcond
tcg-mips: Simplify setcond
tcg-mips: Commonize opcode implementations
tcg-mips: Improve add2/sub2
tcg-mips: Hoist args loads
tcg-mips: Fix subtract immediate range
tcg-mips: Name the opcode enumeration
tcg-mips: Use EXT for AND on mips32r2
tcg-mips: Use T9 for TCG_TMP1
tcg-mips: Introduce TCG_TMP0, TCG_TMP1
tcg-mips: Rearrange register allocation
tcg-mips: Convert to new_ldst
tcg-mips: Convert to new qemu_l/st helpers
tcg-mips: Move softmmu slow path out of line
tcg-mips: Split large ldst offsets
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Now that the code_gen_buffer is constrained to not cross 256mb
regions, we are assured that we can use J to reach another TB.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Use the same table to fold comparisons as with setcond.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Emitting a single branch instead of (up to) 3, using setcond2
to generate the composite compare.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The original code results in one too many insns per zero
present in the input. And since comparing 64-bit numbers
vs zero is common...
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Using tcg_unsigned_cond and tcg_high_cond.
Also, move the function up in the file for future cleanups.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Use the same table to fold comparisons as with setcond.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Use a table to fold comparisons to less-than.
Also, move the function up in the file for further simplifications.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Most opcodes fall in to one of a couple of patterns.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Since we must use ADDIU, we would generate incorrect code for -32768.
Leaving off the subtract of +32768 makes things easier for a follow-on patch.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
At the same time, tidy deposit by introducing tcg_out_opc_bf.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
T0 is an argument register for the n32 and n64 abis. T9 is the call
address register for the abis, and is more directly under the control
of the backend.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Use these instead of hard-coding the registers to use for temporaries.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Use FP (also known as S8) as a normal call-saved register.
Include T0 in the allocation order and call-clobbered list
even though it's currently used as a TCG temporary.
Put the argument registers at the end of the allocation order.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
In addition, fill delay slots calling the helpers and tail
call to the store helpers.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
At the same time, tidy up the call helpers, avoiding a memory reference.
Split out several subroutines. Use TCGMemOp constants. Make endianness
selectable at runtime.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
For userland builds calls will normally be in range,
and for the exit_tb opcode the branch to the epilogue.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Broken since dddbb2e1e3.
Do all the rest of the things that tcg_out_op did before
and after the big switch statement.
Signed-off-by: Richard Henderson <rth@twiddle.net>
There are a variety of common cases for which we can use carry tricks to
avoid a conditional branch. On very new hardware, use LOAD ON CONDITION
instead of a conditional branch.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Elides two insns from the sequence. The resulting tlb compare
sequence is satisfyingly minimal:
risbg %r2,%r8,51,186,56
risbg %r3,%r8,61,178,0
cg %r3,904(%r10,%r2)
lg %r2,920(%r10,%r2)
jlh tlb_miss
Signed-off-by: Richard Henderson <rth@twiddle.net>
Commit af3cbfbe80 hoisted some "common"
loads of the temporary type, forgetting that the types could differ
during truncating moves. This affects the correctness of the memory
offset on big-endian hosts.
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The INDEX_op_call case has just been obsoleted; the mov and movi
cases have not been reachable for years. Attempt to document this
both in each tcg_out_op switch, and via TCG_OPF_NOT_PRESENT.
Because of the TCG_OPF_NOT_PRESENT change, this must be done for
all targets in a single commit.
Signed-off-by: Richard Henderson <rth@twiddle.net>
The move opcodes are special in that their constraints must cover
all available registers. So instead of checking the constraints,
just use the available registers.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Avoid allocating a tcg temporary to hold the constant address,
and instead place it directly into the op_call arguments.
At the same time, convert to the newly introduced tcg_out_call
backend function, rather than invoking tcg_out_op for the call.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Now that all backends do define TCG_TARGET_INSN_UNIT_SIZE,
remove the fallback definition.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
And use tcg pointer differencing functions as appropriate.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
And use tcg pointer differencing functions as appropriate.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Using a 16-byte aligned structure achieves best results, both for code
cleanliness and compiled code size. However, this means that we can't
use the trick of encoding the slot number into the low 2 bits.
Thankfully, we only ever use slot2, so make that explicit in the names
of the relocation functions, and drop the code for other slots.
Signed-off-by: Richard Henderson <rth@twiddle.net>
And use tcg pointer differencing functions as appropriate.
Acked-by: Claudio Fontana <claudio.fontana@huawei.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
And use tcg pointer differencing functions as appropriate.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
And use tcg pointer differencing functions as appropriate.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
And use tcg pointer differencing functions as appropriate.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
And use tcg pointer differencing functions as appropriate.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
And use tcg pointer differencing functions as appropriate.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
To be defined by the tcg backend based on the elemental unit of the ISA.
During the transition, allow TCG_TARGET_INSN_UNIT_SIZE to be undefined,
which allows us to default tcg_insn_unit to the current uint8_t.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
To avoid C undefined behaviour when patching generated code,
provide wrappers tcg_patch8/16/32/64 which use the usual memcpy
trick, and use them in the i386 backend.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
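A minimal sketch of one such wrapper, following the memcpy trick described
above (the exact pointer type is an assumption):

    #include <stdint.h>
    #include <string.h>

    /* Sketch: patch a 32-bit value into generated code without a
       type-punned, potentially unaligned (and thus undefined) store. */
    static inline void tcg_patch32(uint8_t *code_ptr, uint32_t val)
    {
        memcpy(code_ptr, &val, sizeof(val));
    }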
Avoid stores to unaligned addresses in TCG code generation, by using the
usual memcpy() approach. (Using bswap.h would drag a lot of QEMU baggage
into TCG, so it's simpler just to do direct memcpy() here.)
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Use TCGReg everywhere appropriate. Use int32_t for all arguments
that may be registers or immediate constants. Merge tcg_out_addi
into its only caller.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Use sextract instead of raw bit shifting for the tests. Introduce
a new check_fit_ptr macro to make it clear we're looking at pointers.
Signed-off-by: Richard Henderson <rth@twiddle.net>
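A hedged sketch of what the new predicate might look like, using sextract64
from qemu/bitops.h (the exact signature is assumed):

    /* Sketch: "val" fits in a signed field of "bits" bits exactly when
       sign-extending its low bits reproduces the original value. */
    static inline bool check_fit_ptr(intptr_t val, unsigned int bits)
    {
        return val == sextract64(val, 0, bits);
    }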
Using the 32-bit SMUL is a tad more efficient than
resorting to extending and using the 64-bit MULX.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Quite a lot of effort was spent composing and decomposing 64-bit
quantities in registers, when we should just create them and leave
them as one 64-bit register.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Replace with SPARC64 define. Soon even sparcv8plus will use
64-bit registers as far as TCG is concerned.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Merge remote-tracking branch 'remotes/rth/tags/tcg-next-20140422' into staging
Pull tcg 2014-04-22
# gpg: Signature made Tue 22 Apr 2014 22:00:04 BST using RSA key ID 4DD0279B
# gpg: Can't check signature: public key not found
* remotes/rth/tags/tcg-next-20140422:
tcg: Use HOST_WORDS_BIGENDIAN
tcg: Fix fallback from muls2_i64 to mulu2_i64
tcg: Use tcg_gen_mulu2_i32 in tcg_gen_muls2_i32
tcg: Relax requirement for mulu2_i32 on 32-bit hosts
tcg-s390: Remove W constraint
tcg-sparc: Use the type parameter to tcg_target_const_match
tcg-ppc64: Use the type parameter to tcg_target_const_match
tcg-aarch64: Remove w constraint
tcg: Add TCGType parameter to tcg_target_const_match
tcg: Fix out of range shift in deposit optimizations
tci: Mask shift counts to avoid undefined behavior
tcg: Mask shift quantities while folding
tcg: Use "unspecified behavior" for shifts
tcg: Fix warning (1 bit signed bitfield entry) and replace int by bool
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Brown Bag sez, don't put the fallback code into the wrong function.
Also, check for muluh_i64 and use tcg_gen_mulu2_i64 instead of raw ops.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Instead require either mulu2_i32 or muluh_i32. The code in tcg-op.h
already supports looking for both; this looks like the remnant of an
earlier, incomplete conversion.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Most 64-bit targets need to be able to ignore the high bits
of a TCG_TYPE_I32 value.
Suggested-by: Stuart Brady <sdb@zubnet.me.uk>
Signed-off-by: Richard Henderson <rth@twiddle.net>
By inspection, for a deposit(x, y, 0, 64), we'd have a shift of (1<<64)
and everything else falls apart. But we can reuse the existing deposit
logic to get this right.
Signed-off-by: Richard Henderson <rth@twiddle.net>
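One way to avoid the undefined full-width shift, shown only as a hedged
illustration (the patch itself reuses the existing deposit logic, and the
variable names here are assumed):

    /* Sketch: building the deposit mask.  A naive (1ull << len) is
       undefined when len == 64, so treat that case specially. */
    uint64_t mask;
    if (len == 64) {
        mask = ~0ull;                      /* deposit replaces the whole word */
    } else {
        mask = ((1ull << len) - 1) << pos;
    }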
The TCG result would be undefined, but we can at least produce one
plausible result and avoid triggering the wrath of analysis tools.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
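As a hedged illustration of producing one plausible result, the shift count
can simply be reduced modulo the operand width (this is a sketch, not the
interpreter's actual code):

    /* Sketch: a 64-bit left shift whose host behaviour is defined for
       any count value. */
    static inline uint64_t shl64_defined(uint64_t value, uint64_t count)
    {
        return value << (count & 63);
    }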
Change the definition such that shifts are not allowed to crash
for any input.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Static code analyzers complain about signed bitfields with only a single
bit. is_ld is used as a boolean value, so make it bool.
ppc64 already used bool for the 2nd argument is_ld of the local function
add_qemu_ldst_label. Modify all other TCG targets to follow this
example.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Still inline, but updated to the new routines. Always use the LE
helpers, reusing the bswap between the fast and slow paths.
Signed-off-by: Richard Henderson <rth@twiddle.net>
This sequencing requires 5 stop bits instead of 6, and has room left
over to pre-load the tlb addend, and bswap data prior to being stored.
Signed-off-by: Richard Henderson <rth@twiddle.net>
The assembler seems to prefer them, perhaps we should too.
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Replace aarch64_ldst_op_data with AArch64LdstType, as it wasn't encoded
for the proper shift for the field and was confusing.
Merge aarch64_ldst_op_data, AArch64LdstType, and a few stray opcode bits
into a single I3312_* argument, eliminating some magic numbers from the
helper functions.
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Cleaning up the implementation of REV and REV16 at the same time.
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Making the bswap conditional on the memop instead of a compile-time test.
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
In some cases, a direct branch will be in range.
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Some guest env are small enough to reach the tlb with only a 12-bit addition.
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Combines 4 other inline functions and tidies the prologue.
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
It's obviously call-clobbered, but is otherwise unused.
Repurpose it as the TCG temporary.
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
A compare and branch against zero happens at the start of
every single TB.
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Rearrange code to put the compare and branch in the same place.
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The subset of logical immediates that we support is quite quick to test,
and such constants are quite common to want to load.
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
When profitable, initialize the register with MOVN instead of MOVZ,
before setting the remaining lanes with MOVK.
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
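A hedged sketch of how "when profitable" might be decided: count the 16-bit
lanes that are all zeros versus all ones and start from whichever leaves
fewer lanes for MOVK to fix up (the helper name is assumed):

    /* Sketch: prefer MOVN when more halfwords are 0xffff than 0x0000,
       since starting from the inverted value then needs fewer MOVKs. */
    static bool prefer_movn(uint64_t value)
    {
        int zeros = 0, ones = 0;
        int shift;

        for (shift = 0; shift < 64; shift += 16) {
            uint16_t lane = value >> shift;
            zeros += (lane == 0x0000);
            ones  += (lane == 0xffff);
        }
        return ones > zeros;
    }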
Rather than raw constants that could mean anything.
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The arm ldrd/strd insns must cause alignment traps, whereas
at least for armv7 ldr/str must handle unaligned operations.
While this is hardly the only problem facing user-only emu,
this solves one problem for i386 on armv7 emulation.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reported-by: Huw Davies <huw@codeweavers.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
All of the helpers with the explicit big/little endian option
require the return address as a parameter. Acquire this via
a trampoline.
Move the load of areg0 into the trampoline.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Pass address registers explicitly, rather than as indices of args[].
It's two argument registers either way. Use more TCGReg as appropriate.
Signed-off-by: Richard Henderson <rth@twiddle.net>
We were computing the full address into %o0 and then not using it.
Adjust some of the computation to rely less on having to pull immediate
values into registers.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Cleaning up the implementation of tcg_out_movi at the same time.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
Clean up multiply at the same time.
For remainder, generic code will produce mul+sub,
whereas we can implement with msub.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
Also tidy the implementation of ubfm, sbfm, extr in order to share code.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
Handle a simplified set of logical immediates for the moment.
The way gcc and binutils do it, with 52k worth of tables, and
a binary search depth of log2(5334) = 13, seems slow for the
most common cases.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
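For illustration only, a much simplified test along these lines (not the
actual predicate): accept any value that is a single contiguous run of one
bits, which already covers many common masks cheaply:

    /* Sketch: true when "val" is a non-zero contiguous run of ones, e.g.
       0x00ff0000 or 0x0000fffc.  Adding the lowest set bit makes the run
       carry into a single bit, which the power-of-two test then catches. */
    static bool is_simple_run_of_ones(uint64_t val)
    {
        if (val == 0) {
            return false;
        }
        val += val & -val;
        return (val & (val - 1)) == 0;
    }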
Avoid the magic numbers in the current implementation.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
This merges the implementation of tcg_out_addi and tcg_out_subi.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
Converting the add/sub (3.5.2) and logical shifted (3.5.10) instruction
groups to the new scheme.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
Commit 023261ef85 failed to remove a
nop that's no longer required.
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
At first glance the code appears to be using 1's complement encoding,
a-la AArch32. Except that the constant is "off", creating a complicated
split field 2's complement encoding.
Much clearer to just use a normal mask and shift.
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
It was unused. Let's not overcomplicate things before we need them.
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This reduces the code size of the function significantly.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
We assert that the values for _I32 and _I64 are 0 and 1 respectively.
This will make a couple of functions declared by tcg.c cleaner.
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Removed from other targets in 56bbc2f967.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Win32 doesn't have a cpuid.h, and MacOSX may have one but without
the __cpuid() function we use, which means that commit 9d2eec20
broke the build for those platforms. Fix this by tightening up
our configure cpuid.h check to test that the functions we need
are present, and adding some missing #ifdef guards in
tcg/i386/tcg-target.c.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
These three-operand shift instructions do not require the shift count
to be placed into ECX. This reduces the number of mov insns required,
with the mere addition of a new register constraint.
Don't attempt to get rid of the matching constraint, as that's impossible
to manipulate with just a new constraint. In addition, constant shifts
still need the matching constraint.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Note that the optimizer cannot simplify ANDC X,Y,C to AND X,Y,~C
so we must handle constants in the implementation of andc.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
These are not needed by users of tcg-target.h. No need to recompile
when we adjust them.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Recognize 0 operand to andc, and -1 operands to and, orc, eqv.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Like we already do for SUB and XOR.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Given, of course, an appropriate constant. These could be generated
from the "canonical" operation for inversion on the guest, or via
other optimizations.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The shl_i32 op might set some bits of the unused 32 high bits of the
mask. Fix that by clearing the unused 32 high bits for all 32-bit ops
except load/store which operate on tl values.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Known-zero bits optimization is a great idea that helps to generate more
optimized code. However the current implementation only works in very few
cases as the computed mask is not saved.
Fix this so that it really works.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
32-bit versions of sar and shr ops should not propagate known-zero bits
from the unused 32 high bits. For sar it could even lead to wrong code
being generated.
Cc: qemu-stable@nongnu.org
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
It's this that should be subtracted from 0x20 when converting to a right rotate.
Cc: qemu-stable@nongnu.org
Signed-off-by: Huw Davies <huw@codeweavers.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The second half register of a 64-bit temp on a 32-bit host
was allocated with the wrong base_type.
The base_type of the second half register is never checked,
but for consistency it should be the same as the first half.
Signed-off-by: Richard Henderson <rth@twiddle.net>
We have macros for marking TCGv values as unused, checking if they
are unused and comparing them to each other. However these only exist
for TCGv_i32 and TCGv_i64; add them for TCGv_ptr as well.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
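A sketch of the added macros, mirroring the existing _i32/_i64 variants (the
exact definitions are assumed):

    /* Sketch: the same unused/equal helpers, but for TCGv_ptr. */
    #define TCGV_UNUSED_PTR(x)      (x = MAKE_TCGV_PTR(-1))
    #define TCGV_IS_UNUSED_PTR(x)   (GET_TCGV_PTR(x) == -1)
    #define TCGV_EQUAL_PTR(a, b)    (GET_TCGV_PTR(a) == GET_TCGV_PTR(b))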
Commit c9baa30f42 failed to
delete all of the relevant code, leading to Werrors about
unused symbols.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
We have cache pools of temporaries that we can reuse later when they've
already been allocated before.
These cache pools differentiate between the target TCG variable type they
contain. So we have one pool for I32 and one pool for I64 variables.
On a 32bit system, we can't work with 64bit registers though. So instead we
spawn two I32 temporaries for every I64 temporary we create. All caching
works the same way as on a real 64-bit system though: We create a cache entry
in the 64bit array for the first i32 index.
However, when we free such a temporary we free it to the pool of its type
(which is always i32 on 32bit systems) rather than its base_type (which is
i64 or i32 depending on the variable). This means we put a temporary that
is of base_type == i64 into the i32 preallocated temporary pool.
Eventually, this results in failures like this on 32bit hosts:
qemu-system-ppc64: tcg/tcg.c:515: tcg_temp_new_internal: Assertion `ts->base_type == type' failed.
This patch makes the free routine use the base_type instead for the free case,
so it's consistent with the temporary allocation. It fixes the above failure
for me.
Signed-off-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1390146811-59936-1-git-send-email-agraf@suse.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
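A hedged sketch of the change in the free path as described above: index the
free pool by base_type, matching how the temporary was created (surrounding
code and names assumed):

    /* Sketch: when releasing a temp, pick the pool from base_type (i64
       even on a 32-bit host) rather than type (i32 for each half), so
       allocation and free agree on which cache the temp belongs to. */
    TCGTemp *ts = &s->temps[idx];
    int k = ts->base_type + (ts->temp_local ? TCG_TYPE_COUNT : 0);
    set_bit(idx, s->free_temps[k].l);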
TCG_TARGET_HAS_movcond_i32 is always defined to 1 in tcg-target.h, so
remove the corresponding #ifdef #endif sequence, left from a previous
refactoring.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The movbe instruction has been added on some Intel Atom CPUs and on
recent Intel Haswell CPUs. It allows a value to be loaded/stored and
byte-swapped at the same time.
This patch detects the availability of this instruction and, when
available, uses it in the qemu load/store routines in place of
load/store + bswap. Note that for 16-bit unsigned loads, movbe + movzw
is basically the same as movzw + bswap, so the patch doesn't touch this
case.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
[RTH: Reduced the number of conditionals using "movop".]
Signed-off-by: Richard Henderson <rth@twiddle.net>
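A hedged sketch of the detection step; MOVBE support is reported in CPUID
leaf 1, ECX bit 22 (the function name is an assumption):

    #include <cpuid.h>
    #include <stdbool.h>

    /* Sketch: probe the host CPU for MOVBE support. */
    static bool detect_movbe(void)
    {
        unsigned a, b, c, d;

        if (__get_cpuid(1, &a, &b, &c, &d)) {
            return (c & (1u << 22)) != 0;   /* CPUID.01H:ECX.MOVBE */
        }
        return false;
    }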
Add support for three-byte opcodes, starting with the 0x0f 0x38 prefix.
Use P_EXT38 as the new constant, and shift all other constants so that
P_EXT and P_EXT38 have neighbouring values.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
[RTH: Changed the name from P_EXT2 to P_EXT38.]
Signed-off-by: Richard Henderson <rth@twiddle.net>
P_REXW is defined as a constant at the beginning of i386/tcg-target.c,
but the corresponding bit is later used in a hardcoded way, which
defeats the purpose of the constant.
Fix that by using a conditional expression operator instead of a shift.
On x86 this actually makes the code slightly smaller, as GCC in practice
emits (opc >> 8) & 8 instead of (opc & 0x800) >> 8, so the constants are
smaller to load.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
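A hedged sketch of the change described above (the surrounding code and
helper name are assumed):

    /* Sketch: derive the REX.W bit from the opcode flag itself, instead
       of hard-coding the mask-and-shift (opc & 0x800) >> 8. */
    static inline int rex_w_bit(int opc)
    {
        return opc & P_REXW ? 0x8 : 0;
    }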
The comments apply to 8-bit stores, not 8-byte stores.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
We previously allocated 32-bits per temp for the next_free_temp entry.
We now allocate 4 bits per temp across the 4 bitmaps.
Using a linked list meant that if a translator was tweaked, resulting in
temps being freed in a different order, there would be follow-on effects
throughout the TB. Always allocating the lowest free temp means that
follow-on effects are minimized, which can make it easier to diff output
when debugging the translators.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
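A hedged sketch of allocation under the new scheme, using the generic bitmap
helpers (the surrounding structure and names are assumptions):

    /* Sketch: always hand out the lowest-numbered free temp of the
       requested kind, so small translator changes don't reshuffle every
       later temp. */
    idx = find_first_bit(s->free_temps[k].l, TCG_MAX_TEMPS);
    if (idx < TCG_MAX_TEMPS) {
        clear_bit(idx, s->free_temps[k].l);   /* reuse an existing temp */
    } else {
        idx = tcg_temp_alloc(s);              /* none free: allocate a new one */
    }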
No need to set up a SIGILL signal handler for detection anymore.
Remove a ton of sanity checks that must be true, given that we're
requiring a 64-bit build (the note about 31-bit KVM is satisfied
by configuring with TCI).
Signed-off-by: Richard Henderson <rth@twiddle.net>
Allow host detection on linux systems without glibc 2.16 or later.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Allow host detection on linux systems without glibc 2.16 or later.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Being able to "extend" from 64-bits (with a mov) simplifies
a few places where the conditional breaks the train of thought.
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
We can and/or/xor/andcm small constants, saving one cycle.
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
We can subtract from more small constants than just 0 with one insn,
and we can add the negative for most small constants.
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Avoids a wasted cycle loading up small constants.
Simplify the code assuming the tcg optimizer is going to work
and don't expect the first operand of the add to be constant.
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
When performing an operation with two input registers, we'd leave
the stop bit (and thus an extra cycle) that's only needed when one
or the other input is a constant.
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Since the move away from the global areg0, we're no longer globally
reserving areg0. Which means our use of R7 clobbers a call-saved
register. Shift areg0 into the windowed registers. Indeed, choose
the incoming parameter register that it comes to us by.
This requires moving the register holding the return address elsewhere.
Choose R33 for tidiness.
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
There was a misconception that a stop bit is required between a compare
and the branch that uses the predicate set by the compare. This led to
the usage of an extra bundle in which to perform the compare. The extra
bundle left room for constants to be loaded for use with the compare insn.
If we pack the compare and the branch together in the same bundle, then
there's no longer any room for non-zero constants. At which point we
can eliminate half the function by not handling them.
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Using only indirect calls results in 3 bundles (one to load the
descriptor address), and 4 stop bits. By looking through the
descriptor to the constants, we can perform the call with 2
bundles and only 1 stop bit.
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
There's no need to go through the full opcode-to-insn function call
to generate nops. This makes the source a bit more readable.
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
If we pull the code to emit the actual load/store into a subroutine,
we can share the reg+reg addressing mode code between softmmu and
usermode. This lets us load GUEST_BASE into a temporary register
rather than attempting to add it piece-wise to the address.
Which lets us use movw+movt for armv7, rather than (up to) 4 adds.
Code size for pre-armv7 stays the same.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Once we form a combined qemu_st_i32 opcode, we won't be able to
have separate constraints based on size. This one is fairly easy
to work around, since eax is available as a scratch register.
When storing variable data, this tends to merely exchange one mov
for another. E.g.
-: mov %esi,%ecx
...
-: mov %cl,(%edx)
+: mov %esi,%eax
+: mov %al,(%edx)
Where we do have a regression is when storing constant data, in which
we may load the constant into edi, when only ecx/ebx ought to be used.
The proper way to recover this regression is to allow constants as
arguments to qemu_st_i32, so that we never load the constant data into
a register at all, much less the wrong register. TBD.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Pass two TCGReg to tcg_out_tlb_load, rather than idx+args.
Move ldst_optimization routines just below tcg_out_tlb_load to avoid
the need for forward declarations.
Use TCGReg enum in preference to int where appropriate.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Step three in the transition: helpers not tied to the target
"default" endianness. To be used when the guest uses a memory
operation with non-default endianness.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Step two in the transition, adding the new ldst opcodes. Keep the old
opcodes around until all backends support the new opcodes.
Signed-off-by: Richard Henderson <rth@twiddle.net>
This is a no-op backend data implementation, for those targets that
are not currently using the load/store optimization path.
This is preparatory to always requiring these functions in all backends.
Signed-off-by: Richard Henderson <rth@twiddle.net>
A minimal update to use the new helpers with the return address argument.
Tested-by: Claudio Fontana <claudio.fontana@linaro.org>
Reviewed-by: Claudio Fontana <claudio.fontana@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
For the few targets that actually use these, we'd not report
them symbolically in the tcg opcode logs.
Signed-off-by: Richard Henderson <rth@twiddle.net>
One call inside of a loop to tcg_register_helper instead of hundreds
of sequential calls.
Presumably more icache and branch prediction friendly; resulting binary
size mostly unchanged on x86_64, as we're trading 32-bit rip-relative
references in .text for full 64-bit pointers in .rodata.
Signed-off-by: Richard Henderson <rth@twiddle.net>
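A hedged sketch of the shape of that loop (the table layout and helper
signature are assumptions based on the description above):

    /* Sketch: assume all_helpers[] is a table with one { func, name }
       entry per DEF_HELPER_* macro; registration becomes a single loop. */
    static void register_all_helpers(void)
    {
        size_t i;
        for (i = 0; i < ARRAY_SIZE(all_helpers); ++i) {
            tcg_register_helper(all_helpers[i].func, all_helpers[i].name);
        }
    }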
Slightly changes the interface, in that we now return name
instead of a TCGHelperInfo structure, which goes away.
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
# By Richard Henderson
# Via Richard Henderson
* rth/tcg-arm-pull:
tcg-arm: Move the tlb addend load earlier
tcg-arm: Remove restriction on qemu_ld output register
tcg-arm: Return register containing tlb addend
tcg-arm: Move load of tlb addend into tcg_out_tlb_read
tcg-arm: Use QEMU_BUILD_BUG_ON to verify constraints on tlb
tcg-arm: Use strd for tcg_out_arg_reg64
tcg-arm: Rearrange slow-path qemu_ld/st
tcg-arm: Use ldrd/strd for appropriate qemu_ld/st64
Message-id: 1380663109-14434-1-git-send-email-rth@twiddle.net
Signed-off-by: Anthony Liguori <anthony@codemonkey.ws>
# By Stefan Weil
# Via Stefan Weil
* sweil/tci:
misc: Use new rotate functions
bitops: Add rotate functions (rol8, ror8, ...)
tci: Add implementation of rotl_i64, rotr_i64
Message-id: 1380137693-3729-1-git-send-email-sw@weilnetz.de
Signed-off-by: Anthony Liguori <anthony@codemonkey.ws>
There are free scheduling slots between the sequence of
comparison instructions. This requires changing the
register in use to avoid conflict with those compares.
Signed-off-by: Richard Henderson <rth@twiddle.net>
The main intent of the patch is to allow the tlb addend register
to be changed, without tying that change to the constraint. But
the most common side-effect seems to be to enable usage of ldrd
with the r0,r1 pair.
Signed-off-by: Richard Henderson <rth@twiddle.net>
This allows us to make more intelligent decisions about the relative
offsets of the tlb comparator and the addend, avoiding any need of
writeback addressing.
Signed-off-by: Richard Henderson <rth@twiddle.net>
One of the two constraints we already checked via #if, but
the tlb offset distance was only checked at runtime.
Signed-off-by: Richard Henderson <rth@twiddle.net>
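A hedged sketch of turning such a runtime check into a build-time one (the
exact offset bound depends on the addressing mode and is assumed here):

    /* Sketch: fail the build if the last TLB comparator lies beyond the
       reach of the load instruction's immediate offset. */
    QEMU_BUILD_BUG_ON(offsetof(CPUArchState, tlb_table[NB_MMU_MODES - 1][1])
                      > 0xffff);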
Use the new helper_ret_*_mmu routines. Use a conditional call
to arrange for a tail-call from the store path, and to load the
return address for the helper for the load path.
Signed-off-by: Richard Henderson <rth@twiddle.net>
It is used by qemu-ppc64 when running Debian's busybox-static.
Cc: qemu-stable <qemu-stable@nongnu.org>
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Less conditional compilation. Merge an add insn with the indexed
memory load insn. Load the tlb addend earlier. Avoid the address
update memory form.
Fix a bug in not allowing large enough tlb offsets for some guests.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Previously we'd only handle 16-bit offsets for the memory operand without
falling back to indexed addressing, but it's easy to use ADDIS to handle full
32-bit offsets.
This also lets us unify code that existed inline in tcg_out_op for handling
addition of large constants.
The new R2 temporary was marked reserved for the AIX calling convention, but
the register really is call-clobbered and since tcg generated code has no use
for a TOC, it's available for use.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Remove conditionalization from tcg_target_reg_alloc_order, relying on
reserved_regs to prevent register allocation that shouldn't happen.
So R11 is now present in reg_alloc_order for __APPLE__, but also now
reserved.
Sort reg_alloc_order into call-saved, call-clobbered, and parameters.
This reduces the effect of values getting spilled and reloaded before
function calls.
Whether or not it is reserved, R2 (TOC) is always call-clobbered.
Signed-off-by: Richard Henderson <rth@twiddle.net>
While these are rare from code that's been through the optimizer,
it's not uncommon within the tcg backend.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Instead of bare N, for clarity. The only (intentional) exception made
is for insns that encode R|0, i.e. when R0 encoded into the insn is
interpreted as zero not the contents of the register.
Signed-off-by: Richard Henderson <rth@twiddle.net>
The problem is that sparc has so many mmu modes that the last one overflowed
the 16-bit signed offset we assumed would fit. Handle this, and check
the new assumption at compile time.
Load the tlb addend earlier for the fast path.
Remove the explicit address + addend and make use of index addressing.
Adjust constraints for qemu_ld64 such that we don't clobber the address
register or tlb addend before loading both values.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Coding style fixes. Use TCGReg enumeration values instead of raw
numbers. Don't needlessly pull the whole TCGLabelQemuLdst struct
into local variables. Less conditional compilation.
No functional changes.
Signed-off-by: Richard Henderson <rth@twiddle.net>
While these are rare from code that's been through the optimizer,
it's not uncommon within the tcg backend.
Signed-off-by: Richard Henderson <rth@twiddle.net>
These use a 32-bit load-of-immediate to save a mflr+addi+mtlr sequence.
Tested with a Windows 98 guest (pretty much the most recent thing I
could run on my PPC machine) and kvm-unit-tests's sieve.flat. The
speed up for sieve.flat is as high as 10% for qemu-system-i386, 25%
(no kidding) for qemu-system-x86_64 on my PowerBook G4.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
For the AIX ABI, the function pointer and small area pointer need
to be loaded in the trampoline. The trampoline instead is called
with a normal BL instruction.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
error: suggest parentheses around comparison in operand of ‘&’ [-Werror=parentheses]
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>