Commit Graph

859 Commits

Richard Henderson
d7156f7ce4 tcg: Add 64-bit multiword arithmetic operations
Matching the 32-bit multiword arithmetic that we already have.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-02-23 17:25:28 +00:00
Richard Henderson
803d805bce tcg-sparc: Always implement 32-bit multiword ops
Cc: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-02-23 17:25:28 +00:00
Richard Henderson
bbc863bfec tcg-i386: Always implement 32-bit multiword ops
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-02-23 17:25:28 +00:00
Richard Henderson
e6a7273454 tcg: Make 32-bit multiword operations optional for 64-bit hosts
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-02-23 17:25:28 +00:00
Andreas Färber
be96bd3fbf tcg/ppc: Fix build of tcg_qemu_tb_exec()
Commit 0b0d3320db (TCG: Final globals
clean-up) moved code_gen_prologue but forgot to update ppc code.
This broke the build on 32-bit ppc. ppc64 is unaffected.

Cc: Evgeny Voevodin <evgenyvoevodin@gmail.com>
Cc: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Andreas Färber <andreas.faerber@web.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-02-17 14:27:36 +00:00
Peter Maydell
24537a0191 qemu-log: Rename the public-facing cpu_set_log function to qemu_set_log
Rename the public-facing function cpu_set_log to qemu_set_log. This
requires us to rename the internal-only qemu_set_log() to
do_qemu_set_log().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-02-16 10:44:44 +00:00
Evgeny Voevodin
5e5f07e08f TCG: Move translation block variables to new context inside tcg_ctx: tb_ctx
It's worth cleaning up the translation block variables and moving them
into one context, as was suggested by Swirl.
Also, if we use this context directly inside tcg_ctx, it
speeds up code generation a bit.

Signed-off-by: Evgeny Voevodin <evgenyvoevodin@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-02-16 10:41:16 +00:00
Evgeny Voevodin
0b0d3320db TCG: Final globals clean-up
Signed-off-by: Evgeny Voevodin <evgenyvoevodin@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-02-16 10:40:56 +00:00
Peter Maydell
5256a7208a tcg/target-arm: Add missing parens to assertions
Silence a (legitimate) complaint about missing parentheses:

tcg/arm/tcg-target.c: In function ‘tcg_out_qemu_ld’:
tcg/arm/tcg-target.c:1148:5: error: suggest parentheses around
comparison in operand of ‘&’ [-Werror=parentheses]
tcg/arm/tcg-target.c: In function ‘tcg_out_qemu_st’:
tcg/arm/tcg-target.c:1357:5: error: suggest parentheses around
comparison in operand of ‘&’ [-Werror=parentheses]

which meant that we would mistakenly always assert if running
a QEMU built with debug enabled on ARM.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-01-19 10:27:45 +00:00
Paolo Bonzini
633f650254 optimize: optimize using nonzero bits
This adds two optimizations using the non-zero bit mask.  First, in some
cases involving shifts or ANDs the value can become zero, and can thus be
optimized to a move of zero.  Second, a useless zero-extension or an
AND with a constant can be detected that would only zero bits that are
already zero.

The main advantage of this optimization is that it turns zero-extensions
into moves, thus enabling much better copy propagation (around 1% code
reduction).  Here is for example a "test $0xff0000,%ecx + je" before
optimization:

 mov_i64 tmp0,rcx
 movi_i64 tmp1,$0xff0000
 discard cc_src
 and_i64 cc_dst,tmp0,tmp1
 movi_i32 cc_op,$0x1c
 ext32u_i64 tmp0,cc_dst
 movi_i64 tmp12,$0x0
 brcond_i64 tmp0,tmp12,eq,$0x0

and after (without patch on the left, with on the right):

 movi_i64 tmp1,$0xff0000                 movi_i64 tmp1,$0xff0000
 discard cc_src                          discard cc_src
 and_i64 cc_dst,rcx,tmp1                 and_i64 cc_dst,rcx,tmp1
 movi_i32 cc_op,$0x1c                    movi_i32 cc_op,$0x1c
 ext32u_i64 tmp0,cc_dst
 movi_i64 tmp12,$0x0                     movi_i64 tmp12,$0x0
 brcond_i64 tmp0,tmp12,eq,$0x0           brcond_i64 cc_dst,tmp12,eq,$0x0

Other similar cases: "test %eax, %eax + jne" where eax is already 32-bit
(after optimization, without patch on the left, with on the right):

 discard cc_src                          discard cc_src
 mov_i64 cc_dst,rax                      mov_i64 cc_dst,rax
 movi_i32 cc_op,$0x1c                    movi_i32 cc_op,$0x1c
 ext32u_i64 tmp0,cc_dst
 movi_i64 tmp12,$0x0                     movi_i64 tmp12,$0x0
 brcond_i64 tmp0,tmp12,ne,$0x0           brcond_i64 rax,tmp12,ne,$0x0

"test $0x1, %dl + je":

 movi_i64 tmp1,$0x1                      movi_i64 tmp1,$0x1
 discard cc_src                          discard cc_src
 and_i64 cc_dst,rdx,tmp1                 and_i64 cc_dst,rdx,tmp1
 movi_i32 cc_op,$0x1a                    movi_i32 cc_op,$0x1a
 ext8u_i64 tmp0,cc_dst
 movi_i64 tmp12,$0x0                     movi_i64 tmp12,$0x0
 brcond_i64 tmp0,tmp12,eq,$0x0           brcond_i64 cc_dst,tmp12,eq,$0x0

In some cases TCG even outsmarts GCC. :)  Here the input code has
"and $0x2,%eax + movslq %eax,%rbx + test %rbx, %rbx" and the optimizer,
thanks to copy propagation, does the following:

 movi_i64 tmp12,$0x2                     movi_i64 tmp12,$0x2
 and_i64 rax,rax,tmp12                   and_i64 rax,rax,tmp12
 mov_i64 cc_dst,rax                      mov_i64 cc_dst,rax
 ext32s_i64 tmp0,rax                  -> nop
 mov_i64 rbx,tmp0                     -> mov_i64 rbx,cc_dst
 and_i64 cc_dst,rbx,rbx               -> nop

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-01-19 10:13:16 +00:00
Paolo Bonzini
3a9d8b179b optimize: track nonzero bits of registers
Add a "mask" field to the tcg_temp_info struct.  A bit that is zero
in "mask" will always be zero in the corresponding temporary.
Zero bits in the mask can be produced from moves of immediates,
zero-extensions, ANDs with constants, shifts; they can then be
propagated by logical operations, shifts, sign-extensions,
negations, deposit operations, and conditional moves.  Other
operations will just reset the mask to all-ones, i.e. unknown.
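
A minimal standalone sketch of how such masks could combine, with
hypothetical helper names (an illustration of the rules above, not the
actual optimize.c code):

 #include <stdint.h>

 /* A 0 bit in a mask means "this bit of the temp is always zero". */
 static uint64_t mask_movi(uint64_t c)             { return c; }             /* immediates are fully known */
 static uint64_t mask_and(uint64_t a, uint64_t b)  { return a & b; }         /* bit possibly set only if set in both */
 static uint64_t mask_or(uint64_t a, uint64_t b)   { return a | b; }         /* ...or in either */
 static uint64_t mask_shli(uint64_t a, unsigned c) { return a << c; }        /* left shift fills with zeros */
 static uint64_t mask_ext32u(uint64_t a)           { return (uint32_t)a; }   /* zero-extension clears the high half */
 static uint64_t mask_unknown(void)                { return ~(uint64_t)0; }  /* anything else: all bits unknown */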

[rth: s/target_ulong/tcg_target_ulong/]

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-01-19 10:13:14 +00:00
Paolo Bonzini
d193a14a2c optimize: only write to state when clearing optimizer data
The next patch will add to the TCG optimizer a field that should be
non-zero in the default case.  Thus, replace the memset of the
temps array with a loop.  Only the state field has to be up-to-date,
because the others are not used unless the state is TCG_TEMP_COPY
or TCG_TEMP_CONST.
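
A rough sketch of the replacement loop, with struct and field names
assumed from the description above (illustrative, not the literal patch):

 /* Reset only the 'state' field; the other fields are not read unless
    the state is TCG_TEMP_COPY or TCG_TEMP_CONST. */
 static void reset_all_temps(struct tcg_temp_info *temps, int nb_temps)
 {
     int i;
     for (i = 0; i < nb_temps; i++) {
         temps[i].state = TCG_TEMP_UNDEF;
     }
 }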

[rth: Extracted the loop to a function.]

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-01-19 10:13:13 +00:00
Paolo Bonzini
163fa4b09d tcg-i386: use LEA for 3-operand 64-bit addition
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-01-12 12:45:56 +00:00
Stefan Weil
9a8a5ae69d tcg: Remove unneeded assertion
Commit 7f6f0ae5b9 added two assertions.

One of these assertions is not needed:
The pointer ts is never NULL because it is initialized with the
address of an array element.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2013-01-02 11:23:21 -06:00
Richard Henderson
753d99d38b tcg-hppa: Fix typo in brcond2
Reported-by: Stuart Brady <sdb@zubnet.me.uk>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-12-29 12:21:53 +00:00
Richard Henderson
76a347e1cd tcg-i386: Perform cmov detection at runtime for 32-bit.
Existing compile-time detection is spotty at best.  Convert
it all to runtime detection instead.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-12-29 12:21:16 +00:00
Richard Henderson
afcb92beac tcg: Add TCGV_IS_UNUSED_*
Cc: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-12-29 12:14:07 +00:00
Paolo Bonzini
1de7afc984 misc: move include files to include/qemu/
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2012-12-19 08:32:39 +01:00
Paolo Bonzini
022c62cbbc exec: move include files to include/exec/
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2012-12-19 08:31:31 +01:00
Paolo Bonzini
cb9c377f54 janitor: add guards to headers
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2012-12-19 08:31:31 +01:00
Anthony Liguori
7c12fd9b29 Merge remote-tracking branch 'stefanha/trivial-patches' into staging
* stefanha/trivial-patches:
  pc_sysfw: Plug memory leak on pc_fw_add_pflash_drv() error path
  qemu-options: Fix space at EOL
  Fix spelling in comments and documentation
  Clean up pci_drive_hot_add()'s use of BlockInterfaceType
  arm: a9mpcore: remove un-used ptimer_iomem field
  target-sparc: Remove t0, t1 from CPUSPARCState
  target-m68k: Remove t1 from CPUM68KState
  target-alpha: Remove t0, t1 from CPUAlphaState
  s390x: Spelling fixes (endianess -> endianness, occured -> occurred)
  Fix comments (adress -> address, layed -> laid, wierd -> weird)
  Fix spelling (prefered -> preferred)
  configure: Remove stray debug output
  sd: Send debug printfery to stderr not stdout

Conflicts:
	configure

Resolve spelling conflict in configure.

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2012-12-10 08:34:29 -06:00
Evgeny Voevodin
c3a43607d9 tcg/tcg.h: Duplicate global TCG gen_opc_ arrays into TCGContext.
Signed-off-by: Evgeny Voevodin <e.voevodin@samsung.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-12-08 14:24:41 +00:00
Stefan Weil
a93cf9dfba Fix comments (adress -> address, layed -> laid, wierd -> weird)
Remove also a duplicated 'the'.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2012-12-07 12:34:11 +01:00
Aurelien Jarno
e5138db510 tcg: mark local temps as MEM in dead_temp()
In dead_temp, local temps should always be marked as back to memory,
even if they have not been allocated (i.e. they are discared before
cross a basic block).

It fixes the following assertion in target-xtensa:

    qemu-system-xtensa: tcg/tcg.c:1665: temp_save: Assertion `s->temps[temp].val_type == 2 || s->temps[temp].fixed_reg' failed.
    Aborted
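
A hedged sketch of the intended behaviour, with struct and field names as
found in tcg.c of that era (an illustration, not the literal patch):

 static void dead_temp(TCGContext *s, int temp)
 {
     TCGTemp *ts = &s->temps[temp];
     if (temp < s->nb_globals || ts->temp_local) {
         /* globals and local temps always have a canonical home in
            memory, even if no register was ever allocated for them */
         ts->val_type = TEMP_VAL_MEM;
     } else {
         ts->val_type = TEMP_VAL_DEAD;
     }
 }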

Reported-by: Max Filippov <jcmvbkbc@gmail.com>
Tested-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-11-24 13:24:13 +01:00
Aurelien Jarno
7aab08aa78 tcg/arm: fix cross-endian qemu_st16
The bswap16 TCG opcode assumes that the high bytes of the temp are equal
to 0 before calling it. The ARM backend implementation takes this
assumption to slightly optimize the generated code.

The same implementation is called for implementing the cross-endian
qemu_st16 opcode, where this assumption is not true anymore. One way to
fix that would be to zero the high bytes before calling it. Given that the
store instruction just ignores them, it is possible to provide a slightly
more optimized version. With ARMv6+ the rev16 instruction does the work
correctly. For lower ARM versions the patch provides a version which
behaves correctly with non-zero high bytes, but fills them with junk.

Cc: Andrzej Zaborowski <balrogg@gmail.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-11-24 13:19:53 +01:00
Aurelien Jarno
d17bd1d8cc tcg/arm: fix TLB access in qemu-ld/st ops
The TCG arm backend considers it likely that the offset to the TLB
entries does not exceed 12 bits for mem_index = 0. In practice this is
not true for at least the MIPS target.

The current patch fixes that by loading the bits 23-12 with a separate
instruction, and using loads with address writeback, independently of
the value of mem_idx. In total this allows a 24-bit offset, which is a
lot more than needed.

Cc: Andrzej Zaborowski <balrogg@gmail.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-stable@nongnu.org
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-11-24 13:19:53 +01:00
malc
ecf51c9abe tcg/ppc: Fix !softmmu case
Signed-off-by: malc <av1474@comtv.ru>
2012-11-21 10:56:22 +04:00
malc
ecdffbccd7 tcg/ppc: Remove unused s_bits variable
Thanks to Alexander Graf for heads up.

Signed-off-by: malc <av1474@comtv.ru>
2012-11-19 22:22:24 +04:00
Stefan Weil
e24dc9feb0 tci: Support deposit operations
The operations for INDEX_op_deposit_i32 and INDEX_op_deposit_i64
are now supported and enabled by default.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-11-18 20:40:08 +00:00
Evgeny Voevodin
83eeb39669 TCG: Remove unused global variables
Signed-off-by: Evgeny Voevodin <e.voevodin@samsung.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-11-17 13:53:38 +00:00
Evgeny Voevodin
1ff0a2c594 TCG: Use gen_opparam_buf from context instead of global variable.
Signed-off-by: Evgeny Voevodin <e.voevodin@samsung.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-11-17 13:53:37 +00:00
Evgeny Voevodin
92414b31e7 TCG: Use gen_opc_buf from context instead of global variable.
Signed-off-by: Evgeny Voevodin <e.voevodin@samsung.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-11-17 13:53:36 +00:00
Evgeny Voevodin
c4afe5c4d3 TCG: Use gen_opparam_ptr from context instead of global variable.
Signed-off-by: Evgeny Voevodin <e.voevodin@samsung.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-11-17 13:53:34 +00:00
Evgeny Voevodin
efd7f48600 TCG: Use gen_opc_ptr from context instead of global variable.
Signed-off-by: Evgeny Voevodin <e.voevodin@samsung.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-11-17 13:53:27 +00:00
Evgeny Voevodin
8232a46a16 tcg/tcg.h: Duplicate global TCG variables in TCGContext
Signed-off-by: Evgeny Voevodin <e.voevodin@samsung.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-11-17 13:53:26 +00:00
Kirill Batuzov
3c5645fab3 tcg: properly check that op's output needs to be synced to memory
Fix typo introduced in b3a1be87ba.

Reported-by: Ruslan Savchenko <ruslan.savchenko@gmail.com>
Signed-off-by: Kirill Batuzov <batuzovk@ispras.ru>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-11-11 16:06:46 +01:00
malc
c878da3b27 tcg/ppc32: Use trampolines to trim the code size for mmu slow path accessors
mmu access looks something like:

<check tlb>
if miss goto slow_path
<fast path>
done:
...

; end of the TB
slow_path:
 <pre process>
 mr r3, r27         ; move areg0 to r3
                    ; (r3 holds the first argument for all the PPC32 ABIs)
 <call mmu_helper>
 b $+8
 .long done
 <post process>
 b done

On ppc32 <call mmu_helper> is:

(SysV and Darwin)

mmu_helper is most likely not within direct branching distance from
the call site, necessitating

a. moving 32 bit offset of mmu_helper into a GPR ; 8 bytes
b. moving GPR to CTR/LR                          ; 4 bytes
c. (finally) branching to CTR/LR                 ; 4 bytes

r3 setting              - 4 bytes
call                    - 16 bytes
dummy jump over retaddr - 4 bytes
embedded retaddr        - 4 bytes
         Total overhead - 28 bytes

(PowerOpen (AIX))
a. moving 32 bit offset of mmu_helper's TOC into a GPR1 ; 8 bytes
b. loading 32 bit function pointer into GPR2            ; 4 bytes
c. moving GPR2 to CTR/LR                                ; 4 bytes
d. loading 32 bit small area pointer into R2            ; 4 bytes
e. (finally) branching to CTR/LR                        ; 4 bytes

r3 setting              - 4 bytes
call                    - 24 bytes
dummy jump over retaddr - 4 bytes
embedded retaddr        - 4 bytes
         Total overhead - 36 bytes

Following is done to trim the code size of slow path sections:

In tcg_target_qemu_prologue trampolines are emitted that look like this:

trampoline:
mfspr r3, LR
addi  r3, 4
mtspr LR, r3      ; fixup LR to point over embedded retaddr
mr    r3, r27
<jump mmu_helper> ; tail call of sorts

And slow path becomes:

slow_path:
 <pre process>
 <call trampoline>
 .long done
 <post process>
 b done

call                    - 4 bytes (trampoline is within code gen buffer
                                   and most likely accessible via
                                   direct branch)
embedded retaddr        - 4 bytes
         Total overhead - 8 bytes

In the end the icache pressure is decreased by 20/28 bytes at the cost
of an extra jump to the trampoline and adjusting LR (to skip over the
embedded retaddr) once inside.

Signed-off-by: malc <av1474@comtv.ru>
2012-11-06 04:37:57 +04:00
malc
ed224a56b3 tcg/ppc: ld/st optimization
Signed-off-by: malc <av1474@comtv.ru>
2012-11-03 20:17:54 +04:00
Yeongkyoon Lee
b76f0d8c2e tcg: Optimize qemu_ld/st by generating slow paths at the end of a block
Add optimized TCG qemu_ld/st generation which locates the code of TLB miss
cases at the end of a block after generating the other IRs.
Currently, this optimization supports only i386 and x86_64 hosts.

Signed-off-by: Yeongkyoon Lee <yeongkyoon.lee@samsung.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-11-03 09:44:21 +00:00
Aurelien Jarno
b3a1be87ba tcg: don't remove op if output needs to be synced to memory
Commit 9c43b68de6 does not correctly check
for dead outputs when they need to be synced to memory in case of
half-dead operations.

Fix that by applying the same pattern as for the default case.

Tested-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-31 22:20:45 +01:00
Aurelien Jarno
3585317f6f tcg/mips: use MUL instead of MULT on MIPS32 and above
MIPS32 and later instruction sets have a multiplication instruction
directly operating on GPRs. It only produces a 32-bit result but
it is exactly what is needed by QEMU.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-30 00:34:48 +01:00
Richard Henderson
44b37ace06 tcg-i386: Use %gs prefixes for x86_64 GUEST_BASE
When we allocate a reserved_va for the guest, the kernel will likely
choose an address well above 4G.  At which point we must use a pair
of movabsq+addq to form the host address.  If we have OS support,
set up a segment register to point to guest_base instead.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-28 14:54:25 +01:00
Aurelien Jarno
b393ab4228 tcg: remove compatibility call flags
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-28 14:54:24 +01:00
Aurelien Jarno
7850527966 tcg: rework TCG helper flags
The current helper flags, TCG_CALL_CONST and TCG_CALL_PURE, might be
confusing and don't provide enough granularity for some helpers (FP
helpers for example).

This patch changes them into the following helpers flags:
- TCG_CALL_NO_READ_GLOBALS means that the helper does not read globals,
  either directly or via an exception. They will not be saved to their
  canonical location before calling the helper.
- TCG_CALL_NO_WRITE_GLOBALS means that the helper does not modify any
  globals. They will only be saved to their canonical locations before
  calling helpers, but they won't be reloaded afterwards.
- TCG_CALL_NO_SIDE_EFFECTS means that the call to the function is
  removed if the return value is not used.

It provides convenience flags, to avoid helper definitions longer than
80 characters. It also provides compatibility flags, and updates the
documentation.
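
As an illustration, a helper that neither reads globals nor has other
constraints could be declared like this (hypothetical helper name, using
the existing DEF_HELPER_FLAGS_* machinery):

 /* pure arithmetic: no global is read, so nothing needs to be saved to
    its canonical location before the call */
 DEF_HELPER_FLAGS_2(example_mulhi, TCG_CALL_NO_READ_GLOBALS, i32, i32, i32)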

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-28 14:54:23 +01:00
Aurelien Jarno
3d5c5f876d tcg: synchronize globals for ops with side effects
Operations with side effects (in practice qemu_ld/st ops) only need to
synchronize globals to make sure the CPU state is consistent in case of
an exception.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-28 14:54:22 +01:00
Aurelien Jarno
b202d41ee7 tcg: forbid ld/st function to modify globals
Mapping a memory address using a global and accessing it through
ld/st operations is currently broken. As it doesn't make any sense
to do that performance-wise, let's forbid that.

Update the TCG documentation, and remove partial support for that.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-28 14:54:22 +01:00
Aurelien Jarno
344028ba0f tcg: fix some op flags
Some branch-related ops are marked with TCG_OPF_SIDE_EFFECTS, some others
are not. In practice they don't need to be, as they are all marked with
TCG_OPF_BB_END, which is handled specifically in all the code.

The call op is marked as TCG_OPF_SIDE_EFFECTS, which might not be true
as there are specific flags (TCG_CALL_CONST and TCG_CALL_PURE) for
specifying that. On the other hand it always clobbers arguments, so mark
it as such even if the call op is handled in a different code path.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-28 14:54:22 +01:00
Aurelien Jarno
2c0366f036 tcg: don't explicitly save globals and temps
The liveness analysis ensures that globals and temps are in the correct
state at the end of a basic block or at an op with side effects. Avoid
looping over all temps, as this can be time consuming on targets with a
lot of globals. Keep an assert in debug mode.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-28 14:54:22 +01:00
Aurelien Jarno
7dfd8c6aa1 tcg: start with local temps in TEMP_VAL_MEM state
Start with local temps in the TEMP_VAL_MEM state, to make it possible to
later check that all the temps are correctly saved back to memory.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-28 14:54:22 +01:00
Aurelien Jarno
a52ad07e7c tcg: always mark dead input arguments as dead
Always mark dead input arguments as dead, even if the op is at the basic
block end. This will make it possible to check that all temps are
correctly saved.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-28 14:54:22 +01:00
Aurelien Jarno
c29c1d7edf tcg: rewrite tcg_reg_alloc_mov()
Now that the liveness analysis provides more information, rewrite
tcg_reg_alloc_mov(). This changes the behaviour of propagating
constants and memory accesses. We now take the assumption that once
a value is loaded into a register (from memory or from a constant),
it's better to keep it there than to reload it later. This assumption
is now almost always correct, given that we are now sure the
corresponding temp is going to be used later (otherwise it would have
been synchronized and marked as dead already). The assumption is wrong
if one of the following ops clobbers some registers, including the one
holding the temp (this can be avoided by allocating clobbered
registers last, which is what most TCG targets do), or in case of a
lack of available registers.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-28 14:54:22 +01:00
Aurelien Jarno
4c4e1ab26b tcg: improve tcg_reg_alloc_movi()
Now that the liveness analysis might mark some output temps as dead, call
temp_dead() if needed.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-28 14:54:21 +01:00
Aurelien Jarno
9c43b68de6 tcg: rework liveness analysis
Rework the liveness analysis by tracking temps that need to go back to
memory in addition to tracking dead temps. This makes it possible to mark
output arguments as "need sync", and to synchronize them back to memory as
soon as they are no longer written. This way even arguments mapping to
globals can be marked as "dead", avoiding moves to a new register when
inputs and outputs are aliased.

In addition it means that registers are freed as soon as temps are not
used anymore, instead of waiting for a basic block end or an op with side
effects. This reduces register spilling, especially on CPUs with few
registers, and spreads the movs over the whole TB, increasing
performance on in-order CPUs.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-28 14:54:21 +01:00
Aurelien Jarno
ec7a869d31 tcg: sync output arguments on liveness request
Synchronize an output argument when requested by the liveness analysis.
This is needed so that the temp can be declared dead later.

For that, add a new op_sync_args table in which each bit tells whether the
corresponding output argument needs to be synchronized with memory.
Pass it to the tcg_reg_alloc_* functions, and honor this bit. We need to
synchronize the argument before marking it as dead, and we have to make
sure all the info about the temp is correctly filled in.

At the same time change some types from unsigned int to uint16_t when
passing op_dead_args.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-28 14:54:21 +01:00
Aurelien Jarno
1ad80729be tcg: add temp_sync()
Add a new function temp_sync() to synchronize the canonical location
of a temp with the value in the corresponding register, but without
freeing the associated register. Rewrite temp_save() to call
temp_sync() followed by temp_dead().

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-28 14:54:21 +01:00
Aurelien Jarno
7f6ceedf9c tcg: add tcg_reg_sync()
Add a new function tcg_reg_sync() to synchronize the canonical location
of a temp with the value in the associated register, but without freeing
it. Rewrite tcg_reg_free() to first call tcg_reg_sync() and then to free
the register.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-28 14:54:21 +01:00
Aurelien Jarno
639368dd68 tcg: add temp_dead()
A lot of code is duplicated to mark a temporary as dead. Replace it
with temp_dead(), which in addition marks the temp as saved in memory
for globals and local temps, instead of doing this a posteriori in
temp_save().

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-28 14:54:21 +01:00
Aurelien Jarno
17b914912d tcg/i386: remove ld/st third argument register constraint
On x86_64, remove the constraint on the third argument register, which
is not needed:
 - For loads the helper arguments are env, addr, mem_idx. The addr
   value should not be in the first two argument registers, as they are
   used in tcg_out_tlb_load().
 - For stores the helper arguments are env, addr, data, mem_idx.
   The addr and data values should not be in the first two argument
   registers, as they are used in tcg_out_tlb_load(). The data value
   could, however, be in the third argument register, in which case it
   would already be loaded at the right location.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-28 14:54:15 +01:00
Aurelien Jarno
166792f7bb tcg/i386: remove suboptimal register shifting
Now that CONFIG_TCG_PASS_AREG0 has been removed, it's easier to generate
optimal code for the load/store functions.

First swap the two registers used in tcg_out_tlb_load() so that the
address ends up in the second register instead of the first one. Adjust
tcg_out_qemu_ld() and tcg_out_qemu_st() to respectively call
tcg_out_qemu_ld_direct() and tcg_out_qemu_st_direct() with the correct
registers. Then replace the register shifting with direct loads of the
arguments.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-28 14:54:05 +01:00
Richard Henderson
4438c8a946 exec: Allocate code_gen_prologue from code_gen_buffer
We had a hack for arm and sparc, allocating code_gen_prologue to a
special section, which, honestly, does no good in certain cases.
We've already got limits on code_gen_buffer_size to ensure that all
TBs can use direct branches between themselves; reuse this limit to
ensure the prologue is also reachable.

As a bonus, we get to avoid marking a page of the main executable's
data segment as executable.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-10-20 07:54:04 +00:00
Aurelien Jarno
41a05a4576 Merge branch 'linux-user-for-upstream' of git://git.linaro.org/people/rikuvoipio/qemu
* 'linux-user-for-upstream' of git://git.linaro.org/people/rikuvoipio/qemu:
  linux-user: register align p{read, write}64
  linux-user: ppc: mark as long long aligned
  tcg: Remove TCG_TARGET_HAS_GUEST_BASE define
  configure: Remove unnecessary host_guest_base code
  linux-user: If loading fails, print error as string, not number
  linux-user: Fix siginfo handling
  alpha-linux-user: Fix sigaltstack structure definition
  linux-user: Implement gethostname
  linux-user: Perform more checks on iovec lists
  linux-user: fix multi-threaded /proc/self/maps
  linux-user: fix statfs
2012-10-19 20:28:22 +02:00
Richard Henderson
1414968a6a tcg: Optimize mulu2
Like add2, do operand ordering, constant folding, and dead operand
elimination.  The latter happens for about 15% of all mulu2 ops during an
x86_64 bios boot.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-17 17:51:39 +02:00
Richard Henderson
1305c451e6 tcg: Optimize half-dead add2/sub2
When an x86_64 guest is not in 64-bit mode, the high part of the 64-bit
add is dead.  When the host is 32-bit, we can simplify to 32-bit
arithmetic.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-17 17:51:37 +02:00
Richard Henderson
212c328d61 tcg: Constant fold add2 and sub2
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-17 17:51:37 +02:00
Richard Henderson
6c4382f8f4 tcg: Do constant folding on double-word comparisons
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-17 17:51:35 +02:00
Richard Henderson
9519da7e39 tcg: Split out subroutines from do_constant_folding_cond
We can re-use these for implementing double-word folding.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-17 17:51:32 +02:00
Richard Henderson
bc1473eff4 tcg: Optimize double-word comparisons against zero
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-17 17:32:29 +02:00
Richard Henderson
6e14e91b66 tcg: Use common code when failing to optimize
This saves a whole lot of repetitive code sequences.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-17 17:32:01 +02:00
Richard Henderson
0bfcb86538 tcg: Swap commutative double-word comparisons
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-17 17:31:57 +02:00
Richard Henderson
1e484e61e2 tcg: Canonicalize add2 operand ordering
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-17 17:31:53 +02:00
Richard Henderson
24c9ae4eba tcg: Split out swap_commutative as a subroutine
Reduces code duplication and prefers

  movcond d, c1, c2, const, s
to
  movcond d, c1, c2, s, const

It also prefers

  add r, r, c
over
  add r, c, r

when both inputs are known constants.  This doesn't matter for true add, as
we will fully constant fold that.  But it matters for a follow-on patch using
this routine for add2 which may not be fully foldable.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-17 17:30:40 +02:00
Richard Henderson
c7d4475a70 tcg-ia64: Implement deposit
Note that in the general reg=reg,reg case we're restricted
to 16-bit insertions.  This makes it easy to allow "any"
constant as input, as post-truncation it will fit into the
constant load insn for which we have room in the bundle.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-17 01:26:43 +02:00
Aurelien Jarno
63975ea7df tcg/ia64: slightly optimize TLB access code
It is possible to slightly optimize the TLB access code, by replacing
the movi + and instructions by a deposit instruction.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-17 01:26:43 +02:00
Aurelien Jarno
2174d1e1ff tcg/ia64: remove suboptimal register shifting in qemu_ld/st ops
Remove suboptimal register shifting in qemu_ld/st ops, introduced at the
CONFIG_TCG_PASS_AREG0 time.

As mem_idx is now loaded in register R58/R59 for the slow path, we have
to make sure to do it last, to not add additional register constraints.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-17 01:26:43 +02:00
Aurelien Jarno
b90cf71692 tcg/ia64: implement movcond_i32/64
Implement movcond_i32/64 on ia64 hosts. It is not possible to have
immediate compare arguments without adding a new bundle, but it is
possible to have 22-bit immediate value arguments.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-17 01:26:42 +02:00
Blue Swirl
da897bf5ae tcg/ia64: use stack for TCG temps
Use stack instead of temp_buf array in CPUState for TCG temps.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-17 01:26:42 +02:00
Peter Maydell
4a1d241e3c tcg/arm: Implement movcond_i32
Implement movcond_i32 for ARM, as the sequence
  mov dst, v2   (implicitly done by the tcg common code)
  cmp c1, c2
  movCC dst, v1

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-17 01:22:49 +02:00
Peter Maydell
7fc645bf7a tcg/arm: Factor out code to emit immediate or reg-reg op
The code to emit either an immediate cmp or a register cmp insn is
duplicated in several places; factor it out into its own function.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-17 01:22:48 +02:00
Richard Henderson
203342d8dc tcg-sparc: Emit MOVR insns for setcond_i64 and movcond_64
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-10-13 10:39:53 +00:00
Richard Henderson
ab1339b9b4 tcg-sparc: Emit BPr insns for brcond_i64
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-10-13 10:39:53 +00:00
Richard Henderson
a115f3ea47 tcg-sparc: Drop use of Bicc in favor of BPcc
Now that we're always sparcv9, we need not bother using Bicc for
32-bit branches and BPcc for 64-bit branches, and can instead always
use BPcc.

New interfaces allow less direct use of tcg_out32 and raw numbers
inside the qemu_ld/st routines.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-10-13 10:39:53 +00:00
Richard Henderson
fd84ea2391 tcg-sparc: Optimize setcond2 equality compare with 0.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-10-13 10:39:53 +00:00
Richard Henderson
89269f6cea tcg-sparc: Use Z constraint for %g0
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-10-13 10:39:53 +00:00
Richard Henderson
4ec28e255f tcg-sparc: Fix add2/sub2
We must take care not to clobber the high parts before we consume them.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-10-13 10:39:53 +00:00
Richard Henderson
7d458a7567 tcg-sparc: Fix setcond
The set of comparisons that can immediately use the carry is LTU/GEU,
not LTU/LEU.  Don't swap operands when we need a temp register; the
register may already be in use from setcond2.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-10-13 10:39:53 +00:00
Richard Henderson
a7a49843d7 tcg-sparc: Fix qemu_st for 32-bit
The datalo variable is still live in the miss path.  Use another
when reconstructing the full data value.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-10-13 10:39:53 +00:00
Richard Henderson
dda73c782f tcg-sparc: Fix setcond2
Like brcond2, use tcg_high_cond.  Use movcc instead of branches.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-10-13 10:39:52 +00:00
Richard Henderson
ded37f0d96 tcg-sparc: Implement movcond.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-10-13 10:39:52 +00:00
Richard Henderson
24c7f75459 tcg-sparc: Fix brcond2
Much the same problem as recently fixed for hppa.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-10-13 10:39:52 +00:00
Peter Maydell
07e10e5de1 tcg: Remove TCG_TARGET_HAS_GUEST_BASE define
GUEST_BASE is now supported by all TCG backends, and is
now mandatory. Drop the now-pointless TCG_TARGET_HAS_GUEST_BASE
define (set by every backend) and the error if it is unset.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
2012-10-12 14:27:05 +03:00
Stefan Weil
d838201111 tcg: Remove redundant pointer from TCGContext
The pointer entry 'temps' always refers to the array entry 'static_temps'.
Removing the pointer and renaming 'static_temps' to 'temps' reduces the
size of TCGContext (4 or 8 bytes) and allows better code generation.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-10-07 16:36:16 +00:00
Aurelien Jarno
048d3612a5 Merge branch 'trivial-patches' of git://github.com/stefanha/qemu
* 'trivial-patches' of git://github.com/stefanha/qemu:
  versatilepb: Use symbolic indices for ARM PIC
  qdev: kill bogus comment
  qemu-barrier: Fix compiler version check for future gcc versions
  hw: Add missing 'static' attribute for QEMUMachine
  cleanup useless return sentence
  qemu-sockets: Fix compiler warning (regression for MinGW)
  vnc: Fix spelling (hellmen -> hellman) in comment
  slirp: Fix spelling in comment (enought -> enough, insure -> ensure)
  tcg/arm: Use tcg_out_mov_reg rather than inline equivalent code
  cpu: Add missing 'static' attribute to qemu_global_mutex
  configure: Support empty target list (--target-list=)
  hw: Fix return value check for bdrv_read, bdrv_write
2012-10-06 18:54:14 +02:00
Richard Henderson
d1e321b82a tcg: Add tcg_high_cond
The table that was recently added for hppa is generally usable.
And with the renumbering of the TCG_COND constants it's not too
difficult to compute rather than have a table.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-06 18:48:40 +02:00
Richard Henderson
0aed257f08 tcg: Add TCG_COND_NEVER, TCG_COND_ALWAYS
There are several cases that can be handled more easily inside both
translators and code generators if we have out-of-band values
for conditions.  It's easy enough to handle ALWAYS and NEVER in
the natural way inside the tcg middle-end.
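
For example, a front end could fold a branch whose condition is known at
translation time roughly like this (a sketch assuming the label-index API
of that era, not code from this patch):

 static void gen_brcond_folded(TCGCond cond, TCGv_i32 a, TCGv_i32 b, int label)
 {
     if (cond == TCG_COND_ALWAYS) {
         tcg_gen_br(label);                      /* unconditional branch */
     } else if (cond != TCG_COND_NEVER) {
         tcg_gen_brcond_i32(cond, a, b, label);  /* ordinary conditional branch */
     }
     /* TCG_COND_NEVER: emit nothing at all */
 }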

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-06 18:48:40 +02:00
Richard Henderson
bcc66562ad tcg: Add is_unsigned_cond
Before we rearrange the TCG_COND enumeration, add a predicate for
the (single) use of comparisons vs TCGCond.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-06 18:48:39 +02:00
Aurelien Jarno
626cd050e2 tcg: remove obsolete jmp op
The TCG jmp operation doesn't really make sense in the QEMU context: it
is unused, it is not implemented by some targets, and it is wrongly
implemented by some others.

This patch simply removes it.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Acked-by: Blue Swirl <blauwirbel@gmail.com>
Acked-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-10-06 18:47:04 +02:00
Peter Maydell
f97713ff19 tcg/arm: Use tcg_out_mov_reg rather than inline equivalent code
Use the recently introduced tcg_out_mov_reg() function rather than
the equivalent inline code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Stefan Hajnoczi <stefanha@gmail.com>
2012-10-05 14:12:36 +02:00
Stefan Weil
6673f47da2 tci: Fix for AREG0 free mode
Support for helper functions with 5 arguments was missing
in the code generator and in the interpreter.

There is no need to pass the constant TCG_AREG0 from the
code generator to the interpreter. Remove that code for
the INDEX_op_qemu_st* opcodes.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-27 21:28:25 +02:00
Aurelien Jarno
f813cb838f tcg/i386: fix build with -march < i686
The movcond_i32 op has to be protected with TCG_TARGET_HAS_movcond_i32
to fix the build with -march < i686.

Thanks to Richard Henderson for the hint.

Reported-by: Alex Barcelo <abarcelo@ac.upc.edu>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-26 00:31:50 +02:00
Richard Henderson
a80a6b63e3 tcg: Streamline movcond_i64 using movcond_i32
When movcond_i32 is available we can further reduce the generated
op count from 12 to 6, and the generated code size on i686 from
88 to 74 bytes.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-26 00:31:17 +02:00
Richard Henderson
a463133ee2 tcg: Streamline movcond_i64 using 32-bit arithmetic
Avoiding 64-bit arithmetic (outside of the compare) reduces the
generated op count from 15 to 12, and the generated code size on
i686 from 105 to 88 bytes.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-26 00:31:17 +02:00
Richard Henderson
0a209d4bb1 tcg: Sanity check goto_tb input
Checking that we don't try for idx != [01] is trivial.  Checking
that we don't issue more than one of any index requires a tad
more data and some ifdefs protecting that new variable.
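
A hedged sketch of the two checks (the name of the debug-only field is
assumed here):

 void tcg_gen_goto_tb(unsigned idx)
 {
     /* the TB slot index must be 0 or 1 */
     tcg_debug_assert(idx <= 1);
 #ifdef CONFIG_DEBUG_TCG
     /* each slot may be issued at most once per TB */
     tcg_debug_assert((tcg_ctx.goto_tb_issue_mask & (1 << idx)) == 0);
     tcg_ctx.goto_tb_issue_mask |= 1 << idx;
 #endif
     tcg_gen_op1i(INDEX_op_goto_tb, idx);
 }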

Signed-off-by: Richard Henderson <rth@twiddle.net>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-26 00:31:17 +02:00
Richard Henderson
717e70368b tcg: Sanity check deposit inputs
Given these are constants, checking once here means everything
after can assume they're correct.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-26 00:31:16 +02:00
Richard Henderson
c552d6c038 tcg: Add tcg_debug_assert
Like the C assert macro, except only enabled for CONFIG_DEBUG_TCG,
and without having to set NDEBUG and disable all other asserts at
the same time.

The use of __builtin_unreachable (when available) gives the compiler
the same information, which may (or may not) help it optimize better.
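
A sketch of what such a macro can look like (assumed form, not the literal
tcg.h definition):

 #ifdef CONFIG_DEBUG_TCG
 # define tcg_debug_assert(X) do { assert(X); } while (0)
 #elif defined(__GNUC__)
 /* tell the compiler the condition holds, without any runtime check */
 # define tcg_debug_assert(X) do { if (!(X)) { __builtin_unreachable(); } } while (0)
 #else
 # define tcg_debug_assert(X) do { (void)(X); } while (0)
 #endif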

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-26 00:31:16 +02:00
Richard Henderson
77276f6581 tcg: Implement concat*_i64 with deposit_i64
For tcg_gen_concat_i32_i64 we only use deposit if the host supports it.
For tcg_gen_concat32_i64 we use it even if the host does not, as we get
identical code before and after.

Note that this relies on the ANDI -> EXTU patch for the identity claim.
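
The idea, sketched with the public tcg_gen_* interface (hypothetical
wrapper name):

 /* build a 64-bit value from two 32-bit halves with a single deposit:
    dest[31:0] = lo, dest[63:32] = hi */
 static void concat32_i64_sketch(TCGv_i64 dest, TCGv_i64 lo, TCGv_i64 hi)
 {
     tcg_gen_deposit_i64(dest, lo, hi, 32, 32);
 }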

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-26 00:31:16 +02:00
Richard Henderson
6f3bb33eaa tcg: Emit XORI as NOT for appropriate constants
Note that xori_i64 failed to perform even the minimal
optimizations promised by the README.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-26 00:31:16 +02:00
Richard Henderson
d81ada7fa4 tcg: Optimize initial inputs for ori_i64
Copy the same optimizations from ori_i32.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-26 00:31:16 +02:00
Richard Henderson
42ce3e2015 tcg: Emit ANDI as EXTU for appropriate constants
Note that andi_i64 failed to perform even the minimal
optimizations promised by the README.
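
A hedged sketch of the special-casing (assumed shape; the real change
lives in tcg_gen_andi_i64):

 static void andi_i64_sketch(TCGv_i64 ret, TCGv_i64 arg, uint64_t c)
 {
     if (c == 0xffull) {
         tcg_gen_ext8u_i64(ret, arg);     /* AND with 0xff is an 8-bit zero-extension */
     } else if (c == 0xffffull) {
         tcg_gen_ext16u_i64(ret, arg);
     } else if (c == 0xffffffffull) {
         tcg_gen_ext32u_i64(ret, arg);
     } else {
         TCGv_i64 t = tcg_const_i64(c);   /* generic case: materialize the constant */
         tcg_gen_and_i64(ret, arg, t);
         tcg_temp_free_i64(t);
     }
 }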

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-26 00:31:16 +02:00
Richard Henderson
5a696f6ac0 tcg: Adjust descriptions of *cond opcodes
The README file documented the operand ordering of the tcg_gen_*
functions.  Since we're documenting opcodes here, use the true
operand ordering.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Cc: malc <av1474@comtv.ru>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-26 00:31:16 +02:00
Aurelien Jarno
8f06bf693d tcg/mips: fix MIPS32(R2) detection
Fix the MIPS32(R2) cpu detection so that it also works with
-march=octeon. Thanks to Andrew Pinski for the hint.

Cc: Andrew Pinski <apinski@cavium.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-26 00:31:16 +02:00
Blue Swirl
04cbbdeefd Merge branch 'tcg-sparc' of git://repo.or.cz/qemu/rth
* 'tcg-sparc' of git://repo.or.cz/qemu/rth:
  tcg-sparc: Preserve branch destinations during retranslation
  tcg-sparc: Fix and enable direct TB chaining.
  tcg-sparc: Add %g/%o registers to alloc_order
  tcg-sparc: Use defines for temporaries.
  tcg-sparc: Mask shift immediates to avoid illegal insns.
  tcg-sparc: Clean up cruft stemming from attempts to use global registers.
  tcg-sparc: Change AREG0 in generated code to %i0.
  tcg-sparc: Support GUEST_BASE.
  tcg-sparc: Fix qemu_ld/st to handle 32-bit host.
  tcg-sparc: Assume v9 cpu always, i.e. force v8plus in 32-bit mode.
  tcg-sparc: Don't MAP_FIXED on top of the program
  tcg-sparc: Fix ADDX opcode.
  tcg-sparc: Hack in qemu_ld/st64 for 32-bit.
  linux-user: Use memcpy in get_user/put_user.
2012-09-22 17:59:15 +00:00
Aurelien Jarno
e809c0dc70 Revert "tcg/mips"
This reverts commit ad49d1f751.

This commit was not supposed to be pushed.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-22 19:24:49 +02:00
malc
23f3ff2604 tcg/ppc32: Implement movcond32
Thanks to Richard Henderson

Signed-off-by: malc <av1474@comtv.ru>
2012-09-22 19:16:51 +04:00
Aurelien Jarno
ad49d1f751 tcg/mips
2012-09-22 17:07:23 +02:00
Stefan Weil
6e17d0c5cd tcg: Remove tcg_target_get_call_iarg_regs_count
The TCG targets no longer need individual implementations.

Since commit 6a18ae2d29,
'flags' is no longer used in tcg_target_get_call_iarg_regs_count.

The remaining tcg_target_get_call_iarg_regs_count is trivial and only
called once. Therefore the patch eliminates it completely.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-22 16:52:37 +02:00
Stefan Weil
d73685e3c3 tcg/i386: Remove unused registers from tcg_target_call_iarg_regs
32 bit x86 hosts don't need registers for helper function arguments
because they use the default stack-based calling convention.

Removing the registers allows simpler code for function
tcg_target_get_call_iarg_regs_count.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-22 16:52:37 +02:00
Stefan Weil
b18212c668 tcg/i386: Add shortcuts for registers used in L constraint
While 64 bit hosts use the first three registers which are also used
as function input parameters, 32 bit hosts use TCG_REG_EAX and
TCG_REG_EDX which are not used in parameter passing.

After defining new register macros for the registers used in L
constraint, the patch replaces most occurrences of
tcg_target_call_iarg_regs[0], tcg_target_call_iarg_regs[1] and
tcg_target_call_iarg_regs[2] by those new macros.

tcg_target_call_iarg_regs remains unchanged when it is used for input
arguments (only with 64 bit hosts) before tcg_out_calli.

A comment related to those registers was fixed, too.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
[aurel32: build fix on i386, small optimization for i386 in the prologue]
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-22 16:52:37 +02:00
Stefan Weil
1b7621ad99 w64: Fix TCG helper functions with 5 arguments
TCG uses 6 registers for function arguments on 64 bit Linux hosts,
but only 4 registers on W64 hosts.

Commit 2999a0b200 increased the number
of arguments for some important helper functions from 4 to 5
which triggered a bug for W64 hosts: QEMU aborts when executing
helper_lcall_real in the guest's BIOS because function
tcg_target_get_call_iarg_regs_count always returned 6.

As W64 has only 4 registers for arguments, the 5th argument must be
passed on the stack using a correct stack offset.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-22 15:10:22 +02:00
Max Filippov
9bacf41431 tcg/README: document tcg_gen_goto_tb restrictions
See
http://lists.nongnu.org/archive/html/qemu-devel/2012-09/msg03196.html
for the whole story.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-22 15:10:22 +02:00
Richard Henderson
f0da375754 tcg-hppa: Implement movcond
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-22 15:10:22 +02:00
Aurelien Jarno
7ef55fc919 tcg/optimize: add constant folding for deposit
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-22 15:10:21 +02:00
Aurelien Jarno
fba3161fd2 tcg: remove #ifdef #endif around TCGOpcode tests
Commit 25c4d9cc changed all TCGOpcode enums to be available, so we don't
need to #ifdef/#endif the ones that are available only on some targets.
This makes the code easier to read.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-22 15:10:21 +02:00
Aurelien Jarno
c2b0e2fea2 tcg/optimize: prefer the "op a, a, b" form for commutative ops
The "op a, a, b" form is better handled on non-RISC host than the "op
a, b, a" form, so swap the arguments to this form when possible, and
when b is not a constant.

This reduces the number of generated instructions by a tiny bit.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-22 15:10:21 +02:00
Aurelien Jarno
b336ceb691 tcg/optimize: further optimize brcond/movcond/setcond
When both arguments of brcond/movcond/setcond are the same or when one
of the two values is a constant equal to zero, it's possible to do
further optimizations.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-22 15:10:21 +02:00
Aurelien Jarno
3c94193e0b tcg/optimize: optimize "op r, a, a => movi r, 0"
Now that it's possible to detect copies, we can optimize the
"op r, a, a => movi r, 0" case. This helps in the computation of
overflow flags when one of the two args is 0.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-22 15:10:21 +02:00
Aurelien Jarno
0aba1c7376 tcg/optimize: optimize "op r, a, a => mov r, a"
Now that we can easily detect all copies, we can optimize the
"op r, a, a => mov r, a" case a bit more.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-22 15:10:21 +02:00
Aurelien Jarno
1ff8c5418a tcg/optimize: do copy propagation for all operations
It is possible to do copy propagation for all operations, even the ones
that have side effects or clobber arguments (it only concerns input
arguments). That said, the call operation should be handled differently
due to the variable number of arguments.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-22 15:10:21 +02:00
Aurelien Jarno
e590d4e6b3 tcg/optimize: rework copy propagation
The copy propagation pass tries to keep track of what is a copy of what
and what has a copy of what, and in addition it keeps a circular list of
all the copies. Unfortunately this doesn't fully work: a mov from
a temp which has the state "COPY" changed it into the state "HAS_COPY".
Later when this temp is used again, it is considered as not having a
copy and thus no propagation is done.

This patch fixes that by removing the hierarchy between copies, and thus
only keeping a "COPY" state, meaning both "is a copy" and "has a copy".
The decision of which copy to use is deferred to the actual temp
replacement. At this stage there is not one best choice, but only
choices better than others. To make the best choice, the operations
would have to be parsed in reverse to know whether a temp is going to be
used later or not. That is what the liveness analysis does. At this
stage it is known that globals will always be live, that local temps
will be dead at the end of the translation block, and that the temps
will be dead at the end of the basic block. This means that this stage
should try to replace temps with local temps or globals, and local temps
with globals.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-22 15:10:21 +02:00
Aurelien Jarno
b80bb016d8 tcg/optimize: check types in copy propagation
The copy propagation pass doesn't check the types of the temps. However
TCG uses mov_i32 for the i64-to-i32 conversion, and thus the two are not
equivalent.

With this patch tcg_opt_gen_mov() no longer considers two temps of
different types as copies.

So far it seems the optimization was not aggressive enough to trigger
this bug, but it will be triggered later in this series once the copy
propagation is improved.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-22 15:10:20 +02:00
Aurelien Jarno
48b56ce168 tcg/optimize: remove TCG_TEMP_ANY
TCG_TEMP_ANY has no different meaning than TCG_TEMP_UNDEF, so use
the latter instead.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-22 15:10:20 +02:00
Aurelien Jarno
7d7c4930ab tcg/mips: implement movcond op on MIPS32R2
movcond operation can be implemented on MIPS32 Release 2 using the MOVN,
MOVZ, SLT and SLTU instructions.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-22 15:10:20 +02:00
Aurelien Jarno
04f71aa3fd tcg/mips: implement deposit op on MIPS32R2
deposit operations can be optimized on MIPS32 Release 2 using the INS
instruction.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-22 15:10:20 +02:00
Aurelien Jarno
9a152519a9 tcg/mips: implement rotl/rotr ops on MIPS32R2
rotr operations can be optimized on MIPS32 Release 2 using the ROTR and
ROTRV instructions. rotl operations are also implemented by subtracting
the shift amount from 32.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-22 15:10:20 +02:00
Aurelien Jarno
c1cf85c9ac tcg/mips: optimize bswap{16,16s,32} on MIPS32R2
bswap operations can be optimized on MIPS32 Release 2 using the ROTR,
WSBH and SEH instructions. We can't use the non-R2 code to implement the
ops due to registers constraints, so don't define the corresponding
TCG_TARGET_HAS_bswap* values.

Also bswap16* operations are supposed to be called with the 16 high bits
zeroed. This is the case everywhere (including for TCG by definition)
except when called from the store helper. Remove the AND instructions from
bswap16* and move them there.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-22 15:10:20 +02:00
Aurelien Jarno
0f46c064ee tcg/mips: optimize brcond arg, 0
MIPS has conditional branch instructions for comparing against zero.
Use them.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-22 15:10:20 +02:00
Aurelien Jarno
0d0b53a670 tcg/mips: use stack for TCG temps
Use stack instead of temp_buf array in CPUState for TCG
temps.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-22 15:10:19 +02:00
Aurelien Jarno
3314e0089f tcg/mips: don't use global pointer
Don't use the global pointer in TCG, in case helpers try to access global
variables.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-22 15:10:19 +02:00
Aurelien Jarno
5a0eed379d tcg/mips: use TCGArg or TCGReg instead of int
Instead of int, use the correct TCGArg and TCGReg type: TCGReg when
representing a TCG target register, TCGArg when representing the latter
or a constant.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-22 15:10:19 +02:00
Aurelien Jarno
0834c9eac3 tcg/mips: kill warnings in user mode
Recent versions of GCC emit warnings when compiling user mode targets.
Kill them by reordering the #ifdefs a bit.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-22 15:10:19 +02:00
Aurelien Jarno
2ceb3a9e0f tcg-mips: fix wrong usage of 'Z' constraint
The 'Z' constraint has been introduced to map the zero register. However
when the op also accepts a constant, there is no point in also accepting
the zero register.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2012-09-22 15:10:19 +02:00
Richard Henderson
f4bf0b912e tcg-sparc: Preserve branch destinations during retranslation
Signed-off-by: Richard Henderson <rth@twiddle.net>
2012-09-21 22:02:21 +02:00
Richard Henderson
5bbd2cae8e tcg-sparc: Fix and enable direct TB chaining.
Signed-off-by: Richard Henderson <rth@twiddle.net>
2012-09-21 22:02:20 +02:00
Richard Henderson
26adfb759c tcg-sparc: Add %g/%o registers to alloc_order
Signed-off-by: Richard Henderson <rth@twiddle.net>
2012-09-21 22:02:20 +02:00
Richard Henderson
375816f84b tcg-sparc: Use defines for temporaries.
And change from %i4/%i5 to %g1/%o7 to remove a v8plus fixme.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2012-09-21 22:02:19 +02:00
Richard Henderson
1fd9594665 tcg-sparc: Mask shift immediates to avoid illegal insns.
The xtensa-test image generates a sra_i32 with count 0x40.
Whether this is an accident of tcg constant propagation or
originates directly from the instruction stream is immaterial.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2012-09-21 22:02:19 +02:00
Richard Henderson
4c3204cb12 tcg-sparc: Clean up cruft stemming from attempts to use global registers.
Don't use -ffixed-gN.  Don't link statically.  Don't save/restore
AREG0 around calls.  Don't allocate space on the stack for AREG0 save.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2012-09-21 22:02:19 +02:00
Richard Henderson
0c554161b6 tcg-sparc: Change AREG0 in generated code to %i0.
We can now move the TCG variable from %g[56] to a call-preserved
windowed register.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2012-09-21 22:02:18 +02:00
Richard Henderson
c6f7e4fb9a tcg-sparc: Support GUEST_BASE.
Signed-off-by: Richard Henderson <rth@twiddle.net>
2012-09-21 22:02:18 +02:00
Richard Henderson
a0ce341aac tcg-sparc: Fix qemu_ld/st to handle 32-bit host.
At the same time, split out the tlb load logic to a new function.
Fixes the cases of two data registers and two address registers.
Fixes the signature of, and adds missing, qemu_ld/st opcodes.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2012-09-21 22:02:17 +02:00
Richard Henderson
9b9c37c364 tcg-sparc: Assume v9 cpu always, i.e. force v8plus in 32-bit mode.
Current code doesn't actually work in 32-bit mode at all.  Since
no one really noticed, drop the complication of v7 and v8 cpus.
Eliminate the --sparc_cpu configure option and standardize macro
testing on TCG_TARGET_REG_BITS / HOST_LONG_BITS.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2012-09-21 22:02:16 +02:00