migration/qemu-file.h neglects to include it even though it needs
ram_addr_t. Fix that. Drop a few superfluous inclusions elsewhere.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190812052359.30071-14-armbru@redhat.com>
This patch fixes two problems:
(1) The inputs to the EXTR insn were reversed.
(2) The input constraints use rZ, which means that we need to use
    the REG0 macro in order to supply XZR for a constant 0 input.
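For the first problem, a scalar model of EXTR's operand order may help
(a hypothetical helper, not the backend code); swapping hi and lo here
reproduces the bug:

    #include <stdint.h>

    /* EXTR Rd, Rn, Rm, #lsb extracts 64 bits from the concatenation
     * hi:lo starting at bit lsb (valid for 0 < lsb < 64). */
    static inline uint64_t extr64(uint64_t hi, uint64_t lo, unsigned lsb)
    {
        return (lo >> lsb) | (hi << (64 - lsb));
    }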
Fixes: 464c2969d5
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
On a 64-bit host, discard any replications of the 32-bit
sign bit when performing the shift and merge.
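In scalar terms, the hazard looks like this (hypothetical helper, not
the patched code); the casts are what discard the replicated bits:

    #include <stdint.h>

    /* A 32-bit value held sign-extended in a 64-bit host register must
     * be truncated before the shift-and-merge, or replications of bit 31
     * leak into the result (valid for 0 < shift < 32). */
    static inline uint32_t shift_merge(int64_t lo, int64_t hi, unsigned shift)
    {
        return ((uint32_t)lo >> shift) | ((uint32_t)hi << (32 - shift));
    }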
Fixes: https://bugs.launchpad.net/bugs/1834496
Tested-by: Christophe Lyon <christophe.lyon@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This operation can always be emitted, even if we need to
fall back to xor. Adjust the assertions to match.
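The fallback rests on a simple identity, shown here in scalar form:

    #include <stdint.h>

    /* NOT is XOR with all-ones, so a backend lacking a vector NOT can
     * always substitute an XOR against an all-ones constant. */
    static inline uint64_t not_via_xor(uint64_t a)
    {
        return a ^ ~(uint64_t)0;
    }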
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Commit 269bd5d8 "cpu: Move the softmmu tlb to CPUNegativeOffsetState"
broke the RISC-V host build as there are two variables that are used but
not defined.
This patch replaces the undefined names mask_off and table_off with
the existing (but previously unused) mask_ofs and table_ofs variables.
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <79729cc88ca509e08b5c4aa0aa8a52847af70c0f.1561039316.git.alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
In !CONFIG_USER_ONLY mode, the context can obtain its MachineState
reference through the call chain or, failing that, through the
discouraged qdev_get_machine(). Where the reference is used more than
once, a local variable of the same name is introduced among the
declarations; where it is used only once, the call is replaced on the
spot. No semantic changes.
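A sketch of the pattern (the helper and its use are illustrative, not
quoted from the patch):

    #include "qemu/osdep.h"
    #include "hw/boards.h"    /* MachineState, MACHINE() */

    /* Hypothetical example: take the cpu count from the MachineState,
     * obtained here via qdev_get_machine() for want of a call-chain
     * reference (works only in !CONFIG_USER_ONLY builds). */
    static bool cpu_index_valid(unsigned int cpu_index)
    {
        MachineState *ms = MACHINE(qdev_get_machine());
        return cpu_index < ms->smp.cpus;
    }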
Signed-off-by: Like Xu <like.xu@linux.intel.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20190518205428.90532-4-like.xu@linux.intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
No header includes qemu-common.h after this commit, as prescribed by
qemu-common.h's file comment.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190523143508.25387-5-armbru@redhat.com>
[Rebased with conflicts resolved automatically, except for
include/hw/arm/xlnx-zynqmp.h hw/arm/nrf51_soc.c hw/arm/msf2-soc.c
block/qcow2-refcount.c block/qcow2-cluster.c block/qcow2-cache.c
target/arm/cpu.h target/lm32/cpu.h target/m68k/cpu.h target/mips/cpu.h
target/moxie/cpu.h target/nios2/cpu.h target/openrisc/cpu.h
target/riscv/cpu.h target/tilegx/cpu.h target/tricore/cpu.h
target/unicore32/cpu.h target/xtensa/cpu.h; bsd-user/main.c and
net/tap-bsd.c fixed up]
There was nothing armv7 specific about the bic+cmp sequence; however,
looking at the set of guests more closely shows that the 8-bit immediate
operand for the bic can only be satisfied by one guest in tree:
baseline m-profile -- 10-bit pages with aligned 4-byte memory ops.
Therefore it does not seem useful to keep this path.
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This changes the code generation for the tlb from e.g.

    ldr ip, [r6, #-0x10]
    ldr r2, [r6, #-0xc]
    and ip, ip, r4, lsr #8
    ldrd r0, r1, [r2, ip]!
    ldr r2, [r2, #0x18]

to

    ldrd r0, r1, [r6, #-0x10]
    and r0, r0, r4, lsr #8
    ldrd r2, r3, [r1, r0]!
    ldr r1, [r1, #0x18]
for armv7 hosts. Rearranging the register allocation in
order to avoid overlap between the two ldrd pairs causes
the patch to be larger than it ordinarily would be.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We have for some time had code within the tcg backends to
handle large positive offsets from env. This move makes
sure that need not happen. Indeed, we are able to assert
at build time that simple offsets suffice for all hosts.
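The build-time check takes roughly this shape (a sketch; the
displacement bound varies per backend and the exact expression differs):

    #include "qemu/osdep.h"    /* QEMU_BUILD_BUG_ON() */

    /* Assert at build time that the tlb mask/table words lie within a
     * small signed displacement below env; the 12-bit bound shown here
     * is illustrative, not any particular host's limit. */
    QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(NB_MMU_MODES - 1) < -(1 << 11));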
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Move all softmmu tlb data into this structure. Arrange the
members so that we are able to place mask+table together and
at a smaller absolute offset from ENV.
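A rough sketch of the layout idea (types and names abbreviated, not the
exact definitions):

    #include <stdint.h>

    /* Keeping mask and table adjacent lets the fast path load both
     * with a single pair of small negative displacements from env. */
    typedef struct TLBFastSketch {
        uintptr_t mask;     /* (n_entries - 1) << shift of the table */
        uintptr_t table;    /* address of the CPUTLBEntry array */
    } TLBFastSketch;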
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This instruction raises #GP, aka SIGSEGV, if the effective address
is not aligned to 16 bytes.
We have assertions in tcg-op-gvec.c that the offset from ENV is
aligned, for vector types <= V128. But the offset itself does not
validate that the final pointer is aligned -- one must also remember
to use the QEMU_ALIGNED() attribute on the vector member within ENV.
PowerPC Altivec has vector load/store instructions that silently
discard the low 4 bits of the address, making alignment mistakes
difficult to discover. Aid that by making the most popular host
visibly signal the error.
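For reference, over-aligning a vector member inside the arch state looks
like this (the struct and field names are hypothetical; QEMU_ALIGNED is
the real macro):

    #include <stdint.h>
    #include "qemu/compiler.h"    /* QEMU_ALIGNED() */

    typedef struct VRegSketch {
        /* 16-byte alignment keeps aligned host vector accesses legal */
        uint64_t d[2] QEMU_ALIGNED(16);
    } VRegSketch;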
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This allows immediates to be used for ORR and BIC,
as well as the trivial inversions, ORC and AND.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Use MOVI+ORR or MVNI+BIC in order to build some vector constants,
as opposed to dropping them to the constant pool. This includes
all 16-bit constants and a similar set of 32-bit constants.
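A sketch of the kind of immediate test involved (hypothetical helpers,
not the backend code): a constant qualifies directly for MOVI if it is
a single byte within its lane, and for MVNI via its complement; two
such pieces then combine as MOVI+ORR or MVNI+BIC.

    #include <stdint.h>
    #include <stdbool.h>

    /* Is v a single byte at a byte-aligned position in a 32-bit lane? */
    static bool is_shifted_imm8(uint32_t v)
    {
        for (int shift = 0; shift < 32; shift += 8) {
            if ((v & ~((uint32_t)0xff << shift)) == 0) {
                return true;
            }
        }
        return false;
    }

    static bool is_movi_or_mvni(uint32_t v)
    {
        return is_shifted_imm8(v) || is_shifted_imm8(~v);
    }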
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
There are several sub-classes of vector immediate, and only MOVI
can use them all. This will enable usage of MVNI and ORRI, which
use progressively fewer sub-classes.
This patch adds no new functionality, merely splits the function
and moves part of the logic into tcg_out_dupi_vec.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The instruction set has 3 insns that perform the same operation,
only varying in which operand must overlap the destination. We
can represent the operation without overlap and choose based on
the operands seen.
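On AArch64, for instance, the trio is BSL/BIT/BIF. Scalar models of the
three forms (a sketch; all compute (b & a) | (c & ~a) with a as the
selector, and the first parameter names the operand the destination
must overlap):

    #include <stdint.h>

    static uint64_t bsl(uint64_t d_a, uint64_t b, uint64_t c)
    {
        return (b & d_a) | (c & ~d_a);   /* dest overlaps the selector */
    }

    static uint64_t bit(uint64_t d_c, uint64_t b, uint64_t a)
    {
        return (b & a) | (d_c & ~a);     /* dest overlaps the false input */
    }

    static uint64_t bif(uint64_t d_b, uint64_t c, uint64_t a)
    {
        return (c & ~a) | (d_b & a);     /* dest overlaps the true input */
    }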
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Using umin(a, b) == a as an expansion for TCG_COND_LEU is a
better alternative to (a - INT_MIN) <= (b - INT_MIN).
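The identity in scalar form:

    #include <stdint.h>
    #include <stdbool.h>

    /* For unsigned a and b:  a <= b  exactly when  min(a, b) == a. */
    static bool leu_via_umin(uint32_t a, uint32_t b)
    {
        uint32_t m = a < b ? a : b;
        return m == a;
    }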
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We already had backend support for this feature. Expand the new
cmpsel opcode using vpblendvb. The combination allows us to avoid
an extra NOT for some comparison codes.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
If INDEX_op_foo is always expanded by tcg_expand_vec_op, then
there may be no reasonable set of constraints to return from
tcg_target_op_def for that opcode.
Let TCG_TARGET_HAS_foo be specified as -1 in that case. Thus a
boolean test for TCG_TARGET_HAS_foo is true, but we will not
assert within process_op_defs when no constraints are specified.
Compare this with tcg_can_emit_vec_op, which already uses this
tri-state indication.
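A sketch of the convention ('foo' is a placeholder opcode name):

    /*  0 -> not supported
     *  1 -> supported: the backend emits it and provides constraints
     * -1 -> always expanded by tcg_expand_vec_op: no constraints;
     *       boolean tests like `if (TCG_TARGET_HAS_foo)` still pass */
    #define TCG_TARGET_HAS_foo  -1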
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Perform a per-element conditional move. This combination operation is
easier to implement on some host vector units than plain cmp+bitsel.
Omit the usual gvec interface, as this is intended to be used by
target-specific gvec expansion call-backs.
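Element-wise, the operation behaves like this (a model with a fixed
signed-less-than condition; the real opcode is parameterized by TCGCond):

    #include <stdint.h>
    #include <stddef.h>

    static void cmpsel_i32(int32_t *r, const int32_t *a, const int32_t *b,
                           const int32_t *c, const int32_t *d, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            r[i] = a[i] < b[i] ? c[i] : d[i];  /* per-element cond move */
        }
    }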
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This operation performs d = (b & a) | (c & ~a), and is present
on a majority of host vector units. Include gvec expanders.
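In scalar form:

    #include <stdint.h>

    /* a selects: b supplies the bits where a is 1, c where a is 0. */
    static inline uint64_t bitsel(uint64_t a, uint64_t b, uint64_t c)
    {
        return (b & a) | (c & ~a);
    }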
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The paths through tcg_gen_dup_mem_vec and through MO_128 were
missing the check_size_align. The path through MO_128 was also
missing the expand_clr. This last was not visible because the
only user is ARM SVE, which would set oprsz == maxsz, and not
require the clear.
Fix by adding the check_size_align and using do_dup directly
instead of duplicating the check in tcg_gen_gvec_dup_{i32,i64}.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The VBROADCASTSD instruction only allows %ymm registers as destination.
Rather than forcing VEX.L and writing to the entire 256-bit register,
revert to using MOVDDUP with an %xmm register. This is sufficient for
an avx1 host since we do not support TCG_TYPE_V256 for that case.
Also fix the 32-bit avx2 path, which should have used VPBROADCASTW.
Fixes: 1e262b49b5
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reported-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Remove a function of the same name from target/arm/.
Use a branchless implementation of abs gleaned from gcc.
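The idiom in scalar form:

    #include <stdint.h>

    /* m is 0 for non-negative x and -1 for negative x (arithmetic
     * shift), so (x ^ m) - m negates exactly when x is negative. */
    static inline int64_t abs64(int64_t x)
    {
        int64_t m = x >> 63;
        return (x ^ m) - m;
    }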
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Allow expansion either via shift by scalar or by replicating
the scalar for shift by vector.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
v3: Use a private structure for do_gvec_shifts.
The gvec expanders perform a modulo on the shift count. If the target
requires alternate behaviour, then it cannot use the generic gvec
expanders anyway, and will have to have its own custom code.
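The wrap itself is a single mask (a sketch; vece encodes log2 of the
element size in bytes, so 8 << vece is the element width in bits):

    #include <stdint.h>

    static inline unsigned wrap_shift_count(unsigned shift, unsigned vece)
    {
        return shift & ((8u << vece) - 1);
    }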
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Allow the backend to expand dup from memory directly, instead of
forcing the value into a temp first. This is especially important
if integer/vector register moves do not exist.
Note that officially tcg_out_dupm_vec is allowed to fail.
If it did, we could fix this up relatively easily:
    VECE == 32/64:
        Load the value into a vector register, then dup.
        Both of these must work.

    VECE == 8/16:
        If the value happens to be at an offset such that an aligned
        load would place the desired value in the least significant
        end of the register, go ahead and load w/garbage in high bits.

        Load the value w/INDEX_op_ld{8,16}_i32.
        Attempt a move directly to vector reg, which may fail.
        Store the value into the backing store for OTS.
        Load the value into the vector reg w/TCG_TYPE_I32, which must work.
        Duplicate from the vector reg into itself, which must work.
All of which is well and good, except that all supported
hosts can support dupm for all vece, so all of the failure
paths would be dead code and untestable.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The LD1R instruction does all the work. Note that the only
useful addressing mode is a base register with no offset.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This case is similar to INDEX_op_mov_* in that we need to do
different things depending on the current location of the source.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
v3: Added some commentary to the tcg_reg_alloc_* functions.
The i386 backend already has these functions, and the aarch64 backend
could easily split out one. Nothing is done with these functions yet,
but this will aid register allocation of INDEX_op_dup_vec in a later patch.
Adjust the aarch64 tcg_out_dupi_vec signature to match the new interface.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
PowerPC Altivec does not support direct moves between vector registers
and general registers. So when tcg_out_mov fails, we can use the
backing memory for the temporary to perform the move.
Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This patch merely changes the interface, aborting on all failures,
of which there are currently none.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We have a function that takes an additional condition parameter
over the standard backend interface. It already takes care of
eliding no-op moves.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The only fixed_reg is cpu_env, and it should not be modified
during any TB. Therefore code that tries to special-case moves
into a fixed_reg is dead. Remove it.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Replace the single opcode in .opc with a null-terminated
array in .opt_opc. We still require that all opcodes be
used with the same .vece.
Validate the contents of this list with CONFIG_DEBUG_TCG.
All tcg_gen_*_vec functions will check any list active
during .fniv expansion. Swap the active list in and out
as we expand other opcodes, or take control away from the
front-end function.
Convert all existing vector aware front ends.
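A sketch of the resulting usage (the expander name is hypothetical;
GVecGen3, .opt_opc and .vece follow the interface described above, and
the tcg gvec headers are assumed to be in scope):

    static const TCGOpcode vecop_list[] = { INDEX_op_add_vec, 0 };

    static const GVecGen3 g = {
        .fniv = gen_myop_vec,     /* hypothetical .fniv expander */
        .opt_opc = vecop_list,    /* null-terminated; one .vece for all */
        .vece = MO_8,
    };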
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>