qemu/target
Peter Maydell 266bd25c48 target/arm: Convert VFP VMLA to decodetree
Convert the VFP VMLA instruction to decodetree.

This is the first of the VFP 3-operand data processing instructions,
so we include in this patch the code which loops over the elements
for an old-style VFP vector operation. The existing code to do this
looping uses the deprecated cpu_F0s/F0d/F1s/F1d TCG globals; since
we are going to be converting instructions one at a time anyway
we can take the opportunity to make the new loop use TCG temporaries,
which means we can do that conversion one operation at a time
rather than needing to do it all in one go.
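
Roughly, the new double-precision loop has the shape below. This is a
condensed sketch rather than the literal patch: the helpers it leans on
(tcg_temp_new_i64(), neon_load_reg64(), neon_store_reg64(),
get_fpstatus_ptr(), the VFPGen3OpDPFn callback) are QEMU internals,
vfp_advance_dreg() is only a placeholder name for the register-advance
expression, and the stride setup is elided.

  /*
   * Condensed sketch of the new element loop for a 3-operand
   * double-precision VFP op; not the literal patch.
   */
  static bool do_vfp_3op_dp_sketch(DisasContext *s, VFPGen3OpDPFn *fn,
                                   int vd, int vn, int vm, bool reads_vd)
  {
      int veclen = s->vec_len;
      uint32_t delta_d = 0, delta_m = 0;   /* 0 means "scalar, don't step" */
      TCGv_i64 f0 = tcg_temp_new_i64();
      TCGv_i64 f1 = tcg_temp_new_i64();
      TCGv_i64 fd = tcg_temp_new_i64();
      TCGv_ptr fpst = get_fpstatus_ptr(0);

      /* ... set delta_d/delta_m from veclen/vec_stride and from whether
       * vd and vm sit in a scalar or a vector register bank ... */

      neon_load_reg64(f0, vn);
      neon_load_reg64(f1, vm);

      for (;;) {
          if (reads_vd) {
              neon_load_reg64(fd, vd);   /* e.g. VMLA reads the accumulator */
          }
          fn(fd, f0, f1, fpst);          /* result lands in its own temporary */
          neon_store_reg64(fd, vd);

          if (veclen == 0) {
              break;
          }
          /* Set up the operands for the next iteration */
          veclen--;
          vd = vfp_advance_dreg(vd, delta_d);   /* placeholder helper */
          vn = vfp_advance_dreg(vn, delta_d);
          neon_load_reg64(f0, vn);
          if (delta_m) {
              vm = vfp_advance_dreg(vm, delta_m);
              neon_load_reg64(f1, vm);
          }
      }

      tcg_temp_free_i64(f0);
      tcg_temp_free_i64(f1);
      tcg_temp_free_i64(fd);
      tcg_temp_free_ptr(fpst);
      return true;
  }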

We include an UNDEF check which was missing in the old code:
short-vector operations (with stride or length non-zero) were
deprecated in v7A and must UNDEF in v8A, so if the MVFR0 FPShVec
field does not indicate that support for short vectors is present,
we UNDEF the operations that would use them. (This is a change
of behaviour for Cortex-A7, Cortex-A15 and the v8 CPUs, which
previously were all incorrectly allowing short-vector operations.)
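
The check itself is small; sketched below assuming the dc_isar_feature()
machinery with an aa32_fpshvec feature test derived from MVFR0.FPShVec
(returning false from a trans_* function makes the insn UNDEF):

  /* Sketch of the new UNDEF check; assumed feature-test name. */
  if ((s->vec_len != 0 || s->vec_stride != 0) &&
      !dc_isar_feature(aa32_fpshvec, s)) {
      return false;
  }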

Note that the conversion fixes a bug in the old code for the
case of VFP short-vector "mixed scalar/vector operations". These
happen where the destination register is in a vector bank but
the second operand is in a scalar bank. For example
  vmla.f64 d10, d1, d16   with length 2 stride 2
is equivalent to the pair of scalar operations
  vmla.f64 d10, d1, d16
  vmla.f64 d8, d3, d16
where the destination and first input register cycle through
their vector but the second input is scalar (d16). In the
old decoder the gen_vfp_F1_mul() operation uses cpu_F1{s,d}
as a temporary output for the multiply, which trashes the
second input operand. For the fully-scalar case (where we
never do a second iteration) and the fully-vector case
(where the loop loads the new second input operand) this
doesn't matter, but for the mixed scalar/vector case we
will end up using the wrong value for later loop iterations.
In the new code we use TCG temporaries and so avoid the bug.
This bug is present for all the multiply-accumulate insns
that operate on short vectors: VMLA, VMLS, VNMLA, VNMLS.
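
The effect is easy to model outside the translator. The snippet below is
plain C (not QEMU code) with the register stepping reduced to a simple
two-element loop: the "buggy" path reuses the scalar input's storage as
the multiply temporary, the way the old decoder reused cpu_F1d, while the
"fixed" path uses a separate temporary as the new TCG-temporary code does.

  #include <stdio.h>

  /* Model of "vd[i] += n[i] * m" with a scalar second operand. */
  static void vmla_vec(double *d, const double *n, double m,
                       int len, int use_temp)
  {
      double m_reg = m;                 /* models the scalar input register */
      for (int i = 0; i < len; i++) {
          if (use_temp) {
              double tmp = n[i] * m_reg;   /* separate temporary: correct */
              d[i] += tmp;
          } else {
              m_reg = n[i] * m_reg;        /* old code: trashes the scalar */
              d[i] += m_reg;
          }
      }
  }

  int main(void)
  {
      double d_ok[2]  = { 1.0, 1.0 };
      double d_bug[2] = { 1.0, 1.0 };
      double n[2]     = { 2.0, 3.0 };

      vmla_vec(d_ok,  n, 10.0, 2, 1);
      vmla_vec(d_bug, n, 10.0, 2, 0);

      printf("fixed: %g %g\n", d_ok[0], d_ok[1]);    /* 21 31 */
      printf("buggy: %g %g\n", d_bug[0], d_bug[1]);  /* 21 61 */
      return 0;
  }

With d = {1, 1}, n = {2, 3} and m = 10 the fixed path produces 21 and 31,
but the buggy path produces 21 and 61, because the second iteration
multiplies by the stale product from the first iteration instead of by 10.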

Note 2: the expression used to calculate the next register
number in the vector bank is not in fact correct; we leave
this behaviour unchanged from the old decoder and will
fix this bug later in the series.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-13 15:14:04 +01:00
alpha Include qemu-common.h exactly where needed 2019-06-12 13:20:20 +02:00
arm target/arm: Convert VFP VMLA to decodetree 2019-06-13 15:14:04 +01:00
cris Supply missing header guards 2019-06-12 13:20:21 +02:00
hppa Include qemu-common.h exactly where needed 2019-06-12 13:20:20 +02:00
i386 Miscellaneous patches for 2019-06-11 2019-06-12 13:50:02 +01:00
lm32 Include qemu-common.h exactly where needed 2019-06-12 13:20:20 +02:00
m68k Include qemu-common.h exactly where needed 2019-06-12 13:20:20 +02:00
microblaze Supply missing header guards 2019-06-12 13:20:21 +02:00
mips Include qemu-common.h exactly where needed 2019-06-12 13:20:20 +02:00
moxie Supply missing header guards 2019-06-12 13:20:21 +02:00
nios2 Include qemu-common.h exactly where needed 2019-06-12 13:20:20 +02:00
openrisc Include qemu-common.h exactly where needed 2019-06-12 13:20:20 +02:00
ppc ppc patch queue 2019-06-12 2019-06-12 14:43:47 +01:00
riscv Supply missing header guards 2019-06-12 13:20:21 +02:00
s390x Include qemu-common.h exactly where needed 2019-06-12 13:20:20 +02:00
sh4 Include qemu-common.h exactly where needed 2019-06-12 13:20:20 +02:00
sparc Include qemu-common.h exactly where needed 2019-06-12 13:20:20 +02:00
tilegx Normalize position of header guard 2019-06-12 13:20:20 +02:00
tricore Supply missing header guards 2019-06-12 13:20:21 +02:00
unicore32 Include qemu-common.h exactly where needed 2019-06-12 13:20:20 +02:00
xtensa Include qemu-common.h exactly where needed 2019-06-12 13:20:20 +02:00