Reduce ifdefs, share more code between paths, reduce the number of TCG
ops generated. Avoid re-computing the size of the operation across
gen_pop_T0 and gen_pop_update.
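Concretely, the call pattern at the pop sites becomes roughly the
following (illustrative sketch; the register destination is arbitrary):

    /* gen_pop_T0 reports the operand size it used, so gen_pop_update can
     * reuse it instead of deriving the size from dflag/ss32 a second time */
    TCGMemOp ot = gen_pop_T0(s);
    gen_op_mov_reg_v(ot, reg, cpu_T[0]);   /* consume the popped value */
    gen_pop_update(s, ot);                 /* adjust the stack by 1 << ot bytes */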
Add forgotten zero-extension in the TARGET_X86_64, !CODE64, ss32 case.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reduce ifdefs, share more code between paths, reduce the number of TCG
ops generated.
Add forgotten zero-extension in the TARGET_X86_64, !CODE64, ss32 case.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Unlike the addr32 case, there was no bug, but we can use the same
technique to reduce the number of TCG ops.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Changing the domain to TCGMemOp makes it easier to interoperate
with the rest of the translator.
We now only have one domain for size operands inside the translator,
which makes things less confusing all the way around. There are
still a number of helpers that continue to use the log2-1 domain.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Change the domain of the parameter and update all callers.
This lets us defer completely to gen_op_mov_reg_v.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Changing the domain to TCGMemOp makes it easier to interoperate
with the rest of the translator.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Change the domain of the parameter and update all callers.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
These functions used the aflags/dflags domain, which is log2-1
of the byte size. Confusingly, they used enumeration values
from the log2 domain.
Change the domain of the parameter and update all callers.
Since we're now in a common domain, defer the deposit/extend/mov
decision to gen_op_mov_reg_v.
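For reference, the two domains line up like this (a sketch of the
relationship, not code from the patch):

    /* old aflag/dflag domain (log2 of the byte size, minus one):
     *     0 = 16-bit, 1 = 32-bit, 2 = 64-bit
     * TCGMemOp MO_SIZE domain (log2 of the byte size):
     *     MO_8 = 0, MO_16 = 1, MO_32 = 2, MO_64 = 3
     * so a value in the old domain converts to the new one with a +1: */
    TCGMemOp ot = (TCGMemOp)(dflag + 1);   /* e.g. dflag 1 (32-bit) -> MO_32 */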
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The 'ot' variables (operand type?) hold the log2(byte size) of
the operand being manipulated. This is the same as the MO_SIZE
subset of the TCGMemOp. Indeed, we often pass 'ot' to the
tcg_gen_qemu_ld/st functions.
Changing the type from 'int' makes it easier to see which domain
the variable belongs to.
This does require adding some default cases to some switch statements,
to avoid the 'unhandled enumeration value' warning that would result
from the change of type.
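For example, a size switch on a TCGMemOp-typed 'ot' now looks roughly
like:

    switch (ot) {
    case MO_8:   /* ... byte case ... */   break;
    case MO_16:  /* ... word case ... */   break;
    case MO_32:  /* ... long case ... */   break;
    case MO_64:  /* ... quad case ... */   break;
    default:
        /* the remaining TCGMemOp values cannot occur here; the default
         * silences the 'unhandled enumeration value' warning */
        tcg_abort();
    }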
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Replace it with tcg_gen_ext16u_tl, and in two cases merge with a
previous move from cpu_regs.
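The merged form is roughly (register index arbitrary):

    /* instead of moving cpu_regs[reg] into cpu_T[0] and masking it with
     * 0xffff as a second op, fold both into a single zero-extension */
    tcg_gen_ext16u_tl(cpu_T[0], cpu_regs[reg]);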
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Replace it with tcg_gen_ext16u_tl. In four places we can combine that
with a previous move into cpu_T[0], and in one place we can infer that
the zero-extension has already happened via the previous load.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Propagate the definitions into all users. In two cases, this allows
us to share code between the 32-bit and 64-bit immediate moves.
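The propagated form is just the underlying TCG op, e.g.:

    /* tcg_gen_movi_tl takes a target_long, so on x86_64 the same line
     * covers both the 32-bit and the 64-bit immediate move */
    tcg_gen_movi_tl(cpu_T[0], imm);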
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Propagate the definitions into all users. The only time that
gen_op_movl_T1_imu was used, the input was of type 'unsigned',
so the replacement works identically.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Propagate the definition of gen_op_movl_T0_im to all users.
The function gen_op_movl_T0_imu was unused.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
For the known MO_32/MO_64 cases, we don't need to extend a 32-bit temp
into a 64-bit temp before storing into the hardware register.
We do need the extension for the MO_8/MO_16 cases, in order for the
deposit_tl operation to work, so leave those alone.
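For the MO_32 case the idea is roughly the following (the temp and
register names are assumptions about the surrounding translate.c code):

    /* a 32-bit write to an x86-64 GPR zero-extends, so the 32-bit temp can
     * be widened straight into the register, with no 64-bit scratch temp */
    tcg_gen_extu_i32_tl(cpu_regs[reg], cpu_tmp2_i32);
    /* MO_8/MO_16 still extend into a TCGv first, because tcg_gen_deposit_tl
     * needs a target_long-sized value to merge into cpu_regs[reg] */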
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
We can now use tcg_gen_qemu_st_i32 directly to avoid the extension.
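Roughly (operands illustrative; mem_index as used elsewhere in the
translator):

    /* store the 32-bit temp directly, rather than widening it into a
     * 64-bit temp just to feed the old store op */
    tcg_gen_qemu_st_i32(cpu_tmp2_i32, cpu_A0, s->mem_index, MO_LEUL);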
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
We can now use tcg_gen_qemu_ld_i32 directly to avoid the truncation.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
For the 16-bit and 32-bit cases, we don't need to truncate via
a temporary register.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The reg_ptr and offset_ptr outputs are universally unused.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Always perform a sign-extending load. In the extremely unlikely
case that we've used an 0x66 prefix, the extension to 64 bits is
unnecessary but not wrong; the store will still examine only 16 bits.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
We can use the MO_SIGN bit to tidy the reg-reg switch statement
as well as pass it on to gen_op_ld_v, eliminating one call.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
By inspection, we should obviously be storing T[1], not T[0].
This could only happen for x86_64 in 64-bit mode with a 0x66
prefix on a call insn -- i.e. never.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Too many places have the same test against OR_TMP0 to decide
whether to write back to memory. Hoist that test into a subroutine.
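The hoisted subroutine looks something like this (name and exact
signature are illustrative):

    /* write cpu_T[0] back either to memory at cpu_A0 or to general
     * register 'd', depending on the OR_TMP0 marker */
    static void gen_op_st_rm_T0_A0(DisasContext *s, int idx, int d)
    {
        if (d == OR_TMP0) {
            gen_op_st_v(s, idx, cpu_T[0], cpu_A0);   /* memory write-back */
        } else {
            gen_op_mov_reg_v(idx, d, cpu_T[0]);      /* register write-back */
        }
    }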
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Replace its users by gen_op_ld_v with the MO_SIGN bit set.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The MO_8/16/32/64 constants have the same encoding and meaning
as OT_BYTE/WORD/LONG/QUAD. Since we already rely on them being
identical for the qemu_ld/st helpers, standardize on the common names.
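The correspondence being relied on:

    /* OT_BYTE == MO_8  == 0      OT_WORD == MO_16 == 1
     * OT_LONG == MO_32 == 2      OT_QUAD == MO_64 == 3
     * so an operand size can be handed straight to the ld/st helpers, e.g. */
    gen_op_ld_v(s, ot, cpu_T[0], cpu_A0);   /* 'ot' doubles as the memop size */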
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Add support for FCVT between half, single and double precision.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This patch adds support for those instructions in the "Floating-point
data-processing (1 source)" group which are simple 32-bit-to-32-bit
or 64-bit-to-64-bit operations (i.e. everything except FCVT between
single/double/half precision).
We put the new round-to-int helpers in helper.c because they will
also be used by the new ARMv8 A32/T32 rounding instructions.
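A minimal sketch of such a helper (name illustrative; a
single-precision variant shown):

    float32 HELPER(rints)(float32 x, void *fpstp)
    {
        /* round to an integral value in floating-point format, using the
         * rounding mode currently set in *fpst */
        float_status *fpst = fpstp;
        return float32_round_to_int(x, fpst);
    }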
Signed-off-by: Alexander Graf <agraf@suse.de>
[WN: Commit message tweak, merged single and double precision patches,
updated to new infrastructure.]
Signed-off-by: Will Newton <will.newton@linaro.org>
[PMM: reworked decode, split FCVT out into their own patch]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Add support for the AArch64 floating-point <-> integer conversion
instructions to disas_fpintconv. In the process we can rearrange
and simplify the detection of unallocated encodings a little.
We also correct a typo in the instruction encoding diagram for this
instruction group: bit 21 is 1, not 0.
Signed-off-by: Will Newton <will.newton@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This patch adds emulation for the instruction group labeled
"Floating-point <-> fixed-point conversions" in the ARM ARM.
Namely this includes the instructions SCVTF, UCVTF, FCVTZS, FCVTZU
(scalar, fixed-point).
Signed-off-by: Alexander Graf <agraf@suse.de>
[WN: Commit message tweak, rebased, updated to new infrastructure.
Applied bug fixes from Michael Matz and Janne Grunau.]
Signed-off-by: Will Newton <will.newton@linaro.org>
[PMM: significant cleanup]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Define the full set of floating point to fixed point conversion
helpers required to support AArch64.
Signed-off-by: Will Newton <will.newton@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
The VFP fixed point conversion helpers first call float_scalbn and
then convert the result to an integer. This scalbn operation may
set floating point exception flags for:
* overflow & inexact (if it overflows to infinity)
* input denormal squashed to zero
* output denormal squashed to zero
Of these, we only care about the input-denormal flag, since
the output of the whole scale-and-convert operation will be
an integer (so squashed-output-denormal and overflow don't
apply). Suppress the others by saving the pre-scalbn exception
flags and only copying across a potential input-denormal flag.
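In code, the suppression looks roughly like this (the surrounding
helper is elided; the flag accessors are softfloat's):

    int old_flags = get_float_exception_flags(fpst);
    float64 scaled = float64_scalbn(x, shift, fpst);
    int new_flags = get_float_exception_flags(fpst);
    /* keep only a possible input-denormal flag raised by the scalbn; drop
     * any overflow/inexact/output-denormal it may have set */
    set_float_exception_flags(old_flags |
                              (new_flags & float_flag_input_denormal), fpst);
    /* ... 'scaled' is then converted to an integer as before ... */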
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
The VFP conversion helpers for A32 round to zero as this is the only
rounding mode supported. Rename these helpers to make it clear that
they round to zero and are not suitable for use in the AArch64 code.
Signed-off-by: Will Newton <will.newton@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Make the VFP_CONV_FIX helpers a little more flexible in
preparation for the A64 uses. This requires two changes:
* use the correct softfloat conversion function based on itype
rather than always the int32 one; this is possible now that
softfloat provides int16 versions and necessary for the
future conversion-to-int64 A64 variants. This also allows
us to drop the awkward 'sign' macro argument.
* split the 'fsz' argument which currently controls both
width of the input float type and width of the output
integer type into two; this will allow us to specify the
A64 64-bit-int-to-single conversion function, where the
two widths are different.
We can also drop the (itype##_t) cast now that softfloat
guarantees that all the itype##_to_float* functions take
an integer argument of exactly the correct type.
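One plausible shape for the int-to-float half, following the
description above (macro and parameter names are illustrative, not
necessarily the final ones):

    /* 'fsz' is the float width, 'isz' the integer argument width, and
     * 'itype' selects the softfloat conversion routine */
    #define VFP_CONV_FIX_ITOF(name, p, fsz, isz, itype)                     \
    float##fsz HELPER(vfp_##name##to##p)(uint##isz##_t x, uint32_t shift,   \
                                         void *fpstp)                       \
    {                                                                       \
        float_status *fpst = fpstp;                                         \
        float##fsz tmp = itype##_to_float##fsz(x, fpst);                    \
        return float##fsz##_scalbn(tmp, -(int)shift, fpst);                 \
    }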
Signed-off-by: Will Newton <will.newton@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
IEEE754-2008 specifies a new rounding mode:
"roundTiesToAway: the floating-point number nearest to the infinitely
precise result shall be delivered; if the two nearest floating-point
numbers bracketing an unrepresentable infinitely precise result are
equally near, the one with larger magnitude shall be delivered."
Implement this new mode (it is needed for ARM). The general principle
is that the required code is exactly like the ties-to-even code,
except that we do not need the "in case of exact tie, clear LSB to
round to even" step, because the rounding operation naturally causes
the exact tie to round up in magnitude.
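A self-contained illustration of the principle (not the softfloat code
itself), rounding away the low 'frac' bits of a positive significand:

    #include <stdbool.h>
    #include <stdint.h>

    /* round sig to the nearest multiple of 2^frac, returning sig / 2^frac */
    static uint64_t round_nearest(uint64_t sig, int frac, bool ties_to_even)
    {
        uint64_t half = UINT64_C(1) << (frac - 1);
        uint64_t round_bits = sig & ((UINT64_C(1) << frac) - 1);

        sig += half;                           /* nearest, ties rounded up */
        if (ties_to_even && round_bits == half) {
            sig &= ~(UINT64_C(1) << frac);     /* exact tie: clear LSB -> even */
        }
        /* for ties-to-away the clearing step is simply omitted: the +half
         * above already sends an exact tie up in magnitude */
        return sig >> frac;
    }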
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>