Propagate the definitions into all users. In two cases, this allows
us to share code between the 32-bit and 64-bit immediate moves.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Propagate the definitions into all users. The only time that
gen_op_movl_T1_imu was used, the input was type 'unsigned',
so the replacement works identically.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Propagate the definition of gen_op_movl_T0_im to all users.
The function gen_op_movl_T0_imu was unused.
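For reference, the definition being folded into its callers was simply
(shown here as a sketch of the old translate.c helper):
    static inline void gen_op_movl_T0_im(int32_t val)
    {
        tcg_gen_movi_tl(cpu_T[0], val);
    }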
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
For the known MO_32/MO_64 cases, we don't need to extend a 32-bit temp
into a 64-bit temp before storing into the hardware register.
We do need the extension for the MO_8/MO_16 cases, in order for the
deposit_tl operation to work, so leave those alone.
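A rough sketch of the resulting register write-back, with the MO_32/MO_64
cases writing the hardware register directly (names and exact TCG calls
are illustrative, not a verbatim copy of translate.c):
    static void gen_op_mov_reg_v(TCGMemOp ot, int reg, TCGv t0)
    {
        switch (ot) {
        case MO_8:
            /* (high-byte registers omitted for brevity) */
            tcg_gen_deposit_tl(cpu_regs[reg], cpu_regs[reg], t0, 0, 8);
            break;
        case MO_16:
            tcg_gen_deposit_tl(cpu_regs[reg], cpu_regs[reg], t0, 0, 16);
            break;
        case MO_32:
            /* Zero-extend straight into the register; no intermediate
               64-bit temporary is needed.  */
            tcg_gen_ext32u_tl(cpu_regs[reg], t0);
            break;
        case MO_64:
            tcg_gen_mov_tl(cpu_regs[reg], t0);
            break;
        default:
            tcg_abort();
        }
    }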
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
We can now use tcg_gen_qemu_st_i32 directly to avoid the extension.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
We can now use tcg_gen_qemu_ld_i32 directly to avoid the truncation.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
For the 16 and 32-bit cases, we don't need to truncate via
a temporary register.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The reg_ptr and offset_ptr outputs are universally unused.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Always perform a sign-extending load. In the extremely unlikely
case that we've used an 0x66 prefix, the extension to 64 bits is
unnecessary but not wrong; the store will still examine only 16 bits.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
We can use the MO_SIGN bit to tidy the reg-reg switch statement
as well as pass it on to gen_op_ld_v, eliminating one call.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
By inspection, obviously we should be storing T[1] not T[0].
This could only happen for x86_64 in 64-bit mode with an 0x66
prefix to a call insn -- i.e. never.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Too many places have the same test vs OR_TMP0 to indicate
a write back to memory. Hoist that to a subroutine.
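Roughly, the hoisted helper has this shape (a sketch; names follow the
surrounding code but may not match the final patch exactly):
    /* Write back T0 either to a register or, when d == OR_TMP0, to the
       memory location addressed by A0.  */
    static void gen_op_st_rm_T0_A0(DisasContext *s, int idx, int d)
    {
        if (d == OR_TMP0) {
            gen_op_st_v(s, idx, cpu_T[0], cpu_A0);
        } else {
            gen_op_mov_reg_v(idx, d, cpu_T[0]);
        }
    }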
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Replace its users by gen_op_ld_v with the MO_SIGN bit set.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The MO_8/16/32/64 constants have the same encoding and meaning
as the OT_BYTE/WORD/LONG/QUAD constants. Since we already rely on
them being the same for the qemu_ld/st helpers, standardize on the
common names.
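For reference, the identity being relied on is (values as encoded in
tcg's TCGMemOp):
    OT_BYTE == MO_8  == 0
    OT_WORD == MO_16 == 1
    OT_LONG == MO_32 == 2
    OT_QUAD == MO_64 == 3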
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Add support for FCVT between half, single and double precision.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This patch adds support for those instructions in the "Floating-point
data-processing (1 source)" group which are simple 32-bit-to-32-bit
or 64-bit-to-64-bit operations (i.e. everything except FCVT between
single/double/half precision).
We put the new round-to-int helpers in helper.c because they will
also be used by the new ARMv8 A32/T32 rounding instructions.
Signed-off-by: Alexander Graf <agraf@suse.de>
[WN: Commit message tweak, merged single and double precision patches,
updated to new infrastructure.]
Signed-off-by: Will Newton <will.newton@linaro.org>
[PMM: reworked decode, split FCVT out into their own patch]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Add support for the AArch64 floating-point <-> integer conversion
instructions to disas_fpintconv. In the process we can rearrange
and simplify the detection of unallocated encodings a little.
We also correct a typo in the instruction encoding diagram for this
instruction group: bit 21 is 1, not 0.
Signed-off-by: Will Newton <will.newton@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This patch adds emulation for the instruction group labeled
"Floating-point <-> fixed-point conversions" in the ARM ARM.
Namely this includes the instructions SCVTF, UCVTF, FCVTZS, FCVTZU
(scalar, fixed-point).
Signed-off-by: Alexander Graf <agraf@suse.de>
[WN: Commit message tweak, rebased, updated to new infrastructure.
Applied bug fixes from Michael Matz and Janne Grunau.]
Signed-off-by: Will Newton <will.newton@linaro.org>
[PMM: significant cleanup]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Define the full set of floating point to fixed point conversion
helpers required to support AArch64.
Signed-off-by: Will Newton <will.newton@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
The VFP fixed point conversion helpers first call float_scalbn and
then convert the result to an integer. This scalbn operation may
set floating point exception flags for:
* overflow & inexact (if it overflows to infinity)
* input denormal squashed to zero
* output denormal squashed to zero
Of these, we only care about the input-denormal flag, since
the output of the whole scale-and-convert operation will be
an integer (so squashed-output-denormal and overflow don't
apply). Suppress the others by saving the pre-scalbn exception
flags and only copying across a potential input-denormal flag.
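A minimal sketch of the idea using the generic softfloat flag accessors
(the wrapper name and exact conversion call are illustrative):
    int64_t fixed_from_float64(float64 x, int shift, float_status *fpst)
    {
        int old_flags = get_float_exception_flags(fpst);
        float64 scaled = float64_scalbn(x, shift, fpst);
        int new_flags = get_float_exception_flags(fpst);
        /* Keep only a possible input-denormal flag from the scalbn step;
           overflow/inexact/output-denormal cannot matter for the final
           integer result.  */
        set_float_exception_flags(old_flags |
                                  (new_flags & float_flag_input_denormal),
                                  fpst);
        return float64_to_int64_round_to_zero(scaled, fpst);
    }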
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
The VFP conversion helpers for A32 round to zero as this is the only
rounding mode supported. Rename these helpers to make it clear that
they round to zero and are not suitable for use in the AArch64 code.
Signed-off-by: Will Newton <will.newton@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Make the VFP_CONV_FIX helpers a little more flexible in
preparation for the A64 uses. This requires two changes:
* use the correct softfloat conversion function based on itype
rather than always the int32 one; this is possible now that
softfloat provides int16 versions and necessary for the
future conversion-to-int64 A64 variants. This also allows
us to drop the awkward 'sign' macro argument.
* split the 'fsz' argument which currently controls both
width of the input float type and width of the output
integer type into two; this will allow us to specify the
A64 64-bit-int-to-single conversion function, where the
two widths are different.
We can also drop the (itype##_t) cast now that softfloat
guarantees that all the itype##_to_float* functions take
an integer argument of exactly the correct type.
Signed-off-by: Will Newton <will.newton@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
IEEE754-2008 specifies a new rounding mode:
"roundTiesToAway: the floating-point number nearest to the infinitely
precise result shall be delivered; if the two nearest floating-point
numbers bracketing an unrepresentable infinitely precise result are
equally near, the one with larger magnitude shall be delivered."
Implement this new mode (it is needed for ARM). The general principle
is that the required code is exactly like the ties-to-even code,
except that we do not need to do the "in case of exact tie clear LSB
to round-to-even", because the rounding operation naturally causes
the exact tie to round up in magnitude.
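As a toy illustration of the difference between the two tie-breaking
rules (plain C integers, not the softfloat code itself):
    #include <stdint.h>
    #include <stdio.h>
    /* Round sig / 2^shift to an integer under the given tie-break rule. */
    static uint64_t round_discard(uint64_t sig, int shift, int ties_away)
    {
        uint64_t round_bits = sig & ((1ULL << shift) - 1);
        uint64_t half = 1ULL << (shift - 1);
        uint64_t result = (sig + half) >> shift;
        /* On an exact tie, ties-to-even clears the LSB; ties-to-away
           keeps the rounded-up (larger magnitude) result.  */
        if (!ties_away && round_bits == half) {
            result &= ~1ULL;
        }
        return result;
    }
    int main(void)
    {
        /* 0x48 / 16 == 4.5, an exact tie.  */
        printf("%llu\n", (unsigned long long)round_discard(0x48, 4, 0)); /* 4 */
        printf("%llu\n", (unsigned long long)round_discard(0x48, 4, 1)); /* 5 */
        return 0;
    }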
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Refactor the code in various functions which calculates rounding
increments given the current rounding mode, so that instead of a
set of nested if statements we have a simple switch statement.
This will give us a clean place to add the case for the new
tiesAway rounding mode.
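The reworked selection looks roughly like this for the float32 case
(seven discarded bits, so a half-way increment of 0x40), including the
slot where the new tiesAway case drops in (a sketch, not the verbatim
code):
    /* Assumes softfloat's float_round_* rounding-mode constants. */
    static int round_increment32(int rounding_mode, int sign)
    {
        switch (rounding_mode) {
        case float_round_nearest_even:
        case float_round_ties_away:
            return 0x40;
        case float_round_to_zero:
            return 0;
        case float_round_up:
            return sign ? 0 : 0x7f;
        case float_round_down:
            return sign ? 0x7f : 0;
        default:
            abort();
        }
    }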
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Add the conversion functions float16_to_float64() and
float64_to_float16(), which will be needed for the ARM
A64 instruction set.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
In preparation for adding conversions between float16 and float64,
factor out code currently done inline in the float16<=>float32
conversion functions into functions RoundAndPackFloat16 and
NormalizeFloat16Subnormal along the lines of the existing versions
for the other float types.
Note that we change the handling of zExp from the inline code
to match the API of the other RoundAndPackFloat functions; however
we leave the positioning of the binary point between bits 22 and 23
rather than shifting it up to the high end of the word.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Tidy up the get/set accessors for the fp state to add missing ones
and make them all inline in softfloat.h rather than some inline and
some not.
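For example, the rounding-mode pair ends up looking like this (sketch):
    static inline void set_float_rounding_mode(int val, float_status *status)
    {
        status->float_rounding_mode = val;
    }
    static inline int get_float_rounding_mode(float_status *status)
    {
        return status->float_rounding_mode;
    }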
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
The float64_to_uint32_round_to_zero routine is incorrect.
For example, the following test pattern:
425F81378DC0CD1F / 0x1.f81378dc0cd1fp+38
will erroneously set the inexact flag.
This patch re-implements the routine to use the float64_to_uint64_round_to_zero
routine. If saturation occurs we ignore any flags set by the
conversion function and raise only Invalid.
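A sketch of the reworked routine (the flag handling is the important
part; treat the exact signature as illustrative):
    uint32_t f64_to_u32_rtz(float64 a, float_status *status)
    {
        int old_flags = get_float_exception_flags(status);
        uint64_t v = float64_to_uint64_round_to_zero(a, status);
        if (v > 0xffffffffULL) {
            /* Saturate: drop whatever the 64-bit conversion raised and
               report only Invalid.  */
            set_float_exception_flags(old_flags, status);
            float_raise(float_flag_invalid, status);
            return 0xffffffff;
        }
        return v;
    }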
This contribution can be licensed under either the softfloat-2a or -2b
license.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Message-id: 1387397961-4894-6-git-send-email-tommusta@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
The float64_to_uint32 routine has several flaws:
- for numbers between 2**32 and 2**64, the inexact exception flag
may get incorrectly set. In this case, only the invalid flag
should be set.
test pattern: 425F81378DC0CD1F / 0x1.f81378dc0cd1fp+38
- for numbers between 2**63 and 2**64, incorrect results may
be produced:
test pattern: 43EAAF73F1F0B8BD / 0x1.aaf73f1f0b8bdp+63
This patch re-implements float64_to_uint32 to re-use the
float64_to_uint64 routine (instead of float64_to_int64). For the
saturation case, we ignore any flags which the conversion routine
has set and raise only the invalid flag.
This contribution can be licensed under either the softfloat-2a or -2b
license.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Message-id: 1387397961-4894-5-git-send-email-tommusta@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
The float64_to_uint64_round_to_zero routine is incorrect.
For example, the following test pattern:
46697351FF4AEC29 / 0x1.97351ff4aec29p+103
currently produces 8000000000000000 instead of FFFFFFFFFFFFFFFF.
This patch re-implements the routine to temporarily force the
rounding mode and use the float64_to_uint64 routine.
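Roughly (a sketch, not the verbatim patch):
    uint64_t f64_to_u64_rtz(float64 a, float_status *status)
    {
        int old_mode = get_float_rounding_mode(status);
        uint64_t v;
        set_float_rounding_mode(float_round_to_zero, status);
        v = float64_to_uint64(a, status);
        set_float_rounding_mode(old_mode, status);
        return v;
    }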
This contribution can be licensed under either the softfloat-2a or -2b
license.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Message-id: 1387397961-4894-4-git-send-email-tommusta@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This patch adds the float32_to_uint64() routine, which converts a
32-bit floating point number to an unsigned 64 bit number.
This contribution can be licensed under either the softfloat-2a or -2b
license.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: removed harmless but silly int64_t casts]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
If the input to float*_scalbn() is denormal then it represents
a number 0.[mantissabits] * 2^(1-exponentbias) (and the actual
exponent field is all zeroes). This means that when we convert
it to our unpacked encoding the unpacked exponent must be one
greater than for a normal number, which represents
1.[mantissabits] * 2^(e-exponentbias) for an exponent field e.
This meant we were giving answers too small by a factor of 2 for
all denormal inputs.
Note that the float-to-int routines also have this behaviour
of not adjusting the exponent for denormals; however there it is
harmless because denormals will all convert to integer zero anyway.
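A minimal sketch of the unpacking step with the missing adjustment
(illustrative, not the exact patch):
    static void unpack_f64(uint64_t bits, int *exp, uint64_t *frac)
    {
        *exp  = (bits >> 52) & 0x7ff;
        *frac = bits & 0x000fffffffffffffULL;
        if (*exp != 0) {
            *frac |= 0x0010000000000000ULL;   /* implicit leading 1 */
        } else if (*frac != 0) {
            /* Denormal: the value is 0.[frac] * 2^(1-bias), so the
               unpacked exponent must be one greater than the raw field. */
            (*exp)++;
        }
    }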
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
We implement a number of float-to-integer conversions using conversion
to an integer type with a wider range and then a check against the
narrower range we are actually converting to. If we find the result to
be out of range we correctly raise the Invalid exception, but we must
also suppress other exceptions which might have been raised by the
conversion function we called.
This won't throw away exceptions we should have preserved, because for
the 'core' exception flags the IEEE spec mandates that the only valid
combinations of exception that can be raised by a single operation are
Inexact + Overflow and Inexact + Underflow. For the non-IEEE softfloat
flag for input denormals, we can guarantee that that flag won't have
been set for out of range float-to-int conversions because a squashed
denormal by definition goes to plus or minus zero, which is always in
range after conversion to integer zero.
This bug has been fixed for some of the float-to-int conversion routines
by previous patches; fix it for the remaining functions as well, so
that they all restore the pre-conversion status flags prior to raising
Invalid.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
The comment preceding the float64_to_uint64 routine suggests that
the implementation is broken. And this is, indeed, the case.
This patch properly implements the conversion of a 64-bit floating
point number to an unsigned 64-bit integer.
This contribution can be licensed under either the softfloat-2a or -2b
license.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Currently the int-to-float functions take types which are specified
as "at least X bits wide", rather than "exactly X bits wide". This is
confusing and unhelpful since it means that the callers have to include
an explicit cast to [u]intXX_t to ensure the correct behaviour. Fix
them all to take the exactly-X-bits-wide types instead.
Note that this doesn't change behaviour at all since at the moment
we happen to define the 'int32' and 'uint32' types as exactly 32 bits
wide, and the 'int64' and 'uint64' types as exactly 64 bits wide.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Add the 16 bit integer to float conversion routines. These can be
trivially implemented in terms of the int32_to_float* routines, but
providing them makes our API more symmetrical and can simplify callers.
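For example (a sketch; the signature style follows the other softfloat
conversion helpers):
    static inline float32 int16_to_float32(int16_t v, float_status *status)
    {
        return int32_to_float32(v, status);
    }
    static inline float32 uint16_to_float32(uint16_t v, float_status *status)
    {
        return uint32_to_float32(v, status);
    }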
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
ARMv8 requires support for converting 32-bit and 64-bit floating point
values to signed and unsigned 16-bit integers.
Signed-off-by: Will Newton <will.newton@linaro.org>
[PMM: updated not to incorrectly set Inexact for Invalid inputs]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Our float32 to float16 conversion routine was generating the correct
numerical answers, but not always setting the right set of exception
flags. Fix this, mostly by rearranging the code to more closely
resemble RoundAndPackFloat*, and in particular:
* non-IEEE halfprec always raises Invalid for input NaNs
* we need to check for the overflow case before underflow
* we weren't getting the tininess-detected-after-rounding
case correct (somewhat academic since only ARM uses halfprec
and it is always tininess-detected-before-rounding)
* non-IEEE halfprec overflow raises only Invalid, not
Invalid + Inexact
* we weren't setting Inexact when we should
Also add some clarifying comments about what the code is doing.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
To make the code slightly cleaner to look at and make the save/restore
code easier to understand, introduce this function to set the priority of
interrupts.
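Roughly, following the GIC's split between banked per-CPU and shared
priority arrays (a sketch; details may differ from the actual patch):
    static void gic_set_priority(GICState *s, int cpu, int irq, uint8_t val)
    {
        if (irq < GIC_INTERNAL) {
            s->priority1[irq][cpu] = val;
        } else {
            s->priority2[irq - GIC_INTERNAL] = val;
        }
    }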
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Message-id: 1387606179-22709-3-git-send-email-christoffer.dall@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
TRIGGER can really mean anything (e.g. was it triggered, is it
level-triggered, is it edge-triggered, etc.). Rename to EDGE_TRIGGER to
make the code comprehensible without looking up the data structure.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Message-id: 1387606179-22709-2-git-send-email-christoffer.dall@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
commit 5ce4f35781
"target-arm: A64: add set_pc cpu method"
introduces an array aarch64_cpus which is zero
size if this code is built without CONFIG_USER_ONLY.
In particular an attempt to iterate over this array produces a warning
under gcc 4.8.2:
CC aarch64-softmmu/target-arm/cpu64.o
/scm/qemu/target-arm/cpu64.c: In function ‘aarch64_cpu_register_types’:
/scm/qemu/target-arm/cpu64.c:124:5: error: comparison of unsigned
expression < 0 is always false [-Werror=type-limits]
for (i = 0; i < ARRAY_SIZE(aarch64_cpus); i++) {
^
cc1: all warnings being treated as errors
This is the result of ARRAY_SIZE being an unsigned type,
causing "i" to be promoted to unsigned int as well.
As zero size arrays are a gcc extension, it seems
cleanest to add a dummy element with NULL name,
and test for it during registration.
We'll be able to drop this when we add more CPUs.
Cc: Alexander Graf <agraf@suse.de>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20131223145216.GA22663@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Don't conditionalise GEM instantiation on networking attachments. The
device should always be present even if not attached to a network.
This allows for probing of the device by expectant guests (such as
OS's). This is needed because sysbus (or AXI in Xilinx's real hw case)
is not self-identifying, so the guest has no dynamic way of detecting
device absence.
This also allows for testing of the GEM in loopback mode with -net none.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 55649779a68ee3ff54b24c339b6fdbdccd1f0ed7.1388800598.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>