Implement the scalar three-different instruction group; it
contains only three instructions.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Implement the SIMD scalar indexed instructions. The encoding
here is nearly identical to the vector indexed grouping, so
we combine the two.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Implement the 'long' operations in the vector x indexed
element category.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Implement all the SIMD vector x indexed element instructions
in the subcategory which are not 'long' ops.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Use libvixl to implement disassembly output in debug
logs for A64, for use with both AArch64 hosts and targets.
Signed-off-by: Claudio Fontana <claudio.fontana@linaro.org>
[PMM:
* added support for target disassembly
* switched to custom QEMUDisassembler so the output format
matches what QEMU expects
* make sure we correctly fall back to "just print hex"
if we didn't build the AArch64 disassembler because of
lack of a C++ compiler
* rename from 'aarch64' to 'arm-a64' because this is a
disassembler for the A64 instruction set
* merge aarch64.c and aarch64-cxx.cc into one C++ file
* simplify the aarch64.c<->aarch64-cxx.cc interface]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add the FNEG and FABS instructions in the SIMD 2-reg-misc group.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Add the byte-reverse operations REV64, REV32 and REV16 from the
two-reg-misc group.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Add the narrowing integer instructions in the 2-reg-misc class.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Implement the 2-reg-misc CNT, NOT and RBIT instructions.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Implement the simple 2-reg-misc operations we can share
with the scalar 2-reg-misc code. (SUQADD, USQADD, SQABS and
SQNEG also fall into this category, but aren't implemented in
the scalar 2-reg-misc case yet either.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Add a skeleton decode for the SIMD 2-reg misc group.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Implement the simple 64-bit integer operations from the SIMD
scalar 2-reg-misc group (C3.6.12): the comparisons against
zero, plus ABS and NEG.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Implement the instructions in the scalar pairwise group (C3.6.8).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Implement the pairwise integer operations in the 3-reg-same SIMD group:
ADDP, SMAXP, SMINP, UMAXP and UMINP.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
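A worked illustration of the pairwise pattern used by these instructions
(a minimal C sketch under hypothetical names, not the translator code):
the lower half of the result is formed from adjacent pairs of the first
source, the upper half from adjacent pairs of the second source.

    #include <stdint.h>

    /* ADDP on four 32-bit lanes per operand; SMAXP/SMINP/UMAXP/UMINP
     * follow the same pairing pattern with max/min instead of add. */
    static void addp_4s(uint32_t d[4], const uint32_t n[4], const uint32_t m[4])
    {
        uint32_t tmp[4];

        tmp[0] = n[0] + n[1];
        tmp[1] = n[2] + n[3];
        tmp[2] = m[0] + m[1];
        tmp[3] = m[2] + m[3];

        for (int i = 0; i < 4; i++) {
            d[i] = tmp[i];   /* copy via tmp so d may alias n or m */
        }
    }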
Implement the SIMD 3-reg-same instructions where the size == 3 case
is reserved: SHADD, UHADD, SRHADD, URHADD, SHSUB, UHSUB, SMAX,
UMAX, SMIN, UMIN, SABD, UABD, SABA, UABA, MLA, MLS, MUL, PMUL,
SQRDMULH, SQDMULH. (None of these have scalar-3-same versions.)
This completes the non-pairwise integer instructions in this category.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Implement the SIMD 3-reg-same instructions SQADD, UQADD,
SQSUB, UQSUB, SSHL, USHL, SQSHL, UQSHL, SRSHL, URSHL,
SQRSHL, UQRSHL; these are all simple calls to existing
Neon helpers. We also enable SSHL, USHL, SRSHL and URSHL
for the 3-reg-same-scalar category (but not the others,
because they can have non-size-64 operands and the
scalar_3reg_same function doesn't support that yet).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This implements a subset of the AdvSIMD shift operations (namely all
the non-saturating, non-narrowing ones). The actual shift generation
code is common to both the scalar and vector cases, but is wrapped
with either vector element iteration or FP reg access.
The rounding operations need to take special care to correctly reflect
the effect on the high bits of adding the rounding constant, as the
intermediate results do not truncate.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
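The rounding subtlety mentioned in the entry above can be sketched in a
few lines of C (illustrative only, using a hypothetical helper name;
this is not the QEMU implementation): adding the rounding constant to
the source before shifting can carry out of 64 bits, so the rounding
bit is folded in after the shift, which matches the untruncated
intermediate result.

    #include <stdint.h>

    /* Unsigned rounding shift right by 'shift' in the range 1..64. */
    static uint64_t urshr64(uint64_t src, unsigned shift)
    {
        uint64_t round_bit = (src >> (shift - 1)) & 1;

        if (shift == 64) {
            /* Everything but the rounding bit is shifted out; adding
             * 1 << 63 to src before shifting would overflow uint64_t. */
            return round_bit;
        }
        return (src >> shift) + round_bit;
    }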
Implement a simple subset of the SIMD 3-same floating point
operations. This includes a common helper function used for both
scalar and vector ops; FABD is the only currently implemented
shared op.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Add some of the integer operations in the SIMD 3-same group:
specifically, the comparisons, addition and subtraction.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Add support for the logical operations (ORR, AND, BIC, ORN, EOR, BSL,
BIT and BIF) from the SIMD 3 register same group (C3.6.16).
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
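For reference, the three bit-select operations above differ only in
which operand supplies the selector mask. A minimal per-64-bit-chunk
sketch of their semantics (illustrative only, not the translator code):

    #include <stdint.h>

    /* BSL: the destination provides the selector. */
    static uint64_t bsl64(uint64_t rd, uint64_t rn, uint64_t rm)
    {
        return (rn & rd) | (rm & ~rd);
    }

    /* BIT: insert bits from rn where rm is 1. */
    static uint64_t bit64(uint64_t rd, uint64_t rn, uint64_t rm)
    {
        return (rn & rm) | (rd & ~rm);
    }

    /* BIF: insert bits from rn where rm is 0. */
    static uint64_t bif64(uint64_t rd, uint64_t rn, uint64_t rm)
    {
        return (rn & ~rm) | (rd & rm);
    }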
Add top level decode for the A64 SIMD three regs same group
(C3.6.16), splitting it into the pairwise, logical, float and
integer subgroups.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Implement the add, sub and compare ops from the SIMD "scalar three same"
group.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Implement the absolute-difference instructions in the SIMD
three-different group: SABAL, SABAL2, UABAL, UABAL2, SABDL,
SABDL2, UABDL, UABDL2.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Add support for the multiply-accumulate instructions from the
SIMD three-different instructions group (C3.6.15):
* skeleton decode of unallocated encodings and split of
the group into its three sub-parts
* framework for handling the 64x64->128 widening subpart
* implementation of the multiply-accumulate instructions
SMLAL, SMLAL2, UMLAL, UMLAL2, SMLSL, SMLSL2, UMLSL, UMLSL2,
UMULL, UMULL2, SMULL, SMULL2
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
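To illustrate the widening pattern these instructions share (a small C
sketch of one SMLAL-style lane operation, not the actual TCG code):
each source element is widened to twice its size before the multiply,
and the product is accumulated into the double-width destination
element.

    #include <stdint.h>

    /* SMLAL on 32-bit -> 64-bit lanes; the "2" forms operate on the
     * upper half of the source vectors instead of the lower half. */
    static void smlal_2d(int64_t d[2], const int32_t n[2], const int32_t m[2])
    {
        for (int i = 0; i < 2; i++) {
            d[i] += (int64_t)n[i] * (int64_t)m[i];
        }
    }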
This function will be needed for AArch32 ARMv8 support, so move it to
helper.c where it can be used by both targets. This also moves the code
out of line, but as it is quite a large function that should not have a
significant performance impact.
Signed-off-by: Will Newton <will.newton@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add support for the SIMD scalar copy instruction group (C3.6.7),
which consists of the single instruction DUP (element, scalar).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This patch adds support for the AdvSIMD modified immediate group
(C3.6.6) with all its suboperations (MOVI, ORR, FMOV, MVNI, BIC).
Signed-off-by: Alexander Graf <agraf@suse.de>
[AJB: new decode struct, minor bug fixes, optimisation]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This adds support for all the AdvSIMD vector copy operations
(ARM ARM C3.6.5).
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Add support for the SIMD "across lanes" instruction group (C3.6.4).
Signed-off-by: Michael Matz <matz@suse.de>
[PMM: Updated to current codebase, added fp min/max ops,
added unallocated encoding checks]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Add support for the SIMD ZIP/UZIP/TRN instruction group
(C3.6.3).
Signed-off-by: Michael Matz <matz@suse.de>
[PMM: use new do_vec_get/set etc functions and generally update to new
codebase standards; refactor to pull per-element loop outside switch]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Add support for the SIMD TBL/TBX instructions (group C3.6.2).
Signed-off-by: Michael Matz <matz@suse.de>
[PMM: rewritten to do more of the decode in translate-a64.c,
and to do only one 64 bit pass at a time in the helper]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Add support for the SIMD EXT instruction (the only one in its
group, C3.6.1).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Add decode skeleton and function placeholders for all the SIMD data
processing instructions. Due to the complexity of this part of the
table, the normal extract-and-switch approach gets very messy very
quickly, so we use a simple data-driven pattern-and-mask approach.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
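A data-driven pattern-and-mask decoder of the kind described above
might look roughly like this (an illustrative sketch only; the names,
fields and pattern/mask values are placeholders rather than the actual
QEMU tables):

    #include <stdint.h>

    typedef void disas_fn(uint32_t insn);

    /* An instruction belongs to an entry when (insn & mask) == pattern. */
    typedef struct {
        uint32_t pattern;
        uint32_t mask;
        disas_fn *disas;
    } decode_entry;

    static void disas_simd_ext(uint32_t insn) { /* decode one subgroup */ }
    static void disas_simd_tb(uint32_t insn) { /* decode another subgroup */ }

    static const decode_entry decode_table[] = {
        /* pattern,   mask,       handler (placeholder values) */
        { 0x2e000000, 0xbf3e0400, disas_simd_ext },
        { 0x0e000000, 0xbf208c00, disas_simd_tb },
        { 0x00000000, 0x00000000, NULL } /* end of table */
    };

    static void disas_data_proc_simd(uint32_t insn)
    {
        for (const decode_entry *e = decode_table; e->disas; e++) {
            if ((insn & e->mask) == e->pattern) {
                e->disas(insn);
                return;
            }
        }
        /* no match: treat as an unallocated encoding */
    }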
Implement the SIMD ld/st single structure instructions.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This adds support for the SIMD load/store
multiple category of instructions.
This also brings in a couple of helper functions for manipulating
sections of the SIMD registers:
* do_vec_get - fetch value from a slice of a vector register
* do_vec_set - set a slice of a vector register
which use vec_reg_offset for consistent processing of offsets in an
endian-aware manner. There are also additional helpers:
* do_vec_ld - load value into SIMD
* do_vec_st - store value from SIMD
which load or store a slice of a vector register to memory.
These don't zero-extend the way the FP variants do.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Add support for FCVT between half, single and double precision.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This patch adds support for those instructions in the "Floating-point
data-processing (1 source)" group which are simple 32-bit-to-32-bit
or 64-bit-to-64-bit operations (i.e. everything except FCVT between
single/double/half precision).
We put the new round-to-int helpers in helper.c because they will
also be used by the new ARMv8 A32/T32 rounding instructions.
Signed-off-by: Alexander Graf <agraf@suse.de>
[WN: Commit message tweak, merged single and double precision patches,
updated to new infrastructure.]
Signed-off-by: Will Newton <will.newton@linaro.org>
[PMM: reworked decode, split FCVT out into their own patch]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Add support for the AArch64 floating-point <-> integer conversion
instructions to disas_fpintconv. In the process we can rearrange
and simplify the detection of unallocated encodings a little.
We also correct a typo in the instruction encoding diagram for this
instruction group: bit 21 is 1, not 0.
Signed-off-by: Will Newton <will.newton@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This patch adds emulation for the instruction group labeled
"Floating-point <-> fixed-point conversions" in the ARM ARM.
Namely this includes the instructions SCVTF, UCVTF, FCVTZS, FCVTZU
(scalar, fixed-point).
Signed-off-by: Alexander Graf <agraf@suse.de>
[WN: Commit message tweak, rebased, updated to new infrastructure.
Applied bug fixes from Michael Matz and Janne Grunau.]
Signed-off-by: Will Newton <will.newton@linaro.org>
[PMM: significant cleanup]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
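As a worked illustration of the arithmetic these conversions perform
(a sketch under the usual definitions, not the emulation code; rounding
mode handling and saturation are omitted): SCVTF with N fractional bits
divides the fixed-point integer by 2^N, and FCVTZS multiplies by 2^N
and truncates toward zero.

    #include <stdint.h>

    /* SCVTF (scalar, fixed-point): signed fixed-point -> double,
     * assuming 0 < fbits < 64 for simplicity. */
    static double scvtf_fixed(int64_t value, unsigned fbits)
    {
        return (double)value / (double)(1ULL << fbits);
    }

    /* FCVTZS (scalar, fixed-point): double -> signed fixed-point,
     * rounding toward zero; real hardware also saturates on overflow. */
    static int64_t fcvtzs_fixed(double value, unsigned fbits)
    {
        return (int64_t)(value * (double)(1ULL << fbits));
    }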
This adds decoding support for C3.6.24 FP conditional select.
Signed-off-by: Claudio Fontana <claudio.fontana@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This adds decoding support for C3.6.23 FP conditional compare.
Signed-off-by: Claudio Fontana <claudio.fontana@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Add decoding support for C3.6.22 Floating-point compare.
Signed-off-by: Claudio Fontana <claudio.fontana@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This patch adds emulation for the FMOV instruction working on scalars
with an immediate payload.
Signed-off-by: Alexander Graf <agraf@suse.de>
[WN: Commit message tweak, rebase and use new infrastructure.]
Signed-off-by: Will Newton <will.newton@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This patch adds emulation for the "Floating-point data-processing (3 source)"
group of instructions.
Signed-off-by: Alexander Graf <agraf@suse.de>
[WN: Commit message tweak, merged single and double precision patches.
Implement using muladd as suggested by Richard Henderson.]
Signed-off-by: Will Newton <will.newton@linaro.org>
[PMM: pull field decode up a level, use register accessors]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This patch adds emulation for the "Floating-point data-processing (2 source)"
group of instructions.
Signed-off-by: Alexander Graf <agraf@suse.de>
[WN: Commit message tweak, merge single and double precision patches. Rebase
and update to new infrastructure. Incorporate FMIN/FMAX support patch by
Michael Matz.]
Signed-off-by: Will Newton <will.newton@linaro.org>
[PMM:
* added convenience accessors for FP s and d regs
* pulled the field decode and opcode validity check up a level]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
The A64 128-bit vector registers are stored as a pair of
uint64_t values in the register array. This means that if
we're directly loading or storing a value of size less than
64 bits we must adjust the offset appropriately to account
for whether the host is big-endian or not. Provide utility
functions to abstract away the offsetof() calculations for
the FP registers.
For do_fp_st() we can sidestep most of the issues for 64-bit
and smaller reg-to-mem transfers by always doing a 64-bit
load from the register and writing just the piece we need
to memory.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
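A minimal sketch of the offset adjustment described above (with a
hypothetical helper name; the real code wraps this in offsetof()
calculations into the CPU state structure): on a big-endian host the
byte offset of a sub-64-bit element has to be flipped within its
containing uint64_t.

    #include <stddef.h>

    /* Byte offset of element 'elem' of size 'elem_size' (1, 2, 4 or 8
     * bytes) within a 128-bit register stored as two host uint64_t. */
    static size_t vec_elem_offset(size_t elem, size_t elem_size,
                                  int host_big_endian)
    {
        size_t offs = elem * elem_size;      /* little-endian layout */

        if (host_big_endian && elem_size < 8) {
            /* Reverse the element order within each 8-byte unit. */
            offs ^= 8 - elem_size;
        }
        return offs;
    }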
When dumping the current CPU state, we can also get a request
to dump the FPU state along with the CPU's integer state.
Add support to dump the VFP state when that flag is set, so that
we can properly debug code that modifies floating point registers.
Signed-off-by: Alexander Graf <agraf@suse.de>
[WN: Commit message tweak, rebased. Output all registers, two per line.]
Signed-off-by: Will Newton <will.newton@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This implements exclusive loads/stores for AArch64 along the lines of
the arm32 and ppc implementations. The exclusive load remembers the
address and loaded value. The exclusive store throws an exception which
uses those values to check for equality in a proper exclusive region.
This is not actually the architecturally mandated semantics (for either
AArch32 or AArch64) but it is close enough for typical guest code
sequences to work correctly, and saves us from having to monitor all
guest stores. It's fairly easy to come up with test cases where we
don't behave like hardware - we don't for example model cache line
behaviour. However in the common patterns this works, and the existing
32-bit ARM exclusive access implementation has the same limitations.
AArch64 also implements new acquire/release loads/stores (which may be
either exclusive or non-exclusive). These impose extra ordering
constraints on memory operations (i.e. they act as if they have an
implicit barrier built into them). As TCG is single-threaded all our
barriers are no-ops, so these just behave like normal loads and stores.
Signed-off-by: Michael Matz <matz@suse.de>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
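A rough sketch of the scheme described above (purely illustrative;
names are hypothetical, and the real implementation performs the
comparison in the exception path rather than inline):

    #include <stdint.h>

    /* State remembered by the exclusive load. */
    static uint64_t exclusive_addr;
    static uint64_t exclusive_val;

    static uint64_t load_exclusive_u64(const uint64_t *mem, uint64_t addr)
    {
        exclusive_addr = addr;
        exclusive_val = mem[addr / 8];
        return exclusive_val;
    }

    /* Returns 0 on success and 1 on failure, like the architectural
     * status result of STXR; on failure nothing is written. */
    static int store_exclusive_u64(uint64_t *mem, uint64_t addr,
                                   uint64_t newval)
    {
        if (addr != exclusive_addr || mem[addr / 8] != exclusive_val) {
            return 1;
        }
        mem[addr / 8] = newval;
        return 0;
    }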
Adds support for Load Register (literal), both normal
and SIMD/FP forms.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>