Commit Graph

73 Commits

Luis Pires
a3d67f3e5d target/ppc: fix setting of CR flags in bcdcfsq
According to the ISA, CR should be set based on the source value, and
not on the packed decimal result.
The way this was implemented would cause GT, LT and EQ to be set
incorrectly when the source value was too large and the 31 least
significant digits of the packed decimal result ended up being all zero.
This would happen for source values of +/-10^31, +/-10^32, etc.

The new implementation fixes this and also skips the result calculation
altogether in case of src overflow.
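
Illustratively, the corrected flag logic (a sketch only, with the name and
the __int128 type chosen for exposition; CR bits LT=0b1000, GT=0b0100,
EQ=0b0010, OX=0b0001):

    static uint32_t bcdcfsq_cr(__int128 src, bool ox_flag)
    {
        uint32_t cr = (src == 0) ? 0x2 : (src < 0) ? 0x8 : 0x4;
        if (ox_flag) {
            cr |= 0x1;  /* overflow: source needs more than 31 BCD digits */
        }
        return cr;
    }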

Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Message-Id: <20210823150235.35759-1-luis.pires@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-09-29 19:37:39 +10:00
Matheus Ferst
f297c4c605 target/ppc: fix vextu[bhw][lr]x helpers
These helpers shouldn't depend on the host endianness, as they only use
shifts, ands, and int128_* methods.
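
The endian-independent shape is approximately the following (a sketch
along the lines of the fix; the exact macro in the patch may differ):

    #define VEXTU_X_DO(name, size, left)                                \
    target_ulong glue(helper_, name)(target_ulong a, ppc_avr_t *b)      \
    {                                                                   \
        int index = (a & 0xf) * 8;                                      \
        if (left) {                                                     \
            index = 128 - index - size;                                 \
        }                                                               \
        return int128_getlo(int128_rshift(b->s128, index)) &            \
            MAKE_64BIT_MASK(0, size);                                   \
    }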

Fixes: 60caf2216b ("target-ppc: add vextu[bhw][lr]x instructions")
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20210826141446.2488609-3-matheus.ferst@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-08-27 12:41:47 +10:00
Matheus Ferst
89ccd7dc3f target/ppc: Implement cfuged instruction
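
For reference, the Centrifuge Doubleword semantics from Power ISA 3.1 as a
plain-C sketch (QEMU's actual helper processes runs of identical mask bits
rather than single bits):

    /* Bits of src whose mask bit is 1 are gathered, in order, at the
     * low-order end; the remaining bits, in order, at the high-order end.
     */
    static uint64_t cfuged_ref(uint64_t src, uint64_t mask)
    {
        uint64_t left = 0, right = 0;
        int nright = 0;

        for (int i = 63; i >= 0; i--) {   /* most to least significant */
            uint64_t bit = (src >> i) & 1;
            if ((mask >> i) & 1) {
                right = (right << 1) | bit;
                nright++;
            } else {
                left = (left << 1) | bit;
            }
        }
        return nright == 64 ? right : (left << nright) | right;
    }
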
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20210601193528.2533031-12-matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-06-03 18:10:31 +10:00
Bruno Larsen (billionai)
c19940db0f target/ppc: created ppc_{store,get}_vscr for generic vscr usage
Some functions unrelated to TCG use helper_m{t,f}vscr, so generic versions
of those functions were added to cpu.c, in preparation for compilation
without TCG.
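
The generic accessors plausibly look like this (a sketch reconstructed
from the description; vscr_sat and the VSCR_SAT/VSCR_NJ bit names come
from the earlier VSCR rework):

    void ppc_store_vscr(CPUPPCState *env, uint32_t vscr)
    {
        env->vscr = vscr & ~(1u << VSCR_SAT);
        /* Which bit we set is arbitrary, but clear the rest.  */
        env->vscr_sat.u64[0] = vscr & (1u << VSCR_SAT);
        env->vscr_sat.u64[1] = 0;
        set_flush_to_zero((vscr >> VSCR_NJ) & 1, &env->vec_status);
    }

    uint32_t ppc_get_vscr(CPUPPCState *env)
    {
        uint32_t sat = (env->vscr_sat.u64[0] | env->vscr_sat.u64[1]) != 0;
        return env->vscr | (sat << VSCR_SAT);
    }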

Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Message-Id: <20210512140813.112884-2-bruno.larsen@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-05-19 10:30:28 +10:00
Richard Henderson
8a05fd9a22 target/ppc: Move helper_regs.h functions out-of-line
Move the functions to a new file, helper_regs.c.

Note int_helper.c was relying on helper_regs.h to
indirectly include qemu/log.h.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210315184615.1985590-2-richard.henderson@linaro.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-05-04 11:41:24 +10:00
Fabiano Rosas
936fda4d77 target/ppc: Fix bcdsub. emulation when result overflows
Commit d03b174a83 ("target/ppc: simplify bcdadd/sub functions") was meant
to simplify some of the code, but it inadvertently altered the way the CR6
field is set after the operation has overflowed.

The CR6 bits are set based on the *unbounded* result of the operation,
so we need to look at the result before returning from bcd_add_mag,
otherwise we will look at 0 when it overflows.

Consider the following subtraction:

v0 = 0x9999999999999999999999999999999c (maximum positive BCD value)
v1 = 0x0000000000000000000000000000001d (negative one BCD value)
bcdsub. v0,v0,v1,0

The Power ISA 2.07B says:
If the unbounded result is greater than zero, do the following.
  If PS=0, the sign code of the result is set to 0b1100.
  If PS=1, the sign code of the result is set to 0b1111.
  If the operation overflows, CR field 6 is set to 0b0101. Otherwise,
  CR field 6 is set to 0b0100.

POWER9 hardware:
vr0 = 0x0000000000000000000000000000000c (positive zero BCD value)
cr6 = 0b0101 (0x5) (positive, overflow)

QEMU:
vr0 = 0x0000000000000000000000000000000c (positive zero BCD value)
cr6 = 0b0011 (0x3) (zero, overflow) <--- wrong

This patch reverts the part of d03b174a83 that introduced the
problem and adds a test-case to avoid further regressions:

before:
$ make run-tcg-tests-ppc64le-linux-user
(...)
  TEST    bcdsub on ppc64le
bcdsub: qemu/tests/tcg/ppc64le/bcdsub.c:58: test_bcdsub_gt:
Assertion `(cr >> 4) == ((1 << 2) | (1 << 0))' failed.
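
Schematically, CR6 must be derived from the sign/zero state of the
unbounded result, with overflow OR-ed in afterwards; a minimal sketch
(bit encoding LT=0b1000, GT=0b0100, EQ=0b0010, OX=0b0001):

    static uint32_t bcd_cr6(int sign, bool zero, bool overflow)
    {
        /* zero/sign must describe the *unbounded* result */
        uint32_t cr6 = zero ? 0x2 : (sign < 0) ? 0x8 : 0x4;
        return overflow ? cr6 | 0x1 : cr6;  /* GT|OX = 0b0101, as on POWER9 */
    }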

Fixes: d03b174a83 (target/ppc: simplify bcdadd/sub functions)
Reported-by: Paul Clarke <pc@us.ibm.com>
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20210222194035.2723056-1-farosas@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-03-10 09:07:09 +11:00
Chetan Pant
6bd039cdbe powerpc tcg: Fix Lesser GPL version number
There is no "version 2" of the "Lesser" General Public License.
It is either "GPL version 2.0" or "Lesser GPL version 2.1".
This patch replaces all occurrences of "Lesser GPL version 2" with
"Lesser GPL version 2.1" in the comment sections.

Signed-off-by: Chetan Pant <chetan4windows@gmail.com>
Message-Id: <20201019061126.3102-1-chetan4windows@gmail.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2020-11-15 16:38:50 +01:00
Peter Maydell
dd8014e4e9 ppc patch queue 2020-08-18
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-5.2-20200818' into staging

ppc patch queue 2020-08-18

Here's my first pull request for qemu-5.2, which has quite a few
accumulated things.  Highlights are:

 * Preliminary support for POWER10 (Power ISA 3.1) instruction emulation
 * Add documentation on the (very confusing) pseries NUMA configuration
 * Fix some bugs handling edge cases with XICS, XIVE and kernel_irqchip
 * Fix icount for a number of POWER registers
 * Many cleanups to error handling in XIVE code
 * Validate size of -prom-env data

# gpg: Signature made Tue 18 Aug 2020 05:18:36 BST
# gpg:                using RSA key 75F46586AE61A66CC44E87DC6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>" [full]
# gpg:                 aka "David Gibson (Red Hat) <dgibson@redhat.com>" [full]
# gpg:                 aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>" [full]
# gpg:                 aka "David Gibson (kernel.org) <dwg@kernel.org>" [unknown]
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E  87DC 6C38 CACA 20D9 B392

* remotes/dgibson/tags/ppc-for-5.2-20200818: (40 commits)
  spapr/xive: Use xive_source_esb_len()
  nvram: Exit QEMU if NVRAM cannot contain all -prom-env data
  spapr/xive: Simplify error handling of kvmppc_xive_cpu_synchronize_state()
  ppc/xive: Simplify error handling in xive_tctx_realize()
  spapr/xive: Simplify error handling in kvmppc_xive_connect()
  ppc/xive: Fix error handling in vmstate_xive_tctx_*() callbacks
  spapr/xive: Fix error handling in kvmppc_xive_post_load()
  spapr/kvm: Fix error handling in kvmppc_xive_pre_save()
  spapr/xive: Rework error handling of kvmppc_xive_set_source_config()
  spapr/xive: Rework error handling in kvmppc_xive_get_queues()
  spapr/xive: Rework error handling of kvmppc_xive_[gs]et_queue_config()
  spapr/xive: Rework error handling of kvmppc_xive_cpu_[gs]et_state()
  spapr/xive: Rework error handling of kvmppc_xive_mmap()
  spapr/xive: Rework error handling of kvmppc_xive_source_reset()
  spapr/xive: Rework error handling of kvmppc_xive_cpu_connect()
  spapr: Simplify error handling in spapr_phb_realize()
  spapr/xive: Convert KVM device fd checks to assert()
  ppc/xive: Introduce dedicated kvm_irqchip_in_kernel() wrappers
  ppc/xive: Rework setup of XiveSource::esb_mmio
  target/ppc: Integrate icount to purr, vtb, and tbu40
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-08-24 09:35:21 +01:00
Paolo Bonzini
139c1837db meson: rename included C source files to .c.inc
With Makefiles that have automatically generated dependencies, generated
includes are set as dependencies of the Makefile, so that they are built
before everything else and they are available when first building the .c
files.

Alternatively you can use a fine-grained dependency, e.g.

        target/arm/translate.o: target/arm/decode-neon-shared.inc.c

With Meson you have only one choice and it is a third option, namely
"build at the beginning of the corresponding target"; the way you
express it is to list the includes in the sources of that target.

The problem is that Meson decides if something is a source vs. a
generated include by looking at the extension: '.c', '.cc', '.m', '.C'
are sources, while everything else is considered an include---including
'.inc.c'.

Use '.c.inc' to avoid this, as it is consistent with our other convention
of using '.rst.inc' for included reStructuredText files.  The editorconfig
file is adjusted.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-08-21 06:18:30 -04:00
Lijun Pan
c4b8b49d68 target/ppc: add vmulh{su}d instructions
vmulhsd: Vector Multiply High Signed Doubleword
vmulhud: Vector Multiply High Unsigned Doubleword
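
A sketch of the unsigned doubleword form, using mulu64() from
qemu/host-utils.h (the signed form would use muls64(); details are
approximate, not the patch itself):

    void helper_vmulhud(ppc_avr_t *r, ppc_avr_t *a, ppc_avr_t *b)
    {
        uint64_t discard;

        mulu64(&discard, &r->u64[0], a->u64[0], b->u64[0]);
        mulu64(&discard, &r->u64[1], a->u64[1], b->u64[1]);
    }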

Signed-off-by: Lijun Pan <ljp@linux.ibm.com>
Message-Id: <20200724045845.89976-5-ljp@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2020-08-12 13:16:27 +10:00
Lijun Pan
f3e0d864ab target/ppc: add vmulh{su}w instructions
vmulhsw: Vector Multiply High Signed Word
vmulhuw: Vector Multiply High Unsigned Word

Signed-off-by: Lijun Pan <ljp@linux.ibm.com>
Message-Id: <20200724045845.89976-4-ljp@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2020-08-12 13:16:27 +10:00
Lijun Pan
a285ffa680 target/ppc: convert vmuluwm to tcg_gen_gvec_mul
Convert the original implementation of vmuluwm to the more generic
tcg_gen_gvec_mul.
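
Sketch of the resulting translation (register offsets via
avr_full_offset(), as used elsewhere in the VMX translation code):

    tcg_gen_gvec_mul(MO_32, avr_full_offset(rD(ctx->opcode)),
                     avr_full_offset(rA(ctx->opcode)),
                     avr_full_offset(rB(ctx->opcode)), 16, 16);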

Signed-off-by: Lijun Pan <ljp@linux.ibm.com>
Message-Id: <20200701234344.91843-5-ljp@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2020-08-12 13:16:27 +10:00
Richard Henderson
3e114acc91 target/ppc: Use tcg_gen_gvec_rotlv
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-06-02 08:42:37 -07:00
Richard Henderson
71bfd65c5f softfloat: Name compare relation enum
Give the previously unnamed enum a typedef name.  Use it in the
prototypes of compare functions.  Use it to hold the results
of the compare functions.
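
The resulting typedef (the values match the pre-existing
float_relation_* constants):

    typedef enum {
        float_relation_less      = -1,
        float_relation_equal     =  0,
        float_relation_greater   =  1,
        float_relation_unordered =  2,
    } FloatRelation;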

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-05-19 08:41:45 -07:00
Mark Cave-Ayland
428115c3a9 target/ppc: use Vsr macros in BCD helpers
This allows us to remove more endian-specific defines from int_helper.c.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-Id: <20190926204453.31837-1-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-10-04 19:08:21 +10:00
Stefan Brankovic
1872588ede target/ppc: Optimize emulation of vclzw instruction
Optimize the Altivec instruction vclzw (Vector Count Leading Zeros Word).
This instruction counts the number of leading zeros of each word element
in the source register and places the result in the appropriate word
element of the destination register.

Counting is performed in four iterations of a for loop (one for each word
element of source register vB). Every iteration consists of loading the
appropriate word element from the source register, counting leading zeros
with tcg_gen_clzi_i32, and saving the result in the appropriate word
element of the destination register.
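
The expansion described above, as a sketch (the get/set accessors are
assumed names, not the patch's exact helpers):

    for (int i = 0; i < 4; i++) {
        TCGv_i32 tmp = tcg_temp_new_i32();

        get_avr_word(tmp, VB, i);         /* assumed: load word i of vB */
        tcg_gen_clzi_i32(tmp, tmp, 32);   /* yields 32 for a zero input */
        set_avr_word(VT, tmp, i);         /* assumed: store word i of vD */
        tcg_temp_free_i32(tmp);
    }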

Signed-off-by: Stefan Brankovic <stefan.brankovic@rt-rk.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <1563200574-11098-7-git-send-email-stefan.brankovic@rt-rk.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-08-21 17:17:11 +10:00
Stefan Brankovic
b8313f0d91 target/ppc: Optimize emulation of vclzd instruction
Optimize the Altivec instruction vclzd (Vector Count Leading Zeros
Doubleword). This instruction counts the number of leading zeros of each
doubleword element in the source register and places the result in the
appropriate doubleword element of the destination register.

tcg's count-leading-zeros operation is used twice (once for each
doubleword element of source register vB), and the result is placed in
the appropriate doubleword element of destination register vD.

Signed-off-by: Stefan Brankovic <stefan.brankovic@rt-rk.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <1563200574-11098-6-git-send-email-stefan.brankovic@rt-rk.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-08-21 17:17:11 +10:00
Stefan Brankovic
083b3f012f target/ppc: Optimize emulation of vgbbd instruction
Optimize the Altivec instruction vgbbd (Vector Gather Bits by Bytes by
Doubleword). All ith bits (i in range 1 to 8) of each byte of a doubleword
element in the source register are concatenated and placed into the ith
byte of the appropriate doubleword element in the destination register.

The following solution is applied to both doubleword elements of the
source register in parallel, in order to reduce the number of instructions
needed (that's why arrays are used): First, both doubleword elements of
source register vB are placed in the appropriate elements of array avr.
Bits are gathered in 2x8 iterations (two for loops). In the first
iteration, bit 1 of byte 1, bit 2 of byte 2, ... bit 8 of byte 8 are in
their final spots, so avr[i], i={0,1}, can be AND-ed with tcg_mask. For
every following iteration, the avr[i] and tcg_mask variables have to be
shifted right by 7 and 8 places, respectively, in order to get bit 1 of
byte 2, bit 2 of byte 3, ... bit 7 of byte 8 into their final spots, so
the shifted avr values (saved in tmp) can be AND-ed with the new value of
tcg_mask, and so on. After the first 8 iterations (the first loop), all
the first bits are in their final places, all second bits except the
second bit of the eighth byte are in their places, and so on; only one
eighth bit, the one from the eighth byte, is in its place. In the second
loop we do all operations symmetrically, in order to get the other half of
the bits into their final spots. Results for the first and second
doubleword elements are saved in result[0] and result[1] respectively. In
the end those results are saved in the appropriate doubleword elements of
destination register vD.
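
In effect each doubleword undergoes an 8x8 bit-matrix transpose; a plain-C
reference of the semantics (illustration only, ignoring the ISA's
big-endian bit numbering):

    static uint64_t gbbd_ref(uint64_t x)
    {
        uint64_t r = 0;

        for (int bit = 0; bit < 8; bit++) {
            for (int byte = 0; byte < 8; byte++) {
                /* bit `bit` of byte `byte` -> bit `byte` of byte `bit` */
                uint64_t b = (x >> (byte * 8 + bit)) & 1;
                r |= b << (bit * 8 + byte);
            }
        }
        return r;
    }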

Signed-off-by: Stefan Brankovic <stefan.brankovic@rt-rk.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <1563200574-11098-5-git-send-email-stefan.brankovic@rt-rk.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-08-21 17:17:11 +10:00
Stefan Brankovic
4e6d0920e7 target/ppc: Optimize emulation of vsl and vsr instructions
Optimization of the Altivec instructions vsl and vsr (Vector Shift
Left/Right). Perform the shift operation (left or right, respectively) on
the 128-bit value of register vA by the value specified in bits 125-127 of
register vB. The lowest 3 bits in each byte element of register vB must be
identical, or the result is undefined.

For the vsl instruction, the first step is that bits 125-127 of register
vB have to be saved in variable sh. Then, the highest sh bits of the lower
doubleword element of register vA are saved in variable shifted, in order
not to lose those bits when the shift operation is performed on the lower
doubleword element of register vA, which is the next step. After shifting
the lower doubleword element, the shift operation is performed on the
higher doubleword element of vA, with replacement of the lowest sh bits
(which are now 0) with the bits saved in shifted.

For the vsr instruction, first the bits 125-127 of register vB have to be
saved in variable sh. Then, the lowest sh bits of the higher doubleword
element of register vA are saved in variable shifted, in order not to lose
those bits when the shift operation is performed on the higher doubleword
element of register vA, which is the next step. After shifting the higher
doubleword element, the shift operation is performed on the lower
doubleword element of vA, with replacement of the highest sh bits (which
are now 0) with the bits saved in shifted.
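
In plain C the vsl steps amount to the following (VsrD(0) is the high
doubleword; the patch itself emits the equivalent TCG ops):

    int sh = b->VsrB(15) & 0x7;                 /* bits 125-127 of vB */
    uint64_t shifted = sh ? a->VsrD(1) >> (64 - sh) : 0;
    r->VsrD(1) = a->VsrD(1) << sh;              /* shift the low doubleword */
    r->VsrD(0) = (a->VsrD(0) << sh) | shifted;  /* then the high one */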

Signed-off-by: Stefan Brankovic <stefan.brankovic@rt-rk.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <1563200574-11098-3-git-send-email-stefan.brankovic@rt-rk.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-08-21 17:17:11 +10:00
Stefan Brankovic
1cc792698e target/ppc: Optimize emulation of lvsl and lvsr instructions
Add a simple macro that calls the tcg implementation of the appropriate
instruction if Altivec support is active.

Optimization of the Altivec instruction lvsl (Load Vector for Shift Left).
Place bytes sh:sh+15 of the value 0x00 || 0x01 || 0x02 || ... || 0x1E ||
0x1F in the destination register. Sh is calculated by adding the 2 source
registers and taking bits 60-63 of the result.

First, bits [28-31] are placed from EA into variable sh. After that, the
bytes are created in the following way: bytes sh:(sh+7) of X (from the
description) are created by multiplying sh by 0x0101010101010101 followed
by addition of the result to 0x0001020304050607. The value obtained is
placed in the higher doubleword element of vD. Bytes (sh+8):(sh+15) are
created by adding the result of the previous multiplication to
0x08090a0b0c0d0e0f. The value obtained is placed in the lower doubleword
element of vD.

Optimization of the Altivec instruction lvsr (Load Vector for Shift
Right). Place bytes 16-sh:31-sh of the value 0x00 || 0x01 || 0x02 || ... ||
0x1E || 0x1F in the destination register. Sh is calculated by adding the
2 source registers and taking bits 60-63 of the result.

First, bits [28-31] are placed from EA into variable sh. After that, the
bytes are created in the following way: bytes sh:(sh+7) of X (from the
description) are created by multiplying sh by 0x0101010101010101 followed
by subtraction of the result from 0x1011121314151617. The value obtained
is placed in the higher doubleword element of vD. Bytes (sh+8):(sh+15) are
created by subtracting the result of the previous multiplication from
0x18191a1b1c1d1e1f. The value obtained is placed in the lower doubleword
element of vD.
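
In plain C the lvsl computation amounts to the following (for lvsr,
substitute subtraction from 0x1011121314151617 and 0x18191a1b1c1d1e1f):

    uint64_t sh = (ra + rb) & 0xf;              /* EA bits 60-63 */
    uint64_t t  = sh * 0x0101010101010101ULL;   /* sh replicated per byte */
    r->VsrD(0) = t + 0x0001020304050607ULL;     /* bytes sh .. sh+7 */
    r->VsrD(1) = t + 0x08090a0b0c0d0e0fULL;     /* bytes sh+8 .. sh+15 */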

Signed-off-by: Stefan Brankovic <stefan.brankovic@rt-rk.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <1563200574-11098-2-git-send-email-stefan.brankovic@rt-rk.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-08-21 17:17:11 +10:00
Markus Armbruster
db72581598 Include qemu/main-loop.h less
In my "build everything" tree, changing qemu/main-loop.h triggers a
recompile of some 5600 out of 6600 objects (not counting tests and
objects that don't depend on qemu/osdep.h).  It includes block/aio.h,
which in turn includes qemu/event_notifier.h, qemu/notify.h,
qemu/processor.h, qemu/qsp.h, qemu/queue.h, qemu/thread-posix.h,
qemu/thread.h, qemu/timer.h, and a few more.

Include qemu/main-loop.h only where it's needed.  Touching it now
recompiles only some 1700 objects.  For block/aio.h and
qemu/event_notifier.h, these numbers drop from 5600 to 2800.  For the
others, they shrink only slightly.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190812052359.30071-21-armbru@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
2019-08-16 13:31:52 +02:00
Mark Cave-Ayland
5ba5335d93 target/ppc: decode target register in VSX_EXTRACT_INSERT at translation time
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190616123751.781-15-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-07-02 09:43:58 +10:00
Mark Cave-Ayland
03b32c092e target/ppc: remove getVSR()/putVSR() from int_helper.c
Since commit 8a14d31b00 "target/ppc: switch fpr/vsrl registers so all VSX
registers are in host endian order", the functions getVSR() and putVSR(),
which used to convert the VSR registers into host endian order, are no
longer required.

Now that there are no more users of getVSR()/putVSR(), these functions can
be completely removed.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190616123751.781-4-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-07-02 09:43:58 +10:00
Richard Henderson
571fbe6ccd target/ppc: Use vector variable shifts for VSL, VSR, VSRA
The gvec expanders take care of masking the shift amount
against the element width.
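
For instance, the word form reduces to a single expansion along these
lines (offsets assumed, per the usual VMX translation helpers):

    tcg_gen_gvec_shlv(MO_32, avr_full_offset(rD(ctx->opcode)),
                      avr_full_offset(rA(ctx->opcode)),
                      avr_full_offset(rB(ctx->opcode)), 16, 16);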

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190518191430.21686-2-richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-05-29 11:39:45 +10:00
Anton Blanchard
7fa0ddc1d6 target/ppc: Fix vsum2sws
A recent cleanup changed the pre-zeroing of the result from 64-bit
to 32-bit operations:

-        result.u64[i] = 0;
+        result.VsrW(i) = 0;

This corrupts the result.

Fixes: 60594fea29 ("target/ppc: remove various HOST_WORDS_BIGENDIAN hacks in int_helper.c")
Signed-off-by: Anton Blanchard <anton@ozlabs.org>
Message-Id: <20190507004811.29968-9-anton@ozlabs.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-05-29 11:39:44 +10:00
Anton Blanchard
63be02fc69 target/ppc: Fix vslv and vsrv
vslv and vsrv are broken on little-endian hosts: we append 00 to the
high byte, not the low byte. Fix it by using the VsrB() accessor.
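
An illustrative shape of the fixed loop (close to, though not necessarily
identical with, the patch):

    for (int i = 0; i < 16; i++) {
        int sh = b->VsrB(i) & 0x7;
        /* concatenate byte i with the byte below it (00 past the end) */
        unsigned concat = (a->VsrB(i) << 8) | (i == 15 ? 0 : a->VsrB(i + 1));
        r->VsrB(i) = (concat << sh) >> 8;
    }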

Signed-off-by: Anton Blanchard <anton@ozlabs.org>
Message-Id: <20190507004811.29968-6-anton@ozlabs.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-05-29 11:39:44 +10:00
Richard Henderson
3f74b6322c target/ppc: Use qemu_guest_getrandom for DARN
We now have an interface for guest visible random numbers.
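
A sketch of what the 64-bit DARN helper can now look like (simplified;
DARN reports failure as an all-ones value):

    uint64_t helper_darn64(void)
    {
        Error *err = NULL;
        uint64_t ret;

        if (qemu_guest_getrandom(&ret, sizeof(ret), &err) < 0) {
            error_free(err);
            return -1;
        }
        return ret;
    }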

Acked-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-05-22 12:38:54 -04:00
David Gibson
b6cb41b22c target/ppc: Style fixes for int_helper.c
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
2019-04-26 10:42:38 +10:00
Richard Henderson
73e14c6a9c target/ppc: convert vmin* and vmax* to vector operations
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-18-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-18 11:00:44 +11:00
Richard Henderson
fb11ae7daa target/ppc: convert vadd*s and vsub*s to vector operations
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-17-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-18 11:00:44 +11:00
Richard Henderson
9b5b74da0a target/ppc: Split out VSCR_SAT to a vector field
Change the representation of VSCR_SAT such that it is easy
to set from vector code.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-16-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-18 11:00:44 +11:00
Richard Henderson
6175f5a058 target/ppc: Add set_vscr_sat
This is required before changing the representation of the register.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-15-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-18 11:00:44 +11:00
Richard Henderson
cc2b90d725 target/ppc: Add helper_mfvscr
This is required before changing the representation of the register.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-13-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-18 11:00:44 +11:00
Richard Henderson
dedfaac74e target/ppc: Pass integer to helper_mtvscr
We can re-use this helper elsewhere if we're not passing
in an entire vector register.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-10-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-18 11:00:44 +11:00
Richard Henderson
0f6a6d5db8 target/ppc: convert vsplt[bhw] to use vector operations
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-5-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-18 11:00:44 +11:00
Richard Henderson
471ff3d025 target/ppc: convert vspltis[bhw] to use vector operations
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190215100058.20015-4-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-18 11:00:44 +11:00
Mark Cave-Ayland
3e942a1a80 target/ppc: convert vaddu[b,h,w,d] and vsubu[b,h,w,d] over to use vector operations
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-3-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-18 11:00:44 +11:00
Mark Cave-Ayland
60594fea29 target/ppc: remove various HOST_WORDS_BIGENDIAN hacks in int_helper.c
Following on from the previous work, there are numerous endian-related hacks
in int_helper.c that can now be replaced with Vsr* macros.

There are also a few places where the VECTOR_FOR_INORDER_I macro can be
replaced with a normal iterator since the processing order is irrelevant.
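
For reference, the Vsr* accessors index elements in the ISA's element
order on any host, roughly as defined in target/ppc/cpu.h at the time:

    #if defined(HOST_WORDS_BIGENDIAN)
    #define VsrB(i) u8[i]
    #define VsrH(i) u16[i]
    #define VsrW(i) u32[i]
    #define VsrD(i) u64[i]
    #else
    #define VsrB(i) u8[15 - (i)]
    #define VsrH(i) u16[7 - (i)]
    #define VsrW(i) u32[3 - (i)]
    #define VsrD(i) u64[1 - (i)]
    #endif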

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-04 18:44:20 +11:00
Mark Cave-Ayland
0ef83bf29e target/ppc: remove ROTRu32 and ROTRu64 macros from int_helper.c
Richard points out that these macros suffer from a -fsanitize=shift bug:
they improperly handle n == 0, turning it into a shift by 32/64
respectively. Replace them with QEMU's existing ror32() and ror64()
functions instead.
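
One common UB-free formulation (qemu/bitops.h provides ror32()/ror64();
the exact expression there may differ):

    static inline uint32_t ror32_ref(uint32_t word, unsigned int shift)
    {
        return (word >> (shift & 31)) | (word << (-shift & 31));
    }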

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-04 18:44:19 +11:00
Mark Cave-Ayland
634c583526 target/ppc: simplify VEXT_SIGNED macro in int_helper.c
As pointed out by Richard: it does not need the mask argument, nor does it need
the recast argument. The masking is implied by the cast argument, and the
recast is implied by the assignment.
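
The simplified macro then has roughly this shape (sketch):

    #define VEXT_SIGNED(name, element, cast)                    \
    void helper_##name(ppc_avr_t *r, ppc_avr_t *b)              \
    {                                                           \
        int i;                                                  \
                                                                \
        for (i = 0; i < ARRAY_SIZE(r->element); i++) {          \
            r->element[i] = (cast)b->element[i];                \
        }                                                       \
    }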

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-04 18:44:19 +11:00
Mark Cave-Ayland
730d2ca3f0 target/ppc: eliminate use of EL_IDX macros from int_helper.c
These macros can be eliminated by instead using the relevant Vsr* macros
in the few locations where they appear.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-04 18:44:19 +11:00
Mark Cave-Ayland
3c385a93b3 target/ppc: eliminate use of HI_IDX and LO_IDX macros from int_helper.c
The original purpose of these macros was to correctly reference the high and low
parts of the VSRs regardless of the host endianness.

Replace these direct references to high and low parts with the relevant VsrD
macro instead, and completely remove the now-unused HI_IDX and LO_IDX macros.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-04 18:44:19 +11:00
Mark Cave-Ayland
4fbc89edc1 target/ppc: rework vmul{e,o}{s,u}{b,h,w} instructions to use Vsr* macros
The current implementations make use of the endian-specific macros HI_IDX and
LO_IDX directly to calculate array offsets.

Rework the implementation to use the Vsr* macros so that these per-endian
references can be removed.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-04 18:44:19 +11:00
Mark Cave-Ayland
d81c20404e target/ppc: rework vmrg{l,h}{b,h,w} instructions to use Vsr* macros
The current implementations make use of the endian-specific macros MRGLO/MRGHI
and also reference HI_IDX and LO_IDX directly to calculate array offsets.

Rework the implementation to use the Vsr* macros so that these per-endian
references can be removed.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-04 18:44:19 +11:00
Mark Cave-Ayland
2dea57db60 target/ppc: replace AVR* macros with Vsr* macros
Now that the VMX and VSR register sets have been combined, the same macros can
be used to access both AVR and VSR field members.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-01-09 09:28:14 +11:00
Mark Cave-Ayland
05ee3e8aa0 target/ppc: merge ppc_vsr_t and ppc_avr_t union types
Since the VSX registers are actually a superset of the VMX registers,
they can be represented by the same type. Merge ppc_avr_t into ppc_vsr_t
and change ppc_avr_t to be a simple typedef alias.

Note that due to a difference in the naming of the float32 member between
ppc_avr_t and ppc_vsr_t, references to the ppc_avr_t f member must be replaced
with f32 instead.
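
The merged type then has roughly this shape (abridged; the real definition
also carries signed and other variants):

    typedef union ppc_vsr_t {
        uint8_t  u8[16];
        uint16_t u16[8];
        uint32_t u32[4];
        uint64_t u64[2];
        float32  f32[4];
        float64  f64[2];
        Int128   s128;
    } ppc_vsr_t;

    typedef ppc_vsr_t ppc_avr_t;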

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-01-09 09:28:14 +11:00
Yasmin Beatriz
d03b174a83 target/ppc: simplify bcdadd/sub functions
After solving a corner case in bcdsub, this patch simplifies the logic
of both bcdadd/sub instructions by removing some unnecessary local flags.
This commit also rearranges some if-else conditions in bcdadd to make it
easier to read.

Signed-off-by: Yasmin Beatriz <yasmins@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-08-21 14:28:45 +10:00
Yasmin Beatriz
56e0e961ec target/ppc: bcdsub fix sign when result is zero
When the result of bcdsub is equal to zero, the result sign may be
set to negative in some cases, and this does not follow the Power ISA
specification for decimal integer arithmetic instructions.

Signed-off-by: Yasmin Beatriz <yasmins@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-08-21 14:28:45 +10:00
Laurent Vivier
4fff72185b target/ppc: fix build on ppc64 host
When I try to build a ppc64 target on a ppc64 host (gcc 8.1.1), I have:

.../target/ppc/int_helper.c: In function 'helper_vinsertb':
.../target/ppc/int_helper.c:1954:32: error: array subscript 18446744073709551608 is above array bounds of 'uint8_t[16]' {aka 'unsigned char[16]'} [-Werror=array-bounds]
         memmove(&r->u8[index], &b->u8[8 - sizeof(r->element)],              \
                                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.../target/ppc/int_helper.c:1965:1: note: in expansion of macro 'VINSERT'

If we compare with the macro for ppc64le, we can see
sizeof(r->element[0]) should be used instead of sizeof(r->element).

And VINSERT uses only u8, u16, u32 and u64, so the maximum value of
sizeof(r->element[0]) is 8.
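
So the corrected macro body becomes, approximately:

    memmove(&r->u8[index], &b->u8[8 - sizeof(r->element[0])],
            sizeof(r->element[0]));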

Suggested-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-07 12:12:27 +10:00
Philippe Mathieu-Daudé
23c11b04dc target: Do not include "exec/exec-all.h" if it is not necessary
Code change produced with:
    $ git grep '#include "exec/exec-all.h"' | \
      cut -d: -f-1 | \
      xargs egrep -L "(cpu_address_space_init|cpu_loop_|tlb_|tb_|GETPC|singlestep|TranslationBlock)" | \
      xargs sed -i.bak '/#include "exec\/exec-all.h"/d'

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180528232719.4721-10-f4bug@amsat.org>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-06-01 14:15:10 +02:00