Commit Graph

7359 Commits

Fabiano Rosas
d3841fce0d target/ppc: Fix compilation with FLUSH_ALL_TLBS debug option
../target/ppc/mmu_helper.c: In function 'helper_store_ibatu':
../target/ppc/mmu_helper.c:1802:17: error: unused variable 'cpu' [-Werror=unused-variable]
 1802 |     PowerPCCPU *cpu = env_archcpu(env);
      |                 ^~~
../target/ppc/mmu_helper.c: In function 'helper_store_dbatu':
../target/ppc/mmu_helper.c:1838:17: error: unused variable 'cpu' [-Werror=unused-variable]
 1838 |     PowerPCCPU *cpu = env_archcpu(env);
      |                 ^~~
../target/ppc/mmu_helper.c: In function 'helper_store_601_batu':
../target/ppc/mmu_helper.c:1874:17: error: unused variable 'cpu' [-Werror=unused-variable]
 1874 |     PowerPCCPU *cpu = env_archcpu(env);
      |                 ^~~
../target/ppc/mmu_helper.c: In function 'helper_store_601_batl':
../target/ppc/mmu_helper.c:1919:17: error: unused variable 'cpu' [-Werror=unused-variable]
 1919 |     PowerPCCPU *cpu = env_archcpu(env);
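
A minimal, hedged sketch of one common way to silence this class of
unused-variable error in conditionally compiled debug code (illustrative
only, not necessarily the exact upstream patch): drop the standalone
local and fetch the CPU at the point of use, so nothing is left unused
in either configuration. env_cpu() and tlb_flush() are real QEMU APIs;
the placement is an assumption.

    #if defined(FLUSH_ALL_TLBS)
        /* Flush via env_cpu() directly; no separate 'cpu' local that
         * can end up unused when the surrounding code changes. */
        tlb_flush(env_cpu(env));
    #endif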

Fixes: db70b31144 ("target/ppc: Use env_cpu, env_archcpu")
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20210702215235.1941771-3-farosas@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-07-09 10:38:19 +10:00
Fabiano Rosas
26ba91db6c target/ppc: Fix compilation with DUMP_PAGE_TABLES debug option
../target/ppc/mmu_helper.c: In function 'get_segment_6xx_tlb':
../target/ppc/mmu_helper.c:514:46: error: passing argument 1 of
'ppc_hash32_hpt_mask' from incompatible pointer type [-Werror=incompatible-pointer-types]

  514 |                          ppc_hash32_hpt_mask(env) + 0x80);
      |                                              ^~~
      |                                              |
      |                                              CPUPPCState *
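
A hedged sketch of the likely shape of the fix (the exact upstream
change is not reproduced here): since ppc_hash32_hpt_mask() takes the
CPU rather than the env, the DUMP_PAGE_TABLES path needs a PowerPCCPU
pointer obtained via env_archcpu(). 'mask_end' is an illustrative
local, not taken from the patch.

    PowerPCCPU *cpu = env_archcpu(env);   /* CPUPPCState -> PowerPCCPU */

    /* DUMP_PAGE_TABLES debug path: pass the CPU, not the env. */
    hwaddr mask_end = ppc_hash32_hpt_mask(cpu) + 0x80;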

Fixes: 36778660d7 ("target/ppc: Eliminate htab_base and htab_mask variables")
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20210702215235.1941771-2-farosas@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-07-09 10:38:19 +10:00
Richard Henderson
cbf35bac39 target/ppc: Restrict ppc_cpu_tlb_fill to TCG
This function is used by TCGCPUOps, and is thus TCG specific.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-10-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-07-09 10:38:19 +10:00
Richard Henderson
51806b5458 target/ppc: Introduce ppc_xlate
Create one common dispatch for all of the ppc_*_xlate functions.
Use ppc64_v3_radix to directly dispatch between ppc_radix64_xlate
and ppc_hash64_xlate.

Remove the separate *_handle_mmu_fault and *_get_phys_page_debug
functions, using common code for ppc_cpu_tlb_fill and
ppc_cpu_get_phys_page_debug.
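
A simplified, hedged reconstruction of what a dispatcher of this shape
looks like (the real ppc_xlate in target/ppc/mmu_helper.c covers more
MMU models; case labels and exact signatures below are illustrative):

    static bool ppc_xlate(PowerPCCPU *cpu, vaddr eaddr,
                          MMUAccessType access_type, hwaddr *raddrp,
                          int *psizep, int *protp, int mmu_idx,
                          bool guest_visible)
    {
        switch (cpu->env.mmu_model) {
        case POWERPC_MMU_3_00:
            if (ppc64_v3_radix(cpu)) {
                return ppc_radix64_xlate(cpu, eaddr, access_type, raddrp,
                                         psizep, protp, mmu_idx,
                                         guest_visible);
            }
            /* fall through: hash MMU on an ISA v3.00 machine */
        case POWERPC_MMU_64B:
            return ppc_hash64_xlate(cpu, eaddr, access_type, raddrp,
                                    psizep, protp, mmu_idx, guest_visible);
        case POWERPC_MMU_32B:
            return ppc_hash32_xlate(cpu, eaddr, access_type, raddrp,
                                    psizep, protp, mmu_idx, guest_visible);
        default:
            return ppc_jumbo_xlate(cpu, eaddr, access_type, raddrp,
                                   psizep, protp, mmu_idx, guest_visible);
        }
    }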

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-9-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-07-09 10:38:19 +10:00
Richard Henderson
af44a14236 target/ppc: Split out ppc_jumbo_xlate
Mirror the interface of ppc_radix64_xlate (mostly), putting all
of the logic for older mmu translation into a single entry point.
For booke, we need to add mmu_idx to the xlate-style interface.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-8-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-07-09 10:38:19 +10:00
Richard Henderson
6c3c873c63 target/ppc: Split out ppc_hash32_xlate
Mirror the interface of ppc_radix64_xlate, putting all of
the logic for hash32 translation into a single entry point.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-7-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-07-09 10:38:19 +10:00
Richard Henderson
1a8c647bbd target/ppc: Split out ppc_hash64_xlate
Mirror the interface of ppc_radix64_xlate, putting all of
the logic for hash64 translation into a single function.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-6-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-07-09 10:38:19 +10:00
Richard Henderson
077a370499 target/ppc: Use bool success for ppc_radix64_xlate
Instead of returning non-zero for failure, return true for success.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-5-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-07-09 10:38:19 +10:00
Richard Henderson
42a611240e target/ppc: Push real-mode handling into ppc_radix64_xlate
This removes some incomplete duplication between
ppc_radix64_handle_mmu_fault and ppc_radix64_get_phys_page_debug.
The former was correct wrt SPR_HRMOR and the latter was not.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-4-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-07-09 10:38:19 +10:00
Richard Henderson
1b4d1cb31a target/ppc: Use MMUAccessType with *_handle_mmu_fault
These changes were waiting until we no longer needed to match
the function type of PowerPCCPUClass.handle_mmu_fault.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-3-bruno.larsen@eldorado.org.br>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-07-09 10:38:18 +10:00
Richard Henderson
db20cc2c56 target/ppc: Remove PowerPCCPUClass.handle_mmu_fault
Instead, use a switch on env->mmu_model.  This avoids some
replicated information in cpu setup.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-2-bruno.larsen@eldorado.org.br>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-07-09 10:38:18 +10:00
Greg Kurz
642f6f59cd target/ppc: Drop PowerPCCPUClass::interrupts_big_endian()
This isn't used anymore.

Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <20210622140926.677618-3-groug@kaod.org>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-07-09 10:38:18 +10:00
Greg Kurz
c11dc15d3a target/ppc: Introduce ppc_interrupts_little_endian()
PowerPC CPUs use big endian by default, but starting with POWER7,
server-grade CPUs use the ILE bit of the LPCR special purpose
register to decide on the endianness to use when handling
interrupts. This gives QEMU a clue about the endianness the
guest kernel is running in, which is needed when generating an
ELF dump of the guest or when delivering an FWNMI machine
check interrupt.

Commit 382d2db62b ("target-ppc: Introduce callback for interrupt
endianness") added a class method to PowerPCCPUClass to model this:
the default implementation returns a fixed "big endian" value, while
POWER7 and newer do the LPCR_ILE check. This is suboptimal because it
forces us to implement the method for every new CPU family, and it is
very unlikely that the result will ever differ from what we have
today.

We basically only have three cases to consider:
a) CPU doesn't have an LPCR => big endian
b) CPU has an LPCR but doesn't support the ILE bit => big endian
c) CPU has an LPCR and supports the ILE bit => little or big endian

Instead of class methods, introduce an inline helper that checks the
ILE bit in the LPCR_MASK to decide on the outcome. The new helper is
worded in terms of little endian instead of big endian, which allows
dropping a '!' operator in ppc_cpu_do_fwnmi_machine_check().
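
A hedged sketch of such an inline helper, written to match the three
cases listed above (close to, but not necessarily identical to, the
committed code):

    static inline bool ppc_interrupts_little_endian(PowerPCCPU *cpu)
    {
        PowerPCCPUClass *pcc = POWERPC_CPU_GET_CLASS(cpu);

        /* a) no LPCR / b) LPCR without ILE: lpcr_mask lacks the bit -> BE.
         * c) LPCR with ILE support: the current LPCR value decides.     */
        return (pcc->lpcr_mask & LPCR_ILE) &&
               (cpu->env.spr[SPR_LPCR] & LPCR_ILE);
    }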

Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <20210622140926.677618-2-groug@kaod.org>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-07-09 10:38:18 +10:00
David Edmondson
48e5c98a38 target/i386: Move X86XSaveArea into TCG
Given that TCG is now the only consumer of X86XSaveArea, move the
structure definition and associated offset declarations and checks to
a TCG-specific header.

Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-9-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-06 08:33:51 +02:00
David Edmondson
fea4500841 target/i386: Populate x86_ext_save_areas offsets using cpuid where possible
Rather than relying on the X86XSaveArea structure definition,
determine the offsets of the XSAVE state areas using CPUID leaf 0xd
where possible (KVM and HVF).

Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-8-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-06 08:33:48 +02:00
David Edmondson
3568987f78 target/i386: Observe XSAVE state area offsets
Rather than relying on the X86XSaveArea structure definition directly,
the routines that manipulate the XSAVE state area should observe the
offsets declared in the x86_ext_save_areas array.

Currently the offsets declared in the array are derived from the
structure definition, resulting in no functional change.
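
A hedged sketch of what "observing" the declared offsets means in
practice: state components are located through x86_ext_save_areas[]
rather than through X86XSaveArea field offsets (the helper name and
exact usage here are illustrative, not from the patch):

    static void *xsave_component_ptr(void *xsave_buf, int feature)
    {
        const ExtSaveArea *esa = &x86_ext_save_areas[feature];

        if (!esa->size) {
            return NULL;               /* component not present/enabled */
        }
        return (uint8_t *)xsave_buf + esa->offset;
    }

    /* e.g. locate the AVX high halves in a raw XSAVE image:
     *     void *ymmh = xsave_component_ptr(buf, XSTATE_YMM_BIT);
     */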

Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-7-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-06 07:54:53 +02:00
David Edmondson
5aa10ab1a0 target/i386: Make x86_ext_save_areas visible outside cpu.c
Provide visibility of the x86_ext_save_areas array and associated type
outside of cpu.c.

Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-6-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-06 07:54:53 +02:00
David Edmondson
c0198c5f87 target/i386: Pass buffer and length to XSAVE helper
In preparation for removing assumptions about XSAVE area offsets, pass
a buffer pointer and buffer length to the XSAVE helper functions.
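
A hedged sketch of the reshaped interface (my reading of the series;
the caller line is illustrative and actual prototypes may differ in
detail):

    void x86_cpu_xsave_all_areas(X86CPU *cpu, void *buf, uint32_t buflen);
    void x86_cpu_xrstor_all_areas(X86CPU *cpu, const void *buf,
                                  uint32_t buflen);

    /* A KVM caller then hands in its own backing storage and its size:
     *     x86_cpu_xsave_all_areas(cpu, env->xsave_buf, env->xsave_buf_len);
     */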

Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-5-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-06 07:54:53 +02:00
David Edmondson
fde7482100 target/i386: Clarify the padding requirements of X86XSaveArea
Replace hard-coded offsets and structure-element sizes with defined
constants or sizeof().

Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-4-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-06 07:54:53 +02:00
David Edmondson
436463b84b target/i386: Consolidate the X86XSaveArea offset checks
Rather than having similar but different checks in cpu.h and kvm.c,
move them all to cpu.h.
Message-Id: <20210705104632.2902400-3-david.edmondson@oracle.com>

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-06 07:54:53 +02:00
David Edmondson
ac7b7cae4e target/i386: Declare constants for XSAVE offsets
Declare and use manifest constants for the XSAVE state component
offsets.

Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-2-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-07-06 07:54:53 +02:00
Peter Maydell
711c0418c8 MIPS patches queue
Merge remote-tracking branch 'remotes/philmd/tags/mips-20210702' into staging

MIPS patches queue

- Extract nanoMIPS, microMIPS, Code Compaction from translate.c
- Allow PCI config accesses smaller than 32-bit on Bonito64 device
- Fix migration of g364fb device on Jazz Magnum
- Fix dp8393x PROM checksum on Jazz Magnum and Quadra 800
- Map the UART devices unconditionally on Jazz Magnum
- Add functional test booting Linux on the Fuloong 2E

# gpg: Signature made Fri 02 Jul 2021 16:36:19 BST
# gpg:                using RSA key FAABE75E12917221DCFD6BB2E3E32C2CDEADC0DE
# gpg: Good signature from "Philippe Mathieu-Daudé (F4BUG) <f4bug@amsat.org>" [full]
# Primary key fingerprint: FAAB E75E 1291 7221 DCFD  6BB2 E3E3 2C2C DEAD C0DE

* remotes/philmd/tags/mips-20210702:
  hw/mips/jazz: Map the UART devices unconditionally
  hw/mips/jazz: specify correct endian for dp8393x device
  hw/m68k/q800: fix PROM checksum and MAC address storage
  qemu/bitops.h: add bitrev8 implementation
  dp8393x: remove onboard PROM containing MAC address and checksum
  hw/m68k/q800: move PROM and checksum calculation from dp8393x device to board
  hw/mips/jazz: move PROM and checksum calculation from dp8393x device to board
  dp8393x: convert to trace-events
  dp8393x: checkpatch fixes
  g364fb: add VMStateDescription for G364SysBusState
  g364fb: use RAM memory region for framebuffer
  tests/acceptance: Test Linux on the Fuloong 2E machine
  hw/pci-host/bonito: Allow PCI config accesses smaller than 32-bit
  hw/pci-host/bonito: Trace PCI config accesses smaller than 32-bit
  target/mips: Extract nanoMIPS ISA translation routines
  target/mips: Extract the microMIPS ISA translation routines
  target/mips: Extract Code Compaction ASE translation routines
  target/mips: Add declarations for generic TCG helpers

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-07-04 14:04:12 +01:00
Peter Maydell
04ea4d3cfd target/arm: Implement MVE shifts by register
Implement the MVE shifts by register, which perform
shifts on a single general-purpose register.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-19-peter.maydell@linaro.org
2021-07-02 11:48:38 +01:00
Peter Maydell
46321d47a9 target/arm: Implement MVE shifts by immediate
Implement the MVE shifts by immediate, which perform shifts
on a single general-purpose register.

These patterns overlap with the long-shift-by-immediates,
so we have to rearrange the grouping a little here.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-18-peter.maydell@linaro.org
2021-07-02 11:48:37 +01:00
Peter Maydell
0aa4b4c358 target/arm: Implement MVE long shifts by register
Implement the MVE long shifts by register, which perform shifts on a
pair of general-purpose registers treated as a 64-bit quantity, with
the shift count in another general-purpose register, which might be
either positive or negative.

Like the long-shifts-by-immediate, these encodings sit in the space
that was previously the UNPREDICTABLE MOVS/ORRS with Rm==13,15.
Because LSLL_rr and ASRL_rr overlap with both MOV_rxri/ORR_rrri and
also with CSEL (as one of the previously-UNPREDICTABLE Rm==13 cases),
we have to move the CSEL pattern into the same decodetree group.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-17-peter.maydell@linaro.org
2021-07-02 11:48:37 +01:00
Peter Maydell
f4ae6c8cbd target/arm: Implement MVE long shifts by immediate
The MVE extension to v8.1M includes some new shift instructions which
sit entirely within the non-coprocessor part of the encoding space
and which operate only on general-purpose registers.  They take up
the space previously occupied by the UNPREDICTABLE MOVS and ORRS
encodings with Rm == 13 or 15.

Implement the long shifts by immediate, which perform shifts on a
pair of general-purpose registers treated as a 64-bit quantity, with
an immediate shift count between 1 and 32.

Awkwardly, because the MOVS and ORRS trans functions do not UNDEF for
the Rm==13,15 case, we need to explicitly emit code to UNDEF for the
cases where v8.1M now requires that.  (Trying to change MOVS and ORRS
is too difficult, because the functions that generate the code are
shared between a dozen different kinds of arithmetic or logical
instruction for all A32, T16 and T32 encodings, and for some insns
and some encodings Rm==13,15 are valid.)

We make the helper functions we need for UQSHLL and SQSHLL take
a 32-bit value which the helper casts to int8_t because we'll need
these helpers also for the shift-by-register insns, where the shift
count might be < 0 or > 32.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-16-peter.maydell@linaro.org
2021-07-02 11:48:37 +01:00
Peter Maydell
d43ebd9dc8 target/arm: Implement MVE VADDLV
Implement the MVE VADDLV insn; this is similar to VADDV, except
that it accumulates 32-bit elements into a 64-bit accumulator
stored in a pair of general-purpose registers.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-15-peter.maydell@linaro.org
2021-07-02 11:48:37 +01:00
Peter Maydell
2e6a4ce0f6 target/arm: Implement MVE VSHLC
Implement the MVE VSHLC insn, which performs a shift left of the
entire vector with carry in bits provided from a general purpose
register and carry out bits written back to that register.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-14-peter.maydell@linaro.org
2021-07-02 11:48:37 +01:00
Peter Maydell
d6f9e011e8 target/arm: Implement MVE saturating narrowing shifts
Implement the MVE saturating shift-right-and-narrow insns
VQSHRN, VQSHRUN, VQRSHRN and VQRSHRUN.

do_srshr() is borrowed from sve_helper.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-13-peter.maydell@linaro.org
2021-07-02 11:48:37 +01:00
Peter Maydell
162e265500 target/arm: Implement MVE VSHRN, VRSHRN
Implement the MVE shift-right-and-narrow insn VSHRN and VRSHRN.

do_urshr() is borrowed from sve_helper.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-12-peter.maydell@linaro.org
2021-07-02 11:48:37 +01:00
Peter Maydell
a78b25fa71 target/arm: Implement MVE VSRI, VSLI
Implement the MVE VSRI and VSLI insns, which perform a
shift-and-insert operation.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-11-peter.maydell@linaro.org
2021-07-02 11:48:37 +01:00
Peter Maydell
c226270703 target/arm: Implement MVE VSHLL
Implement the MVE VSHLL (vector shift left long) insn.  This has two
encodings: the T1 encoding is the usual shift-by-immediate format,
and the T2 encoding is a special case where the shift count is always
equal to the element size.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-10-peter.maydell@linaro.org
2021-07-02 11:48:37 +01:00
Peter Maydell
3394116f47 target/arm: Implement MVE vector shift right by immediate insns
Implement the MVE vector shift right by immediate insns VSHRI and
VRSHRI.  As with Neon, we implement these by using helper functions
which perform left shifts but allow negative shift counts to indicate
right shifts.
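
A hedged illustration of the negative-shift-count convention (the name
and the exact saturation behaviour below are illustrative, not the
actual mve_helper.c code):

    static inline int32_t do_sshl_sketch(int32_t val, int8_t shift)
    {
        int sh = shift;

        if (sh >= 0) {
            /* positive count: left shift (counts >= 32 collapse to 0) */
            return sh < 32 ? (int32_t)((uint32_t)val << sh) : 0;
        }
        sh = -sh;                /* negative count encodes a right shift */
        return sh < 32 ? val >> sh : val >> 31;  /* arithmetic right shift */
    }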

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-9-peter.maydell@linaro.org
2021-07-02 11:48:37 +01:00
Peter Maydell
f9ed61741e target/arm: Implement MVE vector shift left by immediate insns
Implement the MVE shift-vector-left-by-immediate insns VSHL, VQSHL
and VQSHLU.

The size-and-immediate encoding here is the same as Neon, and we
handle it the same way neon-dp.decode does.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-8-peter.maydell@linaro.org
2021-07-02 11:48:37 +01:00
Peter Maydell
eab8413985 target/arm: Implement MVE logical immediate insns
Implement the MVE logical-immediate insns (VMOV, VMVN,
VORR and VBIC). These have essentially the same encoding
as their Neon equivalents, and we implement the decode
in the same way.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-7-peter.maydell@linaro.org
2021-07-02 11:48:36 +01:00
Peter Maydell
e4667a5b5e target/arm: Use dup_const() instead of bitfield_replicate()
Use dup_const() instead of bitfield_replicate() in
disas_simd_mod_imm().

(We can't replace the other use of bitfield_replicate() in this file,
in logic_imm_decode_wmask(), because that location needs to handle 2
and 4 bit elements, which dup_const() cannot.)
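
For reference, a hedged illustration of the dup_const() behaviour
relied on here: it replicates a 1-, 2- or 4-byte element across a
64-bit value (the comment values show the intended replication; treat
the exact usage as illustrative):

    uint64_t b = dup_const(MO_8,  0xab);        /* 0xababababababababULL */
    uint64_t h = dup_const(MO_16, 0x1234);      /* 0x1234123412341234ULL */
    uint64_t w = dup_const(MO_32, 0xdeadbeef);  /* 0xdeadbeefdeadbeefULL */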

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-6-peter.maydell@linaro.org
2021-07-02 11:48:36 +01:00
Peter Maydell
2c0286dba4 target/arm: Use asimd_imm_const for A64 decode
The A64 AdvSIMD modified-immediate grouping uses almost the same
constant encoding that A32 Neon does; reuse asimd_imm_const() (to
which we add the AArch64-specific case for cmode 15 op 1) instead of
reimplementing it all.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-5-peter.maydell@linaro.org
2021-07-02 11:48:36 +01:00
Peter Maydell
dfd66bc0f3 target/arm: Make asimd_imm_const() public
The function asimd_imm_const() in translate-neon.c is an
implementation of the pseudocode AdvSIMDExpandImm(), which we will
also want for MVE.  Move the implementation to translate.c, with a
prototype in translate.h.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-4-peter.maydell@linaro.org
2021-07-02 11:48:36 +01:00
Peter Maydell
303db86fc7 target/arm: Fix bugs in MVE VRMLALDAVH, VRMLSLDAVH
The initial implementation of the MVE VRMLALDAVH and VRMLSLDAVH
insns had some bugs:
 * the 32x32 multiply of elements was being done as 32x32->32,
   not 32x32->64
 * we were incorrectly maintaining the accumulator in its full
   72-bit form across all 4 beats of the insn; in the pseudocode
   it is squashed back into the 64 bits of the RdaHi:RdaLo
   registers after each beat

In particular, fixing the second of these allows us to recast
the implementation to avoid 128-bit arithmetic entirely.

Since the element size here is always 4, we can also drop the
parameterization of ESIZE to make the code a little more readable.

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-3-peter.maydell@linaro.org
2021-07-02 11:48:36 +01:00
Peter Maydell
d59ccc30f6 target/arm: Fix MVE widening/narrowing VLDR/VSTR offset calculation
In do_ldst(), the calculation of the offset needs to be based on the
size of the memory access, not the size of the elements in the
vector.  This meant we were getting it wrong for the widening and
narrowing variants of the various VLDR and VSTR insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-2-peter.maydell@linaro.org
2021-07-02 11:48:36 +01:00
Joe Komlodi
103e7579dd target/arm: Check NaN mode before silencing NaN
If the CPU is running in default NaN mode (FPCR.DN == 1) and we execute
FRSQRTE, FRECPE, or FRECPX with a signaling NaN, parts_silence_nan_frac() will
assert due to fpst->default_nan_mode being set.

To avoid this, we check to see what NaN mode we're running in before we call
floatxx_silence_nan().
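
A hedged sketch of the guard described above, following the pattern
used by the Arm reciprocal-estimate helpers (simplified, not the
verbatim fix):

    float32 recpe_f32_sketch(float32 input, float_status *fpst)
    {
        float32 f32 = float32_squash_input_denormal(input, fpst);

        if (float32_is_signaling_nan(f32, fpst)) {
            float_raise(float_flag_invalid, fpst);
            if (!fpst->default_nan_mode) {
                /* Only silence the NaN when not in default-NaN mode;
                 * otherwise parts_silence_nan_frac() would assert. */
                f32 = float32_silence_nan(f32, fpst);
            }
        }
        if (fpst->default_nan_mode && float32_is_any_nan(f32)) {
            return float32_default_nan(fpst);
        }
        /* ... rest of the reciprocal estimate ... */
        return f32;
    }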

Signed-off-by: Joe Komlodi <joe.komlodi@xilinx.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1624662174-175828-2-git-send-email-joe.komlodi@xilinx.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-07-02 11:48:36 +01:00
Philippe Mathieu-Daudé
3f178b8d8c target/mips: Extract nanoMIPS ISA translation routines
Extract 4900 lines from the huge translate.c to a new file,
'nanomips_translate.c.inc'. As there are too many interdependencies
we don't compile it as a separate object, but keep including it in
the big translate.o. This improves code maintainability.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201120210844.2625602-13-f4bug@amsat.org>
2021-07-02 10:41:16 +02:00
Philippe Mathieu-Daudé
bf52c45a89 target/mips: Extract the microMIPS ISA translation routines
Extract 3200+ lines from the huge translate.c to a new file,
'micromips_translate.c.inc'. As there are too many interdependencies
we don't compile it as a separate object, but keep including it in
the big translate.o. This improves code maintainability.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201120210844.2625602-12-f4bug@amsat.org>
2021-07-02 10:41:15 +02:00
Philippe Mathieu-Daudé
3230bad963 target/mips: Extract Code Compaction ASE translation routines
Extract 1100+ lines from the huge translate.c to a new file,
'mips16e_translate.c.inc'. As there are too many interdependencies
we don't compile it as a separate object, but keep including it in
the big translate.o. This improves code maintainability.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201120210844.2625602-10-f4bug@amsat.org>
2021-07-02 10:41:15 +02:00
Philippe Mathieu-Daudé
d507663151 target/mips: Add declarations for generic TCG helpers
We want to extract the microMIPS ISA and Code Compaction ASE into
new compilation units.

We will first extract this code as included source files (.c.inc),
then make them new compilation units afterward.

The following methods are going to be used externally:

  micromips_translate.c.inc:1778:   gen_ldxs(ctx, rs, rt, rd);
  micromips_translate.c.inc:1806:   gen_align(ctx, 32, rd, rs, ...
  micromips_translate.c.inc:2859:   gen_addiupc(ctx, reg, offset, ...
  mips16e_translate.c.inc:444:      gen_addiupc(ctx, ry, offset, ...

To avoid too much code churn, it is simpler to declare these
prototypes in "translate.h" now.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210617174907.2904067-2-f4bug@amsat.org>
2021-07-02 10:41:15 +02:00
Peter Maydell
67e25eed97 TranslatorOps conversion for target/avr
Merge remote-tracking branch 'remotes/rth-gitlab/tags/pull-tcg-20210629' into staging

TranslatorOps conversion for target/avr
TranslatorOps conversion for target/cris
TranslatorOps conversion for target/nios2
Simple vector operations on TCGv_i32
Host signal fixes for *BSD
Improvements to tcg bswap operations

# gpg: Signature made Tue 29 Jun 2021 19:51:03 BST
# gpg:                using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg:                issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A  05C0 64DF 38E8 AF7E 215F

* remotes/rth-gitlab/tags/pull-tcg-20210629: (63 commits)
  tcg/riscv: Remove MO_BSWAP handling
  tcg/aarch64: Unset TCG_TARGET_HAS_MEMORY_BSWAP
  tcg/arm: Unset TCG_TARGET_HAS_MEMORY_BSWAP
  target/mips: Fix gen_mxu_s32ldd_s32lddr
  target/sh4: Improve swap.b translation
  target/i386: Improve bswap translation
  target/arm: Improve REVSH
  target/arm: Improve vector REV
  target/arm: Improve REV32
  tcg: Make use of bswap flags in tcg_gen_qemu_st_*
  tcg: Make use of bswap flags in tcg_gen_qemu_ld_*
  tcg: Add flags argument to tcg_gen_bswap16_*, tcg_gen_bswap32_i64
  tcg: Handle new bswap flags during optimize
  tcg/tci: Support bswap flags
  tcg/mips: Support bswap flags in tcg_out_bswap32
  tcg/mips: Support bswap flags in tcg_out_bswap16
  tcg/s390: Support bswap flags
  tcg/ppc: Use power10 byte-reverse instructions
  tcg/ppc: Support bswap flags
  tcg/ppc: Split out tcg_out_bswap64
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-07-01 20:29:33 +01:00
Richard Henderson
92ecfab50e target/mips: Fix gen_mxu_s32ldd_s32lddr
There were two bugs here: (1) the required endianness was
not present in the MemOp, and (2) we were not providing a
zero-extended input to the bswap as semantics required.

The best fix is to fold the bswap into the memory operation,
producing the desired result directly.
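
A hedged sketch of "folding the bswap into the memory operation":
encode the reversed endianness in the MemOp passed to the load instead
of loading and swapping afterwards (fragment; surrounding decode code
omitted, and 'reversed' is an illustrative flag, not the patch's
variable):

    MemOp mop = MO_TESL;      /* 32-bit, target-endian, sign-extended */

    if (reversed) {
        mop ^= MO_BSWAP;      /* s32lddr: byte-reversed load */
    }
    tcg_gen_qemu_ld_tl(t0, addr, ctx->mem_idx, mop);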

Acked-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
Richard Henderson
b983a0e172 target/sh4: Improve swap.b translation
Remove TCG_BSWAP_IZ and the preceding zero-extension.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
Richard Henderson
94fdf98721 target/i386: Improve bswap translation
Use a break instead of an ifdefed else.
There's no need to move the values through s->T0.
Remove TCG_BSWAP_IZ and the preceding zero-extension.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00
Richard Henderson
ebdd503d45 target/arm: Improve REVSH
The new bswap flags can implement the semantics exactly.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-06-29 10:04:57 -07:00