Merge remote-tracking branch 'remotes/yongbok/tags/mips-20170721' into staging
MIPS patches 2017-07-21
Changes:
* Add Enhanced Virtual Addressing (EVA) support
# gpg: Signature made Fri 21 Jul 2017 03:25:15 BST
# gpg: using RSA key 0x2238EB86D5F797C2
# gpg: Good signature from "Yongbok Kim <yongbok.kim@imgtec.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 8600 4CF5 3415 A5D9 4CFA 2B5C 2238 EB86 D5F7 97C2
* remotes/yongbok/tags/mips-20170721:
target/mips: Enable CP0_EBase.WG on MIPS64 CPUs
target/mips: Add EVA support to P5600
target/mips: Implement segmentation control
target/mips: Add segmentation control registers
target/mips: Add an MMU mode for ERL
target/mips: Abstract mmu_idx from hflags
target/mips: Check memory permissions with mem_idx
target/mips: Decode microMIPS EVA load & store instructions
target/mips: Decode MIPS32 EVA load & store instructions
target/mips: Prepare loads/stores for EVA
target/mips: Add CP0_Ebase.WG (write gate) support
target/mips: Weaken TLB flush on UX,SX,KX,ASID changes
target/mips: Fix TLBWI shadow flush for EHINV,XI,RI
target/mips: Fix MIPS64 MFC0 UserLocal on BE host
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
On NetBSD, where tolower() and toupper() are implemented using an
array lookup, the compiler warns if you pass a plain 'char'
to these functions:
gdbstub.c:914:13: warning: array subscript has type 'char'
This reflects the fact that toupper() and tolower() give
undefined behaviour if they are passed a value that isn't
a valid 'unsigned char' or EOF.
We have qemu_tolower() and qemu_toupper() to avoid this problem;
use them.
(The use in scsi-generic.c does not trigger the warning because
it passes a uint8_t; we switch it anyway, for consistency.)
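As a minimal, self-contained illustration of the undefined behaviour being
avoided (the wrapper below is a stand-in with the same shape as
qemu_tolower(), not its actual definition):

    #include <ctype.h>
    #include <stdio.h>

    /* Stand-in for qemu_tolower(): cast to unsigned char before the lookup,
     * so the argument is always a valid 'unsigned char' value. */
    static int safe_tolower(int c)
    {
        return tolower((unsigned char)c);
    }

    int main(void)
    {
        char c = (char)0xC9;   /* negative where plain 'char' is signed */
        /* tolower(c) here would be undefined behaviour (and on NetBSD an
         * out-of-bounds array lookup); the cast makes it well-defined. */
        printf("%d -> %d\n", c, safe_tolower(c));
        return 0;
    }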
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> for the s390 part.
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-id: 1500568290-7966-1-git-send-email-peter.maydell@linaro.org
Enable the CP0_EBase.WG (write gate) bit on the I6400 and MIPS64R2-generic
CPUs. This allows 64-bit guests to run KVM itself, which uses
CP0_EBase.WG to point CP0_EBase at XKPhys.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Add the Enhanced Virtual Addressing (EVA) feature to the P5600 core
configuration, along with the related Segmentation Control (SC) feature
and writable CP0_EBase.WG bit.
This allows it to run Malta EVA kernels.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Implement the optional segmentation control feature in the virtual to
physical address translation code.
The fixed legacy segment and xkphys handling is replaced with a dynamic
layout based on the segmentation control registers (which should be set
up even when the feature is not exposed to the guest).
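In outline, the translation consults per-segment configuration data instead
of hard-coded kseg ranges; the sketch below only models the idea (the table
layout, granularity and values are assumptions, not QEMU's data structures):

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint32_t base;      /* virtual base of the legacy segment */
        uint32_t size;      /* segment size in bytes */
        bool     mapped;    /* true: translate through the TLB */
        uint32_t pa_base;   /* physical base when unmapped */
    } SegConfigSketch;

    /* Roughly the standard legacy layout, one table entry per segment. */
    static const SegConfigSketch segs[] = {
        { 0x00000000, 0x80000000, true,  0          },  /* useg  */
        { 0x80000000, 0x20000000, false, 0x00000000 },  /* kseg0 */
        { 0xa0000000, 0x20000000, false, 0x00000000 },  /* kseg1 */
        { 0xc0000000, 0x20000000, true,  0          },  /* sseg  */
        { 0xe0000000, 0x20000000, true,  0          },  /* kseg3 */
    };

    /* Translate an unmapped address by table lookup rather than by testing
     * each fixed segment range in turn. */
    static bool translate_unmapped(uint32_t va, uint32_t *pa)
    {
        for (unsigned i = 0; i < sizeof(segs) / sizeof(segs[0]); i++) {
            if (va - segs[i].base < segs[i].size) {
                if (segs[i].mapped) {
                    return false;   /* needs a TLB lookup instead */
                }
                *pa = segs[i].pa_base + (va - segs[i].base);
                return true;
            }
        }
        return false;
    }

    int main(void)
    {
        uint32_t pa;
        if (translate_unmapped(0xa000f000u, &pa)) {   /* a kseg1 address */
            printf("pa = 0x%08" PRIx32 "\n", pa);
        }
        return 0;
    }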
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
[yongbok.kim@imgtec.com: cosmetic changes]
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
The optional segmentation control registers CP0_SegCtl0, CP0_SegCtl1 &
CP0_SegCtl2 control the behaviour and required privilege of the legacy
virtual memory segments.
Add them to the CP0 interface so they can be read and written when
CP0_Config3.SC=1, and initialise them to describe the standard legacy
layout so they can be used in future patches regardless of whether they
are exposed to the guest.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
The segmentation control feature allows a legacy memory segment to
become unmapped, uncached at error level (according to CP0_Status.ERL),
and in fact the user segment is already treated in this way by QEMU.
Add a new MMU mode for this state so that QEMU's mappings don't persist
between ERL=0 and ERL=1.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
[yongbok.kim@imgtec.com: cosmetic changes]
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
The MIPS mmu_idx is sometimes calculated from hflags without an env
pointer available as cpu_mmu_index() requires.
Create a common hflags_mmu_index() for the purpose of this calculation
which can operate on any hflags, not just with an env pointer, and
update cpu_mmu_index() itself and gen_intermediate_code() to use it.
Also update debug_post_eret() and helper_mtc0_status() to log the MMU
mode with the status change (SM, UM, or nothing for kernel mode) based
on cpu_mmu_index() rather than directly testing hflags.
This will also allow the logic to be more easily updated when a new MMU
mode is added.
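A self-contained sketch of this split, also folding in the ERL MMU mode
added earlier in the series (the constants and the struct below are
stand-ins, not QEMU's definitions):

    #include <stdint.h>
    #include <stdio.h>

    #define HFLAG_KSU 0x3   /* 0 = kernel, 1 = supervisor, 2 = user */
    #define HFLAG_ERL 0x4   /* assumed bit: Status.ERL mirrored into hflags */

    typedef struct { uint32_t hflags; } CPUMIPSStateSketch;

    /* Operates on bare hflags, so the translator can call it without an env. */
    static int hflags_mmu_index(uint32_t hflags)
    {
        if (hflags & HFLAG_ERL) {
            return 3;                  /* dedicated ERL MMU mode */
        }
        return hflags & HFLAG_KSU;     /* kernel / supervisor / user */
    }

    /* cpu_mmu_index() becomes a thin wrapper around the shared helper. */
    static int cpu_mmu_index_sketch(CPUMIPSStateSketch *env)
    {
        return hflags_mmu_index(env->hflags);
    }

    int main(void)
    {
        CPUMIPSStateSketch cpu = { .hflags = HFLAG_ERL | 2 };
        printf("mmu_idx = %d\n", cpu_mmu_index_sketch(&cpu));   /* prints 3 */
        return 0;
    }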
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
When performing virtual to physical address translation, check the
required privilege level based on the mem_idx rather than the mode in
the hflags. This will allow EVA loads & stores to operate safely only on
user memory from kernel mode.
For the cases where the mmu_idx doesn't need to be overridden
(mips_cpu_get_phys_page_debug() and cpu_mips_translate_address()), we
calculate the required mmu_idx using cpu_mmu_index(). Note that this
only tests the MIPS_HFLAG_KSU bits rather than MIPS_HFLAG_MODE, so we
don't test the debug mode hflag MIPS_HFLAG_DM any longer. This should be
fine as get_physical_address() only compares against MIPS_HFLAG_UM and
MIPS_HFLAG_SM, neither of which should get set by compute_hflags() when
MIPS_HFLAG_DM is set.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Implement decoding of microMIPS EVA load and store instruction groups in
the POOL31C pool. These use the same gen_ld(), gen_st(), gen_st_cond()
helpers as the MIPS32 decoding, passing the equivalent MIPS32 opcodes as
opc.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Implement decoding of MIPS32 EVA loads and stores. These access the user
address space from kernel mode when implemented, so for each instruction
we need to check that EVA is available from Config5.EVA & check for
sufficient COP0 privilege (with the new check_eva()), and then override
the mem_idx used for the operation.
Unfortunately some Loongson 2E instructions use overlapping encodings,
so we must be careful not to prevent those from being decoded when EVA
is absent.
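The decode-time guard amounts to something like the sketch below; the flag
and field names are stand-ins (not QEMU's DisasContext or check_eva()), and
in the real decoder a failed privilege check raises an exception rather than
merely declining:

    #include <stdbool.h>
    #include <stdio.h>

    /* Assumed per-translation state, filled in from CP0 elsewhere. */
    typedef struct {
        bool config5_eva;   /* Config5.EVA: EVA instructions implemented */
        bool kernel_mode;   /* or otherwise sufficient COP0 privilege */
        int  mem_idx;       /* index normally used for loads/stores */
        int  user_mem_idx;  /* index for the user address map */
    } DecodeCtxSketch;

    /* Returns the mem_idx to use for the access, or -1 when the encoding
     * must not be decoded as EVA (so an overlapping Loongson 2E opcode can
     * still be tried, or a Reserved Instruction raised). */
    static int decode_eva_guard(const DecodeCtxSketch *ctx)
    {
        if (!ctx->config5_eva || !ctx->kernel_mode) {
            return -1;
        }
        return ctx->user_mem_idx;      /* override the usual mem_idx */
    }

    int main(void)
    {
        DecodeCtxSketch ctx = { true, true, 0 /* kernel */, 2 /* user */ };
        printf("mem_idx for EVA access: %d\n", decode_eva_guard(&ctx));
        return 0;
    }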
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
EVA load and store instructions access the user mode address map, so
they need to use mem_idx of MIPS_HFLAG_UM. Update the various utility
functions to allow mem_idx to be more easily overridden from the
decoding logic.
Specifically we add a mem_idx argument to the op_ld/st_* helpers used
for atomics, and a mem_idx local variable to gen_ld(), gen_st(), and
gen_st_cond().
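In TCG terms this boils down to threading the index through explicitly,
roughly as in this fragment (the function names are made up for illustration
and the fragment assumes QEMU's TCG headers, so it is not standalone):

    /* Before: the helper always used the index implied by the current mode. */
    static void gen_load_word(DisasContext *ctx, TCGv dst, TCGv addr)
    {
        tcg_gen_qemu_ld_tl(dst, addr, ctx->mem_idx, MO_TESL);
    }

    /* After: callers pass mem_idx, so EVA decoding can hand in the user-mode
     * index while all other callers keep passing ctx->mem_idx. */
    static void gen_load_word_idx(TCGv dst, TCGv addr, int mem_idx)
    {
        tcg_gen_qemu_ld_tl(dst, addr, mem_idx, MO_TESL);
    }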
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Add support for the CP0_EBase.WG bit, which allows upper bits to be
written (bits 31:30 on MIPS32, or bits 63:30 on MIPS64), along with the
CP0_Config5.CV bit to control whether the exception vector for Cache
Error exceptions is forced into KSeg1.
This is necessary on MIPS32 to support Segmentation Control and Enhanced
Virtual Addressing (EVA) extensions (where KSeg1 addresses may not
represent an unmapped uncached segment).
It is also useful on MIPS64 to allow the exception base to reside in
XKPhys, and possibly out of range of KSEG0 and KSEG1.
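A sketch of the resulting write masking (assuming the usual EBase[29:12]
vector-base field; the mask values are illustrative and the CP0_Config5.CV
handling is omitted):

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Bits 29:12 are always writable; with WG set, bits 63:30 (31:30 on
     * MIPS32) become writable as well. */
    static uint64_t write_ebase(uint64_t old, uint64_t val, bool wg)
    {
        uint64_t mask = 0x3ffff000ull;
        if (wg) {
            mask |= 0xffffffffc0000000ull;
        }
        return (old & ~mask) | (val & mask);
    }

    int main(void)
    {
        uint64_t ebase  = 0xffffffff80000000ull;   /* KSEG0-style reset value */
        uint64_t wanted = 0x9000000080100000ull;   /* an XKPhys base */
        printf("WG=0: 0x%016" PRIx64 "\n", write_ebase(ebase, wanted, false));
        printf("WG=1: 0x%016" PRIx64 "\n", write_ebase(ebase, wanted, true));
        return 0;
    }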
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
[yongbok.kim@imgtec.com: minor changes]
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
There is no need to invalidate any shadow TLB entries when the ASID
changes or when access to one of the 64-bit segments has been disabled,
since neither event reveals to software whether any TLB entries have
been evicted into the shadow half of the TLB.
Therefore weaken the TLB flushes in these cases to only flush the QEMU
TLB.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Tested-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Writing specific TLB entries with TLBWI flushes shadow TLB entries
unless an existing entry is having its access permissions upgraded. This
is necessary as software would from then on expect the previous mapping
in that entry to no longer be in effect (even if QEMU has quietly
evicted it to the shadow TLB on a TLBWR).
However it won't do this if only EHINV, XI, or RI bits have been set,
even if that results in a reduction of permissions, so add the necessary
checks to invoke the flush when these bits are set.
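The added check has roughly this shape (field and function names are
assumptions, not the QEMU code): any of EHINV, XI or RI newly becoming set
can only take rights away, so it must count as a permissions downgrade and
trigger the shadow flush.

    #include <stdbool.h>
    #include <stdio.h>

    /* Simplified TLB entry: only the bits relevant to the check. */
    typedef struct {
        bool V0, D0, XI0, RI0;   /* even page: valid, dirty, XI, RI */
        bool V1, D1, XI1, RI1;   /* odd page */
        bool EHINV;              /* entry invalidated via EHINV */
    } TLBEntrySketch;

    /* True when the rewrite only upgrades permissions, so the shadow TLB may
     * be left alone; any newly set inhibit/invalidate bit forces a flush. */
    static bool permissions_upgrade_only(const TLBEntrySketch *before,
                                         const TLBEntrySketch *after)
    {
        return (!before->V0 || after->V0) && (!before->D0 || after->D0) &&
               (!before->V1 || after->V1) && (!before->D1 || after->D1) &&
               (before->XI0 || !after->XI0) && (before->RI0 || !after->RI0) &&
               (before->XI1 || !after->XI1) && (before->RI1 || !after->RI1) &&
               (before->EHINV || !after->EHINV);
    }

    int main(void)
    {
        TLBEntrySketch before = { .V0 = true }, after = before;
        after.RI0 = true;   /* read-inhibit newly set: a downgrade */
        printf("flush shadow: %s\n",
               permissions_upgrade_only(&before, &after) ? "no" : "yes");
        return 0;
    }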
Fixes: 2fb58b7374 ("target-mips: add RI and XI fields to TLB entry")
Fixes: 9456c2fbcd ("target-mips: add TLBINV support")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Tested-by: Yongbok Kim <yongbok.kim@imgtec.com>
[yongbok.kim@imgtec.com: cosmetic changes]
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Using MFC0 to read CP0_UserLocal uses tcg_gen_ld32s_tl, however
CP0_UserLocal is a target_ulong. On a big endian host with a MIPS64
target this reads and sign extends the more significant half of the
64-bit register.
Fix this by using ld_tl to load the whole target_ulong and ext32s_tl to
sign extend it, as done for various other target_ulong COP0 registers.
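A host-side illustration of the pitfall in plain C, independent of TCG:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        uint64_t user_local = 0x00000001deadbeefULL;   /* 64-bit COP0 register */
        uint32_t low;

        /* A 32-bit load at the register's start address: correct on a
         * little-endian host, but it picks up the high half on big-endian. */
        memcpy(&low, &user_local, sizeof(low));
        printf("32-bit load at offset 0: 0x%08" PRIx32 "\n", low);

        /* The fix: load the whole value, then sign-extend its low 32 bits
         * explicitly, which behaves the same on either host. */
        int64_t fixed = (int32_t)(uint32_t)user_local;
        printf("ld_tl + ext32s_tl:       0x%016" PRIx64 "\n", (uint64_t)fixed);
        return 0;
    }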
Fixes: d279279e2b ("target-mips: implement UserLocal Register")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Petar Jovanovic <petar.jovanovic@imgtec.com>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Needed to implement a target-agnostic gen_intermediate_code()
in the future.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Message-Id: <150002025498.22386.18051908483085660588.stgit@frigg.lan>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Suggested-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20170718045540.16322-10-f4bug@amsat.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Suggested-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20170718045540.16322-9-f4bug@amsat.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Done with the Coccinelle semantic patch
scripts/coccinelle/tcg_gen_extract.cocci.
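The rewrite the script performs looks roughly like this (operands are
illustrative; the fragment assumes QEMU's TCG headers and is not standalone):

    /* before: shift, then mask the low bits */
    tcg_gen_shri_tl(t0, reg, 8);
    tcg_gen_andi_tl(t0, t0, 0xff);

    /* after: a single extract of an 8-bit field at bit position 8 */
    tcg_gen_extract_tl(t0, reg, 8, 8);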
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Done with the Coccinelle semantic patch
scripts/coccinelle/tcg_gen_extract.cocci.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20170718045540.16322-6-f4bug@amsat.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Done with the Coccinelle semantic patch
scripts/coccinelle/tcg_gen_extract.cocci.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Acked-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170718045540.16322-5-f4bug@amsat.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Use the same mask to avoid having to load two different constants, as
suggested by Richard Henderson.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170516230159.4195-2-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
It is much shorter to reverse all 4 half-words in parallel
than to extract, reverse, and deposit each in turn.
Suggested-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The flags are arranged such that we can manipulate them either as a
whole or as individual bytes. The computation within
cpu_get_tb_cpu_state is now reduced to a single load and mask.
Tested-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This value is constant for the cpu and does not need
to be stored within the TB.
Tested-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Now that we have a do_illegal label, use goto in order
to self-document the forcing of the exception.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170718200255.31647-22-rth@twiddle.net>
[aurel32: fix whitespace issues]
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
We do not need to emit N copies of raising an exception.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170718200255.31647-21-rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
We do not need to emit N copies of raising an exception.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170718200255.31647-20-rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
We do not need to emit N copies of raising an exception.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170718200255.31647-19-rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
We do not need to form full 64-bit quantities in order to perform
the move. This reduces code expansion on 64-bit hosts.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170718200255.31647-18-rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
This enforces proper alignment and makes the register update
more natural. Note that there is a more serious bug fix for
fmov {DX}Rn,@(R0,Rn) to use a store instead of a load.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170718200255.31647-17-rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Also add a debugging assert that we did signal illegal opc
for odd double-precision registers.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170718200255.31647-16-rth@twiddle.net>
[aurel32: fix whitespace issues]
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Compute which register bank to use once at the start of translation.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170718200255.31647-14-rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
We were treating FREG as an index and REG as a TCGv.
Making FREG return a TCGv is both less confusing and
a step toward cleaner banking of cpu_fregs.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170718200255.31647-12-rth@twiddle.net>
[aurel32: fix whitespace issues]
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>