Commit Graph

12 Commits

Author SHA1 Message Date
Philippe Mathieu-Daudé
26aa3d9aec mips: introduce internal.h and cleanup cpu.h
No logical change, only code movement (plus a comment typo fix).

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Igor Mammedov <imammedo@redhat.com>
Tested-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
2017-09-21 13:24:34 +01:00
Leon Alrae
2d1847ec1c target-mips: apply CP0.PageMask before writing into TLB entry
The bits covered by PageMask_Mask have to be masked out of PFN0 and PFN1.

Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
[Yongbok Kim:
  Added commit message]
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
2017-08-02 22:18:11 +01:00
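
A minimal C sketch of the masking this change describes (illustrative
only, not the QEMU helper; the function name and field extraction are
simplified). CP0.PageMask keeps its mask in bits 28:13, so shifting it
right by 13 lines it up with the PFN bits that fall inside a large page:

    #include <stdint.h>

    /* Clear the PFN bits covered by PageMask before storing the entry,
     * so that pages larger than 4 KiB translate to the right physical
     * base address. */
    static uint64_t tlb_pfn_from_entrylo(uint64_t entrylo, uint32_t pagemask)
    {
        uint64_t pfn  = (entrylo >> 6) & 0xffffffULL;  /* EntryLo PFN field */
        uint64_t mask = pagemask >> 13;                /* PFN bits inside the page */

        return (pfn & ~mask) << 12;                    /* physical page base */
    }
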
James Hogan
cec56a733d target/mips: Add segmentation control registers
The optional segmentation control registers CP0_SegCtl0, CP0_SegCtl1 &
CP0_SegCtl2 control the behaviour and required privilege of the legacy
virtual memory segments.

Add them to the CP0 interface so they can be read and written when
CP0_Config3.SC=1, and initialise them to describe the standard legacy
layout so they can be used in future patches regardless of whether they
are exposed to the guest.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
2017-07-20 22:42:26 +01:00
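
A rough sketch of the register interface this describes, using
hypothetical structure and function names rather than the QEMU code;
reading as zero while Config3.SC=0 is an assumption. SegCtl0..2 sit in
CP0 register 5, selects 2 to 4:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct SegCtlState {
        uint64_t segctl[3];   /* SegCtl0, SegCtl1, SegCtl2 (CP0 5.2, 5.3, 5.4) */
        bool     config3_sc;  /* CP0_Config3.SC: segmentation control present */
    } SegCtlState;

    /* MFC0/MTC0 accessors, gated on Config3.SC as described above. */
    static uint64_t segctl_read(const SegCtlState *s, int sel)  /* sel = 2..4 */
    {
        return s->config3_sc ? s->segctl[sel - 2] : 0;
    }

    static void segctl_write(SegCtlState *s, int sel, uint64_t val)
    {
        if (s->config3_sc) {
            s->segctl[sel - 2] = val;
        }
    }
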
James Hogan
42c86612d5 target/mips: Add an MMU mode for ERL
The segmentation control feature allows a legacy memory segment to
become unmapped and uncached at error level (according to
CP0_Status.ERL); in fact, the user segment is already treated this way
by QEMU.

Add a new MMU mode for this state so that QEMU's mappings don't persist
between ERL=0 and ERL=1.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
[yongbok.kim@imgtec.com:
  cosmetic changes]
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
2017-07-20 22:42:26 +01:00
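
The idea, sketched with illustrative index values (QEMU's actual
numbering may differ): error level gets its own softmmu TLB index, so
translations cached while ERL=0 can never satisfy an access made while
ERL=1, when the legacy segments are unmapped and uncached:

    /* One softmmu TLB per mode; adding an ERL index keeps its
     * unmapped-uncached view separate from the normal kernel mappings. */
    enum mips_mmu_index_sketch {
        MMU_KERNEL_IDX = 0,
        MMU_SUPER_IDX  = 1,
        MMU_USER_IDX   = 2,
        MMU_ERL_IDX    = 3,   /* new: CP0_Status.ERL = 1 */
    };

    /* Pick the index from the current mode bits (parameter names are
     * placeholders, not QEMU identifiers). */
    static int mmu_index_sketch(int ksu_mode /* 0=kernel,1=super,2=user */,
                                int status_erl)
    {
        return status_erl ? MMU_ERL_IDX : ksu_mode;
    }
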
James Hogan
b0fc600322 target/mips: Abstract mmu_idx from hflags
The MIPS mmu_idx is sometimes calculated from hflags without the env
pointer that cpu_mmu_index() requires.

Create a common hflags_mmu_index() for this calculation, which can
operate on any hflags value rather than requiring an env pointer, and
update cpu_mmu_index() itself and gen_intermediate_code() to use it.

Also update debug_post_eret() and helper_mtc0_status() to log the MMU
mode with the status change (SM, UM, or nothing for kernel mode) based
on cpu_mmu_index() rather than directly testing hflags.

This will also allow the logic to be more easily updated when a new MMU
mode is added.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
2017-07-20 22:42:26 +01:00
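
Roughly what the refactoring looks like (simplified; treating
MIPS_HFLAG_KSU as the low two mode bits is an assumption about the flag
layout, and the ERL index from the entry above is left out). The point
is that the index is derived from a plain hflags value, so the
translator can compute it from the copy of hflags it already carries,
without an env pointer:

    #include <stdint.h>

    #define MIPS_HFLAG_KSU 0x3  /* assumed: low bits = 0 kernel, 1 super, 2 user */

    /* Works on any hflags value, e.g. the copy held by the translator. */
    static inline int hflags_mmu_index(uint32_t hflags)
    {
        return hflags & MIPS_HFLAG_KSU;
    }

    /* cpu_mmu_index() becomes a thin wrapper over the same logic. */
    static inline int cpu_mmu_index_sketch(uint32_t env_hflags)
    {
        return hflags_mmu_index(env_hflags);
    }
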
James Hogan
74dbf824a1 target/mips: Add CP0_EBase.WG (write gate) support
Add support for the CP0_EBase.WG bit, which allows upper bits to be
written (bits 31:30 on MIPS32, or bits 63:30 on MIPS64), along with the
CP0_Config5.CV bit to control whether the exception vector for Cache
Error exceptions is forced into KSeg1.

This is necessary on MIPS32 to support Segmentation Control and Enhanced
Virtual Addressing (EVA) extensions (where KSeg1 addresses may not
represent an unmapped uncached segment).

It is also useful on MIPS64 to allow the exception base to reside in
XKPhys, and possibly out of range of KSEG0 and KSEG1.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
[yongbok.kim@imgtec.com:
  minor changes]
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
2017-07-20 22:42:26 +01:00
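
A hedged sketch of the MIPS32 write behaviour being added, not the
actual QEMU helper (the function name is made up). EBase holds WG in
bit 11; without WG only bits 29:12 are writable and bits 31:30 stay at
0b10 (a KSEG0 vector), while a write with WG=1 also updates bits 31:30:

    #include <stdint.h>

    /* Only models the unlock path: a write with WG=1 may change the
     * upper exception-base bits; what happens when WG is cleared again
     * is left out of this sketch. */
    static uint32_t mtc0_ebase32_sketch(uint32_t old, uint32_t val)
    {
        uint32_t mask = 0x3ffff000u;          /* bits 29:12: always writable */

        if (val & (1u << 11)) {               /* EBase.WG (bit 11) set in the write */
            mask |= 0xc0000000u | (1u << 11); /* bits 31:30 and WG become writable */
        }
        return (old & ~mask) | (val & mask);
    }
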
James Hogan
9658e4c342 target/mips: Weaken TLB flush on UX,SX,KX,ASID changes
There is no need to invalidate any shadow TLB entries when the ASID
changes or when access to one of the 64-bit segments is disabled, since
neither event reveals to software whether any TLB entries have been
evicted into the shadow half of the TLB.

Therefore, weaken the TLB flushes in these cases to flush only the QEMU
TLB.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Tested-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
2017-07-20 22:42:26 +01:00
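
In QEMU terms the weakening amounts to something like the sketch below
(the wrapper name is invented; tlb_flush() is the generic per-vCPU
softmmu flush). Only QEMU's cached host translations are dropped; the
guest-visible TLB, including entries evicted into the shadow half, is
left untouched because the guest cannot observe the difference:

    #include "qemu/osdep.h"
    #include "cpu.h"
    #include "exec/exec-all.h"

    /* Called on ASID changes and on disabling a 64-bit segment (UX/SX/KX). */
    static void weak_tlb_flush_sketch(CPUState *cs)
    {
        tlb_flush(cs);   /* QEMU softmmu TLB only; shadow guest TLB untouched */
    }
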
James Hogan
eff6ff9431 target/mips: Fix TLBWI shadow flush for EHINV,XI,RI
Writing a specific TLB entry with TLBWI flushes the shadow TLB entries,
unless an existing entry is only having its access permissions
upgraded. This is necessary because software would from then on expect
the previous mapping in that entry to no longer be in effect (even if
QEMU has quietly evicted it to the shadow TLB on a TLBWR).

However, it won't do this if only the EHINV, XI, or RI bits have been
set, even though that reduces permissions, so add the necessary checks
to invoke the flush when these bits are newly set.

Fixes: 2fb58b7374 ("target-mips: add RI and XI fields to TLB entry")
Fixes: 9456c2fbcd ("target-mips: add TLBINV support")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Tested-by: Yongbok Kim <yongbok.kim@imgtec.com>
[yongbok.kim@imgtec.com:
  cosmetic changes]
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
2017-07-20 22:42:26 +01:00
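
The added condition, sketched with illustrative field names (QEMU's r4k
TLB structure differs in detail): the shadow flush may only be skipped
when the rewrite takes no permission away, and newly set EHINV, RI or
XI bits do take permissions away:

    #include <stdbool.h>

    struct tlb_entry_sketch {
        bool EHINV;   /* entry explicitly invalidated */
        bool RI;      /* read inhibit */
        bool XI;      /* execute inhibit */
    };

    /* True if the TLBWI write is not a pure permission upgrade, in
     * which case the shadow TLB entries must be flushed as well.
     * (The real check also compares VPN/ASID/V/D/page-size changes.) */
    static bool tlbwi_needs_shadow_flush(const struct tlb_entry_sketch *old_e,
                                         const struct tlb_entry_sketch *new_e)
    {
        return (!old_e->EHINV && new_e->EHINV) ||
               (!old_e->RI    && new_e->RI)    ||
               (!old_e->XI    && new_e->XI);
    }
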
Yongbok Kim
d394698d73 target/mips: hold BQL for timer interrupts
Hold the BQL when accessing the timer, which can raise interrupts.

Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
2017-03-09 10:41:48 +00:00
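
One representative accessor, sketched against the QEMU API of the
period (helper_mfc0_count() and cpu_mips_get_count() are existing MIPS
helpers; which accessors the full change wraps is not spelled out
here). The CP0_Count read consults a QEMU timer whose callback can
raise the timer interrupt, so the TCG helper takes the Big QEMU Lock
around the access:

    #include "qemu/osdep.h"
    #include "qemu/main-loop.h"
    #include "cpu.h"
    #include "exec/helper-proto.h"

    /* Reading CP0_Count touches timer state that can raise an
     * interrupt, so hold the BQL for the duration of the access. */
    target_ulong helper_mfc0_count(CPUMIPSState *env)
    {
        int32_t count;

        qemu_mutex_lock_iothread();
        count = (int32_t)cpu_mips_get_count(env);
        qemu_mutex_unlock_iothread();
        return count;
    }
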
Alex Bennée
d10eb08f5d cputlb: drop flush_global flag from tlb_flush
We have never had the concept of global TLB entries, which would avoid
the flush, so we never actually use this flag. Drop it and make clear
that tlb_flush is the sledgehammer it has always been.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
[DG: ppc portions]
Acked-by: David Gibson <david@gibson.dropbear.id.au>
2017-01-13 14:24:37 +00:00
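
For callers the change is mechanical; a sketch of a typical call site
after the update (the wrapper name is invented):

    #include "qemu/osdep.h"
    #include "cpu.h"
    #include "exec/exec-all.h"

    static void flush_whole_tlb(CPUState *cs)
    {
        /* was: tlb_flush(cs, 1);  the flush_global flag is gone */
        tlb_flush(cs);
    }
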
Richard Henderson
1a0196c5c7 target-mips: Use clz opcode
Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-01-10 08:06:11 -08:00
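
What using the opcode looks like for the 32-bit CLZ case, as a sketch
rather than the exact translator change; tcg_gen_clzi_i32() takes, as
its immediate, the value to produce for a zero input, which for MIPS
CLZ is 32:

    #include "qemu/osdep.h"
    #include "tcg-op.h"   /* header path as of this era of the tree */

    /* CLZ rd, rs: count leading zeros of a 32-bit register value. */
    static void gen_clz32_sketch(TCGv_i32 ret, TCGv_i32 src)
    {
        tcg_gen_clzi_i32(ret, src, 32);
    }
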
Thomas Huth
fcf5ef2ab5 Move target-* CPU file into a target/ folder
We've currently got 18 architectures in QEMU, and thus 18 target-xxx
folders in the root folder of the QEMU source tree. More architectures
(e.g. RISC-V, AVR) are likely to be included soon, too, so the main
folder of the QEMU sources slowly gets quite overcrowded with the
target-xxx folders.

To disburden the main folder a little bit, let's move the target-xxx
folders into a dedicated target/ folder, so that target-xxx/ simply
becomes target/xxx/ instead.

Acked-by: Laurent Vivier <laurent@vivier.eu> [m68k part]
Acked-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de> [tricore part]
Acked-by: Michael Walle <michael@walle.cc> [lm32 part]
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com> [s390x part]
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> [s390x part]
Acked-by: Eduardo Habkost <ehabkost@redhat.com> [i386 part]
Acked-by: Artyom Tarasenko <atar4qemu@gmail.com> [sparc part]
Acked-by: Richard Henderson <rth@twiddle.net> [alpha part]
Acked-by: Max Filippov <jcmvbkbc@gmail.com> [xtensa part]
Reviewed-by: David Gibson <david@gibson.dropbear.id.au> [ppc part]
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> [cris&microblaze part]
Acked-by: Guan Xuetao <gxt@mprc.pku.edu.cn> [unicore32 part]
Signed-off-by: Thomas Huth <thuth@redhat.com>
2016-12-20 21:52:12 +01:00