Fix this error:
/src/qemu/target-ppc/helper.c: In function 'booke206_tlb_to_page_size':
/src/qemu/target-ppc/helper.c:1296:14: error: variable 'tlbncfg' set but not used [-Werror=unused-but-set-variable]
Tested-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
We already had all the code available to have doorbell exceptions
be handled properly. It was just disabled.
Enable it, so we can rely on it.
Signed-off-by: Alexander Graf <agraf@suse.de>
We can have TLBs that only support a single page size. This is defined
by the absence of the AVAIL flag in TLBnCFG. If this is the case, we
currently write invalid size info into the TLB, but override it on
internal fault.
Let's move the check over to tlbwe, so we don't have the AVAIL check in
the hotter fault path.
Signed-off-by: Alexander Graf <agraf@suse.de>
Our internal helpers to fetch TLB entries were not able to tell us
that an entry doesn't even exist. Pass an error out if we hit such
a case, so we don't accidentally walk past the end of the TLB array.
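Roughly what a bounds-checked accessor can look like (a sketch with a
made-up structure; the real helpers work on the per-CPU TLB arrays):

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        uint8_t *entries;      /* backing storage for this TLB */
        size_t   entry_size;   /* bytes per entry */
        int      nb_entries;
    } mmu_tlb_t;

    /* Return the requested entry, or NULL if it does not exist, so
     * callers can no longer walk past the end of the TLB array. */
    static void *tlb_entry_lookup(mmu_tlb_t *tlb, int index)
    {
        if (index < 0 || index >= tlb->nb_entries) {
            return NULL;
        }
        return tlb->entries + (size_t)index * tlb->entry_size;
    }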
Signed-off-by: Alexander Graf <agraf@suse.de>
We might want to call the tlb check function without actually caring about
the real address resolution. Check if we really should write the value
back.
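That is the usual optional-out-parameter idiom, sketched below (names
are illustrative):

    #include <stddef.h>
    #include <stdint.h>

    /* Only write the translated address back if the caller asked for
     * it; callers that just want the protection check pass NULL. */
    static int tlb_check_sketch(uint64_t translated, uint64_t *raddrp)
    {
        /* ... permission and match checks would go here ... */
        if (raddrp != NULL) {
            *raddrp = translated;
        }
        return 0;
    }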
Signed-off-by: Alexander Graf <agraf@suse.de>
When run with a PPC Book3S (server) CPU, 'info tlb' in the qemu
monitor currently reports "dump_mmu: unimplemented". However, during
bringup work, it can be quite handy to have the SLB entries, which are
available in the CPUPPCState. This patch adds an implementation of
info tlb for book3s, which dumps the SLB.
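In outline the dump is just a walk over the SLB array held in
CPUPPCState (a sketch; the real code uses the monitor's printf and the
proper entry count and field masks):

    #include <inttypes.h>
    #include <stdio.h>

    #define SLB_NR 64                     /* assumed number of entries */

    typedef struct { uint64_t esid; uint64_t vsid; } slb_entry_t;

    static void dump_slb(const slb_entry_t *slb)
    {
        int i;

        printf("SLB\tESID\t\t\tVSID\n");
        for (i = 0; i < SLB_NR; i++) {
            if (slb[i].esid == 0 && slb[i].vsid == 0) {
                continue;                 /* skip empty entries */
            }
            printf("%d\t0x%016" PRIx64 "  0x%016" PRIx64 "\n",
                   i, slb[i].esid, slb[i].vsid);
        }
    }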
Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Alex Graf has already made qemu support KVM for the pseries machine
when using the Book3S-PR KVM variant (which runs the guest in
usermode, emulating supervisor operations). This code gets us
very close to also working with KVM Book3S-HV (using the hypervisor
capabilities of recent POWER CPUs).
This patch moves us another step towards Book3S-HV support by
correctly handling SMT (multithreaded) POWER CPUs. There are two
parts to this:
* Querying KVM to check SMT capability, and if present, adjusting the
cpu numbers that qemu assigns to cause KVM to assign guest threads
to cores in the right way (this isn't automatic, because the POWER
HV support has a limitation that different threads on a single core
cannot be in different guests at the same time); a sketch of this
numbering follows the list.
* Correctly informing the guest OS of the SMT thread to core mappings
via the device tree.
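A minimal sketch of that numbering (the stride, smt_threads, would come
from querying KVM's SMT capability; the helper name is made up):

    /* Space guest vcpu ids so that all threads of one guest core land
     * on one host core: core 0 gets ids 0..smt-1, core 1 gets
     * smt..2*smt-1, and so on. */
    static int vcpu_id_for(int core_index, int thread_index,
                           int smt_threads)
    {
        return core_index * smt_threads + thread_index;
    }

With smt_threads = 4, a guest with two single-threaded cores uses vcpu
ids 0 and 4, leaving 1-3 and 5-7 unused, which keeps each guest core on
its own host core.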
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
This definition is backward compatible with MAV=1.0 as long as
the guest does not set reserved bits in MAS1/MAS4.
Also, fix the shift in booke206_tlb_to_page_size -- it's the base
that should be able to hold a 4G page size, not the shift count.
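The gist of that fix, sketched (the size = 1 KiB << TSIZE reading of the
MAV 2.0 encoding is my assumption, not quoted from the patch):

    #include <stdint.h>

    /* Use a 64-bit base so a 4G page does not overflow the shift;
     * widening the shift count cannot fix a 32-bit left operand. */
    static uint64_t tlb_to_page_size_sketch(int tsize)
    {
        return 1024ULL << tsize;
    }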
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Those blanks violate the coding conventions; see
scripts/checkpatch.pl.
Blanks that were missing after colons in the changed lines were added as well.
This patch does not try to fix tabs, long lines and other
problems in the changed lines, therefore checkpatch.pl reports
many violations.
Signed-off-by: Stefan Weil <weil@mail.berlios.de>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
When an exception occurs on BookE, we need to set ESR bits to expose
to the guest information on what exactly happened. Add the obvious ones.
Reported-by: Jason Wessel <jason.wessel@windriver.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
The BookE spec specifies a number of ESR bits. Add defines for them
so we can use them later on.
Reported-by: Jason Wessel <jason.wessel@windriver.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Parameter is_softmmu (and its evil mutant twin brother is_softmuu)
is not used in the cpu_*_handle_mmu_fault() functions; remove them
and adjust the callers.
Acked-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Do not allocate TCG-only resources like the translation buffer when
running over KVM or XEN. Saves a "few" bytes in the qemu address space
and is also conceptually cleaner.
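Conceptually the guard is nothing more than the following (a sketch
with stand-in names; the real check uses qemu's accelerator
predicates):

    #include <stdbool.h>
    #include <stdlib.h>

    static bool using_tcg;          /* stand-in for the accelerator check */
    static void *translation_buffer;

    static void cpu_exec_init_sketch(size_t tb_size)
    {
        /* Only TCG translates guest code; under KVM or Xen the buffer
         * would never be used, so don't allocate it at all. */
        if (using_tcg) {
            translation_buffer = malloc(tb_size);
        }
    }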
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
* 'ppc-next' of git://repo.or.cz/qemu/agraf:
PPC: move TLBs to their own arrays
PPC: 440: Use 440 style MMU as default, so Qemu knows the MMU type
PPC: E500: Use MAS registers instead of internal TLB representation
PPC: Only set lower 32bits with mtmsr
PPC: update openbios firmware
PPC: mpc8544ds: Add hypervisor node
PPC: calculate kernel,initrd,cmdline locations dynamically
target-ppc: Handle memory-forced I/O controller access
PPC: E500: Implement reboot controller
Until now, we've created a union over multiple different TLB types and
allocated that union. Besides wasting memory (and cache) by allocating
the TLB information of the largest TLB type even when only a small one
is needed, this also creates another issue.
With the new KVM API, we can now share the TLB between KVM and qemu, but
for that to work we need both to use the same layout. We can't just
stretch it over to fit some different internal TLB representation.
Hence this patch moves all TLB types to their own array, allowing us to only
address and allocate exactly the boundaries required for the specific TLB
type at hand.
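Schematically, the change replaces one union-typed array with one
correctly-sized array per TLB flavour (field and type names below are
illustrative):

    #include <stdint.h>
    #include <stdlib.h>

    typedef struct { uint64_t pte0, pte1; } tlb_hash_t;
    typedef struct { uint32_t mas1; uint64_t mas2, mas7_3; } tlb_booke206_t;

    typedef struct {
        /* Before: union { tlb_hash_t h; tlb_booke206_t e; } *tlb;
         * After: a separate, exactly-sized array per TLB type. */
        tlb_hash_t     *tlb_hash;
        tlb_booke206_t *tlb_booke206;
        int             nb_tlb;
    } mmu_state_t;

    static void alloc_booke206_tlb(mmu_state_t *mmu, int nb_tlb)
    {
        mmu->nb_tlb = nb_tlb;
        /* Allocate only the representation this CPU actually uses, so
         * the layout can later be shared with KVM unchanged. */
        mmu->tlb_booke206 = calloc(nb_tlb, sizeof(*mmu->tlb_booke206));
    }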
Signed-off-by: Alexander Graf <agraf@suse.de>
The MAS registers are the natural format for e500 cores to do TLB
manipulation with. Instead of converting them into some internal representation
and back again when the guest reads them, we can just keep the data
identical to the way the guest passed it to us.
The main advantage of this approach is that we're getting closer to being
able to share MMU data with KVM using shared memory, so that we don't need
to copy lots of MMU data back and forth all the time. For this to work
however, another patch is required that gets rid of the TLB union, as that
destroys our memory layout that needs to be identical with the kernel one.
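The TLB entry then simply holds the raw MAS values, something along
these lines (the exact field widths and ordering expected by the kernel
are not quoted here, so treat this as an assumption):

    #include <stdint.h>

    /* Keep the TLB entry in guest/MAS format rather than a decoded
     * form: reads hand back exactly what the guest wrote, and the
     * array can be shared with the kernel without conversion. */
    typedef struct {
        uint32_t mas8;      /* hypervisor bits */
        uint32_t mas1;      /* valid, TS, TID, TSIZE */
        uint64_t mas2;      /* effective page number, WIMGE */
        uint64_t mas7_3;    /* real page number, permissions */
    } ppcmas_tlb_sketch_t;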
Signed-off-by: Alexander Graf <agraf@suse.de>
On at least the PowerPC 601, a direct-store (T=1) with bus unit ID 0x07F
is special-cased as memory-forced I/O controller access. It is supposed
to be checked immediately if T=1, bypassing all protection mechanisms
and acting cache-inhibited and global.
Signed-off-by: Hervé Poussineau <hpoussin@reactos.org>
Simplified by avoiding reindentation. Added explanatory comments.
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <andreas.faerber@web.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch removes all references to signal.h when qemu-common.h is included
as they become redundant.
Signed-off-by: Alexandre Raymond <cerbere@gmail.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Early ppc64 CPUs include a hack to partially simulate the ppc32 segment
registers, by translating writes to them into writes to the SLB. This is
not used by any current Linux kernel, but it is used by the openbios used
in the qemu mac99 model.
Commit 81762d6dd0, cleaning up the SLB
handling, introduced a bug in this code, breaking the openbios currently in
qemu. Specifically, there was an off-by-one error in shuffling the
register format used by mtsr into the format needed for the SLB load,
causing the flag bits to end up in the wrong place. This caused the
storage keys to be wrong under openbios, meaning that the translation code
incorrectly thought a legitimate access was a permission violation.
This patch fixes the bug; at the same time it fixes some build bugs in the
MMU debugging code (only exposed when DEBUG_MMU is enabled).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Most of the code to support e500 style MMUs is already in place, but
we're missing some of the special TLB0-TLB1 handling code and the
slightly different TLB modification handling.
This patch adds support for the FSL style MMU.
Signed-off-by: Alexander Graf <agraf@suse.de>
On pSeries logical partitions, excepting the old POWER4-style full system
partitions, the guest does not have direct access to the hardware page
table. Instead, the pagetable exists in hypervisor memory, and the guest
must manipulate it with hypercalls.
However, our current pSeries emulation more closely resembles the old
style where the guest must set up and handle the pagetables itself. This
patch converts it to act like a modern partition.
This involves two things: first, the hash translation path is modified to
permit the hash table to be stored externally to the emulated machine's
RAM. The pSeries machine init code configures the CPUs to use this mode.
Secondly, we emulate the PAPR hypercalls for manipulating the external
hashed page table.
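As an illustration of the second part, an H_ENTER-style hypercall boils
down to writing a PTE into the qemu-side table instead of into guest
RAM (a heavily simplified sketch; the real handler also validates
flags, checks the valid bit and handles eviction):

    #include <stdint.h>
    #include <string.h>

    #define HPTE_SIZE    16u      /* one hashed PTE: two doublewords */
    #define H_SUCCESS    0
    #define H_PARAMETER  (-4L)    /* illustrative PAPR error value */

    /* The hash table lives in hypervisor (qemu) memory, not guest RAM. */
    static uint8_t  *htab;
    static uint64_t  htab_size;

    static long h_enter_sketch(uint64_t pte_index, uint64_t pte0,
                               uint64_t pte1)
    {
        uint64_t offset = pte_index * HPTE_SIZE;

        if (offset + HPTE_SIZE > htab_size) {
            return H_PARAMETER;
        }
        memcpy(htab + offset, &pte0, 8);
        memcpy(htab + offset + 8, &pte1, 8);
        return H_SUCCESS;
    }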
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
This adds emulation support for the recent POWER7 cpu to qemu. It's far
from perfect - it's missing a number of POWER7 features so far, including
any support for VSX or decimal floating point instructions. However, it's
close enough to boot a kernel with the POWER7 PVR.
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Traditionally, the "segments" used for the two-stage translation used on
powerpc MMUs were 256MB in size. This was the only option on all hash
page table based 32-bit powerpc cpus, and on the earlier 64-bit hash page
table based cpus. However, newer 64-bit cpus also permit 1TB segments.
This patch adds support for 1TB segment translation to the qemu code.
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Currently the path handling hash page table translation in get_segment()
has a mix of common and 32-bit or 64-bit specific code. However, the
division is not done terribly well, which results in a lot of messy code
flipping between common and divided paths.
This patch improves the organization, consolidating several divided paths
into one. This in turn allows simplification of some code in
get_segment(), removing a number of ugly interim variables.
This new factorization will also make it easier to add support for the 1T
segments added in newer CPUs.
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Currently, get_segment() has a variable called hash. However it doesn't
(quite) get the hash value for the ppc hashed page table. Instead it
gets the hash shifted - effectively the offset of the hash bucket within
the hash page table.
As well as being different from the normal use of plain "hash" in the
architecture documentation, this usage necessitates some awkward 32/64
dependent masks and shifts which clutter up the path in get_segment().
This patch alters the code to use raw hash values through get_segment()
including storing raw hashes instead of pte group offsets in the ctx
structure. This cleans up the path noticeably.
This does necessitate 32/64 dependent shifts when the hash values are
taken out of the ctx structure and used, but those paths already have
32/64 bit variants so this is less awkward than it was in get_segment().
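The two quantities differ only by a mask and a multiply, so deriving
the offset at the point of use is cheap (sketch; the 128-byte PTE-group
size is the 64-bit case):

    #include <stdint.h>

    #define HASH_PTEG_SIZE_64  128u   /* 8 PTEs of 16 bytes each */

    /* ctx stores the raw hash; the byte offset of the PTE group within
     * the hash table is computed only where it is actually needed. */
    static uint64_t pteg_offset64(uint64_t raw_hash, uint64_t htab_mask)
    {
        return (raw_hash & htab_mask) * HASH_PTEG_SIZE_64;
    }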
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
On ppc machines with hash table MMUs, the special purpose register SDR1
contains both the base address and the encoded size of the (hashed) page tables.
At present, we interpret the SDR1 value within the address translation
path. But because the encodings of the size for 32-bit and 64-bit are
different, this makes for a confusing branch on the MMU type with a bunch
of curly shifts and masks in the middle of the translate path.
This patch cleans things up by moving the interpretation on SDR1 into the
helper function handling the write to the register. This leaves a simple
pre-sanitized base address and mask for the hash table in the CPUState
structure which is easier to work with in the translation path.
This makes the translation path more readable. It addresses the FIXME
comment currently in the mtsdr1 helper, by validating the SDR1 value during
interpretation. Finally it opens the way for emulating a pSeries-style
partition where the hash table used for translation is not mapped into
the guest's RAM.
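For the 64-bit case the interpretation pulled into the helper looks
roughly like this (the table size being 2^(18 + HTABSIZE) bytes is from
the architecture; the mask names and exact field handling here are
illustrative):

    #include <stdint.h>

    #define SDR1_HTABSIZE_MASK  0x1fULL
    #define SDR1_HTABORG_MASK   (~0x3ffffULL)     /* 256 KiB aligned */

    typedef struct { uint64_t htab_base; uint64_t htab_mask; } hash_mmu_t;

    /* Decode SDR1 once, when it is written, instead of on every
     * translation. */
    static void store_sdr1_sketch(hash_mmu_t *mmu, uint64_t sdr1)
    {
        uint64_t htabsize = sdr1 & SDR1_HTABSIZE_MASK;

        mmu->htab_base = sdr1 & SDR1_HTABORG_MASK;
        /* Mask over the PTE-group index: 2^(11 + HTABSIZE) groups of
         * 128 bytes make up a 2^(18 + HTABSIZE) byte table. */
        mmu->htab_mask = (1ULL << (htabsize + 18 - 7)) - 1;
    }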
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
The slb_lookup() function, used in the ppc translation path, returns a
number of SLB entry fields via reference parameters. However, only one
of the two callers of slb_lookup() actually wants this information.
This patch, therefore, makes slb_lookup() return a simple pointer to the
located SLB entry (or NULL), and the caller which needs the fields can
extract them itself.
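The reworked lookup then has a shape like this (sketched against a
trimmed-down SLB array; the valid/ESID masks are illustrative values):

    #include <stddef.h>
    #include <stdint.h>

    #define SLB_NR         64
    #define SLB_ESID_V     0x0000000008000000ULL   /* valid bit */
    #define SLB_ESID_ESID  0xfffffffff0000000ULL   /* ESID field */

    typedef struct { uint64_t esid; uint64_t vsid; } slb_entry_t;

    /* Return the matching SLB entry, or NULL; the caller that wants
     * individual fields extracts them from the returned entry. */
    static slb_entry_t *slb_lookup_sketch(slb_entry_t *slb, uint64_t eaddr)
    {
        int i;

        for (i = 0; i < SLB_NR; i++) {
            if ((slb[i].esid & SLB_ESID_V)
                && (slb[i].esid & SLB_ESID_ESID)
                   == (eaddr & SLB_ESID_ESID)) {
                return &slb[i];
            }
        }
        return NULL;
    }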
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
For a 64-bit PowerPC target, qemu correctly implements translation
through the segment lookaside buffer. Likewise it supports the
slbmte instruction which is used to load entries into the SLB.
However, it does not emulate the slbmfee and slbmfev instructions
which read SLB entries back into registers. Because these are
only occasionally used in guests (mostly for debugging) we get
away with it.
However, given the recent SLB cleanups, it becomes quite easy to
implement these, and thereby allow, amongst other things, a guest
Linux to use xmon's command to dump the SLB.
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
PowerPC and POWER chips since the POWER4 and 970 have a special
hypervisor mode, and a corresponding form of the system call
instruction which traps to the hypervisor.
qemu currently has stub implementations of hypervisor mode. That
is, the outline is there to allow qemu to run a PowerPC hypervisor
under emulation. There are a number of details missing so this
won't actually work at present, but the idea is there.
What there is no provision for at all is for qemu itself to emulate
the hypervisor. That is, to have hypercalls trap into qemu and have
their results emulated by qemu, rather than running
hypervisor code within the emulated system.
KVM implementations aware of the hypervisor hardware are in the works, and
it would be useful for debugging and development to also allow
full emulation of the same para-virtualized guests as such a KVM.
Therefore, this patch adds a hook which will allow a machine to
set up emulation of hypervisor calls.
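Mechanically the hook can be as small as a function pointer that the
machine init code fills in (a sketch; the name and signature of the
hook actually added differ):

    #include <stddef.h>

    typedef struct CPUStateSketch CPUStateSketch;

    /* A machine model that emulates the hypervisor installs a handler
     * here; if it stays NULL, hypercalls are delivered to the emulated
     * hypervisor exception vector as before. */
    static void (*hypercall_hook)(CPUStateSketch *cpu);

    static void on_hypervisor_syscall(CPUStateSketch *cpu)
    {
        if (hypercall_hook != NULL) {
            hypercall_hook(cpu);    /* emulate the hcall inside qemu */
            return;
        }
        /* ... otherwise raise the hypervisor system call interrupt ... */
    }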
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Currently the SLB information when emulating a PowerPC 970 is
stored in a structure with the unhelpfully named fields 'tmp'
and 'tmp64'. While the layout in these fields does match the
description of the SLB in the architecture document, it is not
convenient either for looking up the SLB, or for emulating the
slbmte instruction.
This patch, therefore, reorganizes the SLB entry structure to be
divided into the "ESID related" and "VSID related" fields, as
they are divided in instructions accessing the SLB.
In addition to making the code smaller and more readable, this will
make it easier to implement support for the 1TB segments used in more
recent PowerPC chips.
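The reorganized entry then mirrors how slbmte presents its operands,
one doubleword per register (sketch):

    #include <stdint.h>

    /* Split the entry the same way the slbmte operands are split:
     * RB carries the ESID word, RS carries the VSID word. */
    typedef struct {
        uint64_t esid;   /* ESID, valid bit, entry index */
        uint64_t vsid;   /* VSID, segment size, key and LP bits */
    } ppc_slb_sketch_t;

With this layout, slbmte becomes two plain assignments instead of a
repack of 'tmp' and 'tmp64'.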
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Only Mac-on-Linux stuff used video.x; OpenBIOS does not need it.
Remove video.x MoL hacks.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
* Fix swapped reading of tlblo/hi.
* Fix tlb exec permissions
Signed-off-by: John Clark <clarkjc@runbox.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
According to the Book3S spec, the interrupt context starts with an MSR
value that is rather simple. If we leave out the HV case, it's almost
always 0.
To reflect this, let's redesign the way that MSR value gets calculated.
Using this, we also squash the bug where MSR_POW can slip through into
the interrupt handler MSR.
Reported-by: Thomas Monjalon <thomas.monjalon@openwide.fr>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
This line cleared a bit (LE).
The next lines set or reset this bit (LE) depending on another bit (ILE),
so the first line is useless.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Since commit 2ada0ed, "Return From Interrupt" is broken for PPC processors
because some interrupt-specific bits of SRR1 are copied to the MSR.
SRR1 is a save of MSR during interrupt.
During RFI, MSR must be restored from SRR1.
But some bits of SRR1 are interrupt-specific and are not used for MSR saving.
This is the specification (ISA 2.06) at chapter 6.4.3 (Interrupt Processing):
"2. Bits 33:36 and 42:47 of SRR1 or HSRR1 are loaded with information specific
to the interrupt type.
3. Bits 0:32, 37:41, and 48:63 of SRR1 or HSRR1 are loaded with a copy of the
corresponding bits of the MSR."
Below is a representation of the MSR bits which are not saved (marked X),
lined up with the bits of the mask 0x783F0000:

    bits:   0:31        32   33:36   37:41   42:47    48:63
    saved:  ----------  -    XXXX    -----   XXXXXX   ----------
    mask:   0x00000000  0    1111    00000   111111   0x0000
History:
In the initial Qemu implementation (e1833e1), the mask 0x783F0000 was used for
saving MSR in SRR1. But all the bits 32:47 were cleared during RFI restoring.
This was wrong. The commit 2ada0ed explains that this breaks Altivec.
Indeed, bit 38 (for Altivec support) must be saved and restored.
The change in 2ada0ed was to restore all the bits of SRR1 to the MSR.
But that is also wrong.
Explanation:
As an example, let's see what's happening after a TLB miss.
According to the e300 manual (E300CORERM table 5-6), the TLB miss interrupts
set the bits 44-47 for KEY, I/D, WAY and S/L. These bits are specific to the
interrupt and must not be copied into the MSR at the end of the interrupt.
With the current implementation, a TLB miss overwrites the POW, TGPR and ILE bits.
Fix:
It shouldn't be necessary to filter out bits when saving the MSR at the time
an interrupt occurs; the interrupt-specific bits simply overwrite the MSR ones
in SRR1. But at the end of the interrupt (RFI), the interrupt-specific bits
must be cleared before restoring the MSR from SRR1. The mask 0x783F0000
applies here.
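In code the fix is just the mask applied on the restore side (sketch;
0x783F0000 covers SRR1 bits 33:36 and 42:47):

    #include <stdint.h>

    /* SRR1 bits 33:36 and 42:47 carry interrupt-specific status, not
     * saved MSR bits, so clear them when restoring the MSR on rfi. */
    static uint64_t msr_from_srr1(uint64_t srr1)
    {
        return srr1 & ~0x783F0000ULL;
    }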
Discussion:
The bits of the mask 0x783F0000 are cleared after an interrupt.
I cannot find a specification which talks about this,
but I assume it is correct since Linux runs this way.
Maybe it's not perfect but it's better (works for e300).
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
QEMU uses a fixed page size for the CPU TLB. If the guest uses large
pages then we effectively split these into multiple smaller pages, and
populate the corresponding TLB entries on demand.
When the guest invalidates the TLB by virtual address we must invalidate
all entries covered by the large page. However the address used to
invalidate the entry may not be present in the QEMU TLB, so we do not
know which regions to clear.
Implementing a full variable-size TLB is hard and slow, so just keep a
simple address/mask pair to record which addresses may have been mapped by
large pages. If the guest invalidates this region then flush the
whole TLB.
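A sketch of that bookkeeping (names are illustrative; the real code
sits in the common TLB handling):

    #include <stdbool.h>
    #include <stdint.h>

    /* One address/mask pair summarising every large page seen so far. */
    static uint64_t lp_addr;
    static uint64_t lp_mask;

    /* page_mask selects the bits that identify the page, e.g.
     * ~(page_size - 1). */
    static void record_large_page(uint64_t vaddr, uint64_t page_mask)
    {
        if (lp_mask == 0) {              /* first large page seen */
            lp_addr = vaddr & page_mask;
            lp_mask = page_mask;
            return;
        }
        /* Widen the recorded region until it covers the new page too. */
        lp_mask &= page_mask;
        while ((vaddr & lp_mask) != (lp_addr & lp_mask)) {
            lp_mask <<= 1;
        }
        lp_addr &= lp_mask;
    }

    /* On invalidate-by-address: if the address may lie inside a large
     * page we cannot find the split entries, so flush the whole TLB. */
    static bool must_flush_whole_tlb(uint64_t vaddr)
    {
        return lp_mask != 0 && (vaddr & lp_mask) == lp_addr;
    }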
Signed-off-by: Paul Brook <paul@codesourcery.com>
We were masking 1TB SLB entries on the feature bit of 16 MB pages. Obviously
that breaks, so let's just ignore 1TB SLB entries for now and instead do
16MB pages correctly.
This fixes PPC64 Linux boot with -m above 256.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>