Commit Graph

32 Commits

Author SHA1 Message Date
David Gibson
f80872e21c mmu-hash64: Implement Virtual Page Class Key Protection
Version 2.06 of the Power architecture describes an additional page
protection mechanism.  Each virtual page has a "class" (0-31) recorded in
the PTE.  The AMR register contains bits which can prohibit reads and/or
writes on a class-by-class basis.  Interestingly, the AMR is userspace
readable and writable; however, user-mode writes are masked by the contents
of the UAMOR, which is privileged.

This patch implements this protection mechanism, along with the AMR and
UAMOR SPRs.  The architecture also specifies a hypervisor-privileged AMOR
register which masks user and supervisor writes to the AMR and UAMOR.  We
leave this out for now, since we don't at present model hypervisor mode
correctly in any case.
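
As a rough sketch of the resulting check (bit numbering and flag values
here are illustrative, not copied from the patch), the class key from PTE
word 1 selects a 2-bit field of the AMR whose bits strip write and read
permission respectively:

    #include <stdint.h>

    #define PAGE_READ  0x1      /* illustrative permission flags */
    #define PAGE_WRITE 0x2

    /* Mask 'prot' by the AMR bits for class key 'key' (0-31): the bit
     * pair at position 2*key from the MSB end denies writes (0x2) and
     * reads (0x1) when set. */
    static int amr_prot(uint64_t amr, unsigned key, int prot)
    {
        unsigned amrbits = (amr >> (2 * (31 - key))) & 0x3;

        if (amrbits & 0x2) {
            prot &= ~PAGE_WRITE;
        }
        if (amrbits & 0x1) {
            prot &= ~PAGE_READ;
        }
        return prot;
    }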

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[agraf: fix 32-bit hosts]
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:53 +01:00
David Gibson
caa597bd9f mmu-hash*: Merge translate and fault handling functions
ppc_hash{32,64}_handle_mmu_fault() is now the only caller of
ppc_hash{32,64}_translate(), so this patch combines them.  This
means that instead of one returning a variety of non-obvious error codes
which then get translated into the various mmu exception conditions, we can
just generate the exceptions as we discover problems in the translation
path.  This also removes the last usage of mmu_ctx_hash{32,64}.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:53 +01:00
David Gibson
5883d8b296 mmu-hash*: Don't use full ppc_hash{32, 64}_translate() path for get_phys_page_debug()
Currently the hash mmu versions of get_phys_page_debug() use the same
ppc_hash{32,64}_translate() function to do the translation logic as the
normal mmu fault handler code.

That sounds like a good idea, but has some complications.  The debug path
doesn't need, or even want, some parts of the full translation path, like
permissions checking.  Furthermore, the pte flags update included in the
normal path means that the debug call is not quite side-effect free.

This patch, therefore, reimplements get_phys_page_debug() as the minimal
required subset of the full translation path.
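
Roughly, the debug hook reduces to the lookup plus the real-address
computation, with no permission check and no R/C update (a sketch against
QEMU-internal types; ppc_hash64_pte_raddr() is a hypothetical stand-in for
the address computation):

    hwaddr ppc_hash64_get_phys_page_debug(CPUPPCState *env, target_ulong addr)
    {
        ppc_slb_t *slb = slb_lookup(env, addr);
        ppc_hash_pte64_t pte;
        hwaddr pte_offset;

        if (!slb) {
            return -1;
        }
        pte_offset = ppc_hash64_htab_lookup(env, slb, addr, &pte);
        if (pte_offset == -1) {
            return -1;
        }
        /* No permission check, no R/C update: just return the raddr */
        return ppc_hash64_pte_raddr(slb, pte, addr) & TARGET_PAGE_MASK;
    }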

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:53 +01:00
David Gibson
75d5ec89c0 mmu-hash*: Correctly mask RPN from hash PTE
BEHAVIOUR CHANGE

At present we take the whole of word 1 of the hash PTE as the real page
number used to calculate the translated address.  This is incorrect,
because it leaves the flags from the low bits of PTE word 1 in place in the
RPN.  We mostly get away with that because the value is later masked by
TARGET_PAGE_MASK.

More recent 64-bit CPUs also have a small number of flag bits (PP0 and
KEY) in the top bits of PTE word 1.  Any guest which used those bits would
fail with the current code.

This patch fixes the problem by correctly masking out the RPN field of
PTE word 1.  This is safe, even for older CPUs which didn't have PP0 and
KEY, because although the RPN notionally extended to the very top of PTE
word 1, none of those CPUs actually implemented that many real address
bits.

We add analogous masking to the 32-bit code, even though it also doesn't
have the high flag bits, for consistency and clarity.
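
A minimal sketch of the masking (the RPN mask value follows the ISA layout
but is an assumption here, not copied from the patch):

    #include <stdint.h>

    #define HPTE64_R_RPN     0x0ffffffffffff000ULL  /* RPN field of word 1 */
    #define TARGET_PAGE_MASK (~(uint64_t)0xfff)     /* 4k base pages */

    /* Only the RPN field of PTE word 1 contributes to the real address;
     * flag bits above and below it are masked away. */
    static uint64_t pte1_to_raddr(uint64_t pte1, uint64_t eaddr)
    {
        return (pte1 & HPTE64_R_RPN) | (eaddr & ~TARGET_PAGE_MASK);
    }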

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:53 +01:00
David Gibson
6d11d998bb mmu-hash*: Clean up real address calculation
More recent 64-bit hash MMUs support multiple page sizes, and PTEs for
large pages only include the offset of the whole large page.  But the qemu
tlb only handles pages of the base size (4k) so we need to break up the
large pages into 4k pieces for the qemu tlb.  To do that we have a somewhat
awkward piece of code that folds the address bits between 4k and the page
size from the virtual address into the real address from the pte.

This patch simplifies this by redefining the raddr output of
ppc_hash64_translate() to be the full real address of the faulting address,
rather than just the (4k) page offset.  Computing that turns out to be
simpler, and is fine for the caller, since it already masks with
TARGET_PAGE_MASK before inserting into the qemu tlb.

The multiple page size complication doesn't exist for 32-bit hash mmus, but
we make an analogous cleanup there for consistency.
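
The simplified computation amounts to combining the large page's RPN with
the low-order bits of the effective address (a sketch, assuming a
page_shift for the segment's page size; mask value illustrative):

    #include <stdint.h>

    #define HPTE64_R_RPN 0x0ffffffffffff000ULL

    /* Return the full real address for 'eaddr': the page-aligned RPN from
     * the PTE plus the offset within the (possibly large) page.  The
     * caller masks with TARGET_PAGE_MASK before filling the qemu tlb. */
    static uint64_t hash64_pte_raddr(uint64_t pte1, uint64_t eaddr,
                                     unsigned page_shift)
    {
        uint64_t mask = (1ULL << page_shift) - 1;

        return ((pte1 & HPTE64_R_RPN) & ~mask) | (eaddr & mask);
    }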

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:52 +01:00
David Gibson
b344074642 mmu-hash*: Clean up PTE flags update
Currently the ppc_hash{32,64}_pte_update_flags() helper functions update a
PTE's referenced and changed bits as necessary to reflect the access.  They
are somewhat long-winded, though.  This patch open codes them in their
(single) callers, in a simpler way.
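
Open-coded, the update is just a few lines (a sketch; the R and C bit
values follow the ISA but are assumptions here):

    #include <stdint.h>

    #define HPTE64_R_R  0x0000000000000100ULL   /* Referenced */
    #define HPTE64_R_C  0x0000000000000080ULL   /* Changed */
    #define PAGE_WRITE  0x2

    /* Set R always; set C only on a store.  On a non-store, strip write
     * permission so a later store refaults and sets C then.  The caller
     * writes new_pte1 back only if it differs from pte1. */
    static uint64_t update_rc(uint64_t pte1, int is_store, int *prot)
    {
        uint64_t new_pte1 = pte1 | HPTE64_R_R;

        if (is_store) {
            new_pte1 |= HPTE64_R_C;
        } else {
            *prot &= ~PAGE_WRITE;
        }
        return new_pte1;
    }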

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:52 +01:00
David Gibson
57d0a39d98 mmu-hash64: Factor SLB N bit into permissions bits
BEHAVIOUR CHANGE

Currently, for 64-bit hash mmu, the execute protection bit placed into the
qemu tlb is based only on the N (No execute) bit from the PTE.  However,
No Execute can also be set at the segment level.  We do check this on
execute faults, but this still means we could incorrectly allow execution
of code from a No Execute segment, if a prior read or write fault caused
the page to be loaded into the qemu tlb with PROT_EXEC set.

To correct this, we (re-)check the segment level no execute permission when
generating the protection bits for the qemu tlb.
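
In sketch form (bit values illustrative), execute permission is granted
only when neither the segment nor the page sets no-execute:

    #include <stdint.h>

    #define SLB_VSID_N  0x0000000000000200ULL   /* segment no-execute */
    #define HPTE64_R_N  0x0000000000000004ULL   /* page no-execute */
    #define PAGE_EXEC   0x4

    static int exec_prot(uint64_t slb_vsid, uint64_t pte1)
    {
        return (!(slb_vsid & SLB_VSID_N) && !(pte1 & HPTE64_R_N))
               ? PAGE_EXEC : 0;
    }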

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:52 +01:00
David Gibson
e01b444523 mmu-hash*: Clean up permission checking
Currently checking of PTE permission bits is split messily amongst
ppc_hash{32,64}_pp_check(), ppc_hash{32,64}_check_prot() and their callers.
This patch cleans this up to have the new function
ppc_hash{32,64}_pte_prot() compute the page permissions from the SLBE (for
64-bit) or segment register (32-bit) and the pte.  A greatly simplified
version of the actual permissions check is then open coded in the callers.

The 32-bit version of ppc_hash32_pte_prot() is implemented in terms of
ppc_hash32_pp_prot(), a renamed and slightly cleaned up version of the old
ppc_hash32_pp_check(), which is also used for checking BAT permissions on
the 601.
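
The decode underneath is the standard OEA pp-bit table; a sketch of the
32-bit helper (nx is the segment-level no-execute flag):

    #define PAGE_READ  0x1
    #define PAGE_WRITE 0x2
    #define PAGE_EXEC  0x4

    /* key is chosen from the segment's Ks/Kp bits and MSR[PR]. */
    static int ppc_hash32_pp_prot(int key, int pp, int nx)
    {
        int prot = 0;

        if (key == 0) {
            /* pp 0-2: read/write; pp 3: read-only */
            prot = (pp == 0x3) ? PAGE_READ : (PAGE_READ | PAGE_WRITE);
        } else {
            switch (pp) {
            case 0x0:
                prot = 0;                       /* no access */
                break;
            case 0x1:
            case 0x3:
                prot = PAGE_READ;               /* read-only */
                break;
            case 0x2:
                prot = PAGE_READ | PAGE_WRITE;  /* read/write */
                break;
            }
        }
        if (!nx) {
            prot |= PAGE_EXEC;
        }
        return prot;
    }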

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:52 +01:00
David Gibson
87dc3fd13e mmu-hash*: Don't update PTE flags when permission is denied
BEHAVIOUR CHANGE

Currently if ppc_hash{32,64}_translate() finds a PTE matching the given
virtual address, it will always update the PTE's R & C (Referenced and
Changed) bits.  This happens even if the PTE's permissions mean we are
about to deny the translation.

This is clearly a bug, although we get away with it because:
  a) It will only incorrectly set, never reset, the bits, which should not
cause guest correctness problems.
  b) Linux guests never use the R & C bits anyway.

This patch fixes the behaviour, only updating R & C when access is granted
by the PTE.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:52 +01:00
David Gibson
6a9801106e mmu-hash*: Fold pte_check*() logic into caller
With previous cleanups made, the 32-bit and 64-bit pte_check*() functions
are pretty trivial and only have one call site.  This patch therefore
clarifies the overall code flow by folding those functions into their
call site.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:51 +01:00
David Gibson
1814889876 mmu-hash64: Clean up ppc_hash64_htab_lookup()
This patch makes a general cleanup of the address mangling logic in
ppc_hash64_htab_lookup().  In particular it now avoids repeatedly switching
on the segment size.  The lack of SLB and multiple segment sizes on 32-bit
means an analogous cleanup is not needed there.
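
For reference, the 256M-segment primary hash after the cleanup is a single
XOR of the VSID with the page index within the segment (the secondary hash
is its complement; 1TB segments fold the VSID in once more).  A sketch with
illustrative constants:

    #include <stdint.h>

    #define TARGET_PAGE_BITS  12
    #define SEGMENT_MASK_256M (~((1ULL << 28) - 1))

    static uint64_t primary_hash_256m(uint64_t vsid, uint64_t eaddr)
    {
        uint64_t epn = eaddr & ~SEGMENT_MASK_256M;  /* offset in segment */

        return vsid ^ (epn >> TARGET_PAGE_BITS);
    }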

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:50 +01:00
David Gibson
7f3bdc2d8e mmu-hash*: Remove permission checking from find_pte{32, 64}()
find_pte{32,64}() are poorly named, since they both find a PTE and do
permissions checking of it.  This patch makes them only locate a matching
PTE, moving the permission checking and other logic to the caller.  We
rename the resulting search functions ppc_hash{32,64}_htab_lookup().

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:50 +01:00
David Gibson
a1ff751abd mmu-hash*: Make find_pte{32, 64} do more of the job of finding ptes
find_pte{32,64}() are not particularly well named.  They only "find" a PTE
within a given PTE group, and they also do permissions checking and other
things.

This patch makes it somewhat closer to matching the name, by folding the
search of both primary and secondary hash buckets into it, along with the
various address bit shuffling to determine the right hash buckets.

In the 32-bit case we also remove the code for splitting large pages into
4k pieces for the qemu tlb, since no 32-bit hash MMUs support multiple page
sizes.
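
The folded search then has roughly this shape (signatures simplified;
pteg_offset() is a hypothetical stand-in for the offset calculation):

    /* Try the primary PTEG; fall back to the secondary PTEG, which is
     * indexed by the complemented hash. */
    static hwaddr find_pte64(CPUPPCState *env, uint64_t hash, uint64_t ptem)
    {
        hwaddr off = ppc_hash64_pteg_search(env, pteg_offset(env, hash),
                                            0 /* primary */, ptem);

        if (off == -1) {
            off = ppc_hash64_pteg_search(env, pteg_offset(env, ~hash),
                                         1 /* secondary */, ptem);
        }
        return off;
    }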

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:50 +01:00
David Gibson
aea390e4be mmu-hash*: Separate PTEG searching from permissions checking
find_pte{32,64}() do several things.  First they search through a PTEG
looking for a PTE matching our virtual address.  Then they do permissions
checking and other processing on that PTE.

This patch separates the search by VA out from the rest.  The search is
combined with the pte{32,64}_match() functions into new
ppc_hash{32,64}_pteg_search() functions.
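
A sketch of the search loop (constants and the compare macro follow QEMU's
mmu-hash64 definitions but are illustrative here):

    #define HPTES_PER_GROUP    8
    #define HASH_PTE_SIZE_64   16
    #define HPTE64_V_VALID     0x0000000000000001ULL
    #define HPTE64_V_SECONDARY 0x0000000000000002ULL
    #define HPTE64_V_COMPARE(x, y) (!(((x) ^ (y)) & 0xffffffffffffff80ULL))

    static hwaddr ppc_hash64_pteg_search(CPUPPCState *env, hwaddr pteg_off,
                                         int secondary, uint64_t ptem)
    {
        int i;

        for (i = 0; i < HPTES_PER_GROUP; i++) {
            hwaddr off = pteg_off + i * HASH_PTE_SIZE_64;
            uint64_t pte0 = ppc_hash64_load_hpte0(env, off);

            /* Valid, right hash (primary/secondary), and matching tag? */
            if ((pte0 & HPTE64_V_VALID)
                && (secondary == !!(pte0 & HPTE64_V_SECONDARY))
                && HPTE64_V_COMPARE(pte0, ptem)) {
                return off;
            }
        }
        return -1;  /* no match in this group */
    }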

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:50 +01:00
David Gibson
f95d7cc7fe mmu-hash*: Don't keep looking for PTEs after we find a match
BEHAVIOUR CHANGE

The ppc hash mmu hashes each virtual address to a primary and secondary
possible hash bucket (aka PTE group or PTEG) each with 8 PTEs.  Then we
need a linear search through the PTEs to find the correct one for the
virtual address we're translating.

It is a programming error for the guest to insert multiple PTEs mapping the
same virtual address into a PTEG - in this case the ppc architecture says
the MMU can either act as if just one was present, or give a machine check.
Currently our code takes the first matching PTE in a PTEG if it finds a
successful translation.  If a matching PTE is found but its permission
bits don't allow the access, however, we keep looking through the PTEG,
checking that any other matching PTEs contain an identical translation.

That behaviour is perhaps not exactly wrong, but it's certainly not useful.
This patch changes it to always just find the first matching PTE in a PTEG.

In addition, if we get a permissions problem on the primary PTEG, we then
search the secondary PTEG.  This is incorrect - a permission-denying PTE
in the primary PTEG should not be overridden by an access-granting PTE in
the secondary (although again, it would be a programming error for the
guest to set up such a situation anyway).  So additionally we update the
code to only search the secondary PTEG if no matching PTE is found in the
primary at all.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:50 +01:00
David Gibson
bb218042c8 mmu-hash*: Cleanup segment-level NX check
On the ppc hash mmus, no-execute can be set at the segment level (on more
recent 64-bit hash mmus it can also be set at the page level).  This patch
separates out this check to make it clearer what is going on, and to avoid
excessive indentation of the remaining translation code.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:49 +01:00
David Gibson
65d61643d0 mmu-hash*: Combine ppc_hash{32, 64}_get_physical_address and get_segment{32, 64}()
After previous work, ppc_hash{32,64}_get_physical_address() are almost
trivial wrappers around get_segment{32,64}(), which does nearly all the work of
translating an address according to the hash mmu model.  Therefore combine the
two functions into one, under the better name of
ppc_hash{32,64}_translate().

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:49 +01:00
David Gibson
f078cd46de mmu-hash*: Remove eaddr field from mmu_ctx_hash{32, 64}
The eaddr field of mmu_ctx_hash{32,64} is effectively just used to pass the
effective address from get_segment{32,64}() to find_pte{32,64}().  Just
pass it as a normal parameter instead.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:49 +01:00
David Gibson
ba36ed1005 mmu-hash64: Remove nx from mmu_ctx_hash64
The nx field in mmu_ctx_hash64 is used in two different functions.  But it's
used for slightly different things in each place, and the value is never
propagated between them.  In other words, it might as well be two local
variables.  This patch makes it so.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:49 +01:00
David Gibson
91cda45b69 mmu-hash*: Reduce use of access_type
In ppc, env->access_type is updated by e.g. integer loads/stores with
ACCESS_INT, floating point loads/stores with ACCESS_FLOAT, and so forth.  In
hash mmu fault paths it can also be set to ACCESS_CODE for instruction
fetch accesses.

But the only place which uses anything more of the access_type than
whether it is an instruction fetch or a data access is the direct store
segment handling.  Instruction versus data access can be more simply
determined from the rw value passed down from the top.

This changes the code to use rw in preference to checking access_type.
For the 32-bit case there is a small amount of code (for direct store
segments) that still needs the full access type.  Instead of passing it
all the way down the stack, we retrieve it from the env structure, which
is where it came from anyway, before this patch.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:49 +01:00
David Gibson
dffdaf6162 mmu-hash*: Add hash pte load/store helpers
On real hardware the ppc hash page table is stored in memory; accordingly
our mmu emulation code can read a hash page table in guest memory.  But,
when paravirtualized under PAPR, the real hash page table is in host
memory, accessible to the guest only via hypercalls.  We model this by
also allowing the MMU emulation code to access a specially allocated hash
page table outside the guest's memory image. At present these two options
are implemented with some ugly conditionals at each access point in the mmu
emulation code.  In the implementation of the PAPR hypercalls, we assume
the external hash table.

This patch cleans things up by adding helpers to load and store from the
hash table for both 32-bit and 64-bit hash mmus.  The 64-bit versions
handle both the in-guest-memory and outside guest memory cases.  The 32-bit
versions only handle the in-guest-memory case since no 32-bit systems can
have an external hash table at present.
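
For instance, the 64-bit word-0 load helper is essentially (close to, but
not verbatim, the patch):

    static uint64_t ppc_hash64_load_hpte0(CPUPPCState *env, hwaddr pte_offset)
    {
        if (env->external_htab) {
            /* PAPR: hash table lives in host memory outside the guest */
            return ldq_p(env->external_htab + pte_offset);
        } else {
            /* Bare metal: hash table lives in guest physical memory */
            return ldq_phys(env->htab_base + pte_offset);
        }
    }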

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:48 +01:00
David Gibson
d5aea6f367 mmu-hash*: Add header file for definitions
Currently cpu.h contains a number of definitions relating to the 64-bit
hash MMU.  Some are used in the MMU emulation code, but some are only used
in the spapr MMU management hcall implementations.

This patch moves these definitions (except for a few that are needed
more widely) into a new mmu-hash64.h header, shared between the MMU
emulation code and the spapr hcall code.  The MMU emulation code is also
updated to actually use a number of those definitions in place of
hard-coded constants.

Similarly, we add new analogous definitions to mmu-hash32.h and use those
in place of many hard-coded constants in mmu-hash32.c.
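
The kinds of definitions involved look like this (values follow the ISA
layout but are illustrative here, not copied from the header):

    #define HASH_PTE_SIZE_64   16                       /* bytes per PTE */
    #define HPTES_PER_GROUP    8                        /* PTEs per PTEG */
    #define HPTE64_V_VALID     0x0000000000000001ULL
    #define HPTE64_V_SECONDARY 0x0000000000000002ULL
    #define HPTE64_V_LARGE     0x0000000000000004ULL
    #define HPTE64_R_PP        0x0000000000000003ULL
    #define HPTE64_R_N         0x0000000000000004ULL
    #define HPTE64_R_C         0x0000000000000080ULL
    #define HPTE64_R_R         0x0000000000000100ULL
    #define HPTE64_R_RPN       0x0ffffffffffff000ULL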

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[agraf: fix 32-bit hosts]
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:48 +01:00
David Gibson
5dc68eb0e4 target-ppc: mmu_ctx_t should not be a global type
mmu_ctx_t is currently defined in cpu.h.  However it is used for temporary
information relating to mmu translation, and is only used in mmu_helper.c
and (now) mmu-hash{32,64}.c.  Furthermore it contains information which
should be specific to particular MMU types.  Therefore, move its definition
to mmu_helper.c.  mmu-hash{32,64}.c are converted to use new data types
private to the relevant MMUs (identical to mmu_ctx_t for now, but that will
change in future patches).

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:48 +01:00
David Gibson
59191721a1 target-ppc: Don't share get_pteg_offset() between 32 and 64-bit
The get_pteg_offset() helper function is currently shared between 32-bit
and 64-bit hash mmus, taking a parameter for the hash pte size.  In the
64-bit paths, it's only called in one place, and it's a trivial
calculation.  This patch, therefore, open codes it for 64-bit.  The
remaining version, which is used in two places, is made 32-bit only and
moved to mmu-hash32.c.
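
Open-coded, the 64-bit calculation is a one-liner (a sketch; QEMU's exact
handling of htab_mask differs in detail):

    #include <stdint.h>

    #define HASH_PTEG_SIZE_64 (16 * 8)   /* 8 PTEs of 16 bytes each */

    static uint64_t pteg_offset64(uint64_t hash, uint64_t htab_mask)
    {
        return (hash * HASH_PTEG_SIZE_64) & htab_mask;
    }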

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:48 +01:00
David Gibson
496272a701 target-ppc: Disentangle hash mmu helper functions
The newly separated paths for hash mmus rely on several helper functions
which are still shared with 32-bit hash mmus: pp_check(), check_prot() and
pte_update_flags().  While these don't have ugly ifdefs on the mmu type,
they're not very well thought out, so sharing them impedes cleaning up the
hash mmu paths.  For now, put near-duplicate versions into mmu-hash64.c and
mmu-hash32.c, leaving the old version in mmu_helper.c for 6xx software
loaded tlb implementations.  The hash 32 and software loaded
implementations are simplified slightly, using the fact that no 32-bit CPUs
implement the 3rd page protection bit.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:48 +01:00
David Gibson
f2ad6be83b target-ppc: Disentangle hash mmu versions of cpu_get_phys_page_debug()
cpu_get_phys_page_debug() is a trivial wrapper around
get_physical_address().  But even the signature of
get_physical_address() has some things we'd like to clean up on a
per-mmu basis, so this patch moves the test on mmu model out to
cpu_get_phys_page_debug(), moving the version for 64-bit hash MMUs out
to mmu-hash64.c and the version for 32-bit hash MMUs to mmu-hash32.c.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:48 +01:00
David Gibson
25de24ab83 target-ppc: Disentangle hash mmu paths for cpu_ppc_handle_mmu_fault
cpu_ppc_handle_mmu_fault() calls get_physical_address() (whose behaviour
depends on MMU type) then, if that fails, issues an appropriate exception
- which again has a number of dependencies on MMU type.

This patch starts converting cpu_ppc_handle_mmu_fault() to have a
single switch on MMU type, calling MMU specific fault handler
functions which deal with both translation and exception delivery
appropriately for the MMU type.  We convert 32-bit and 64-bit hash
MMUs to this new model, but the existing code is left in place for
other MMU types for now.
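
The new dispatch has roughly this shape (case list abridged; the enum
values are from QEMU's cpu.h, and the fall-through keeps the legacy path):

    int cpu_ppc_handle_mmu_fault(CPUPPCState *env, target_ulong address,
                                 int rw, int mmu_idx)
    {
        switch (env->mmu_model) {
        case POWERPC_MMU_64B:   /* 64-bit hash MMU */
            return ppc_hash64_handle_mmu_fault(env, address, rw, mmu_idx);
        case POWERPC_MMU_32B:   /* 32-bit hash MMU */
            return ppc_hash32_handle_mmu_fault(env, address, rw, mmu_idx);
        default:
            break;              /* other MMU types keep the old path */
        }
        /* ... existing code for the remaining MMU types ... */
    }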

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:47 +01:00
David Gibson
629bd516fd target-ppc: Disentangle get_physical_address() paths
Depending on the MSR state, for 64-bit hash MMUs, get_physical_address()
can either call check_physical() (which has further tests for mmu type)
or get_segment64().  Similarly, for 32-bit hash MMUs we can either call
check_physical() or get_bat() and get_segment32().

This patch splits off the whole get_physical_address() path for hash
MMUs into 32-bit and 64-bit versions, handling real mode correctly for
such MMUs without going to check_physical and rechecking the mmu type.
Correspondingly, the hash MMU specific paths in check_physical() are
removed.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:47 +01:00
David Gibson
0480884f14 target-ppc: Disentangle get_segment()
The poorly named get_segment() function handles most of the address
translation logic for hash-based MMUs.  It has many ugly conditionals on
whether the MMU is 32-bit or 64-bit.

This patch splits the function into 32 and 64-bit versions, using the
switch on mmu_type that's already in the caller
(get_physical_address()) to select the right one.  Most of the
original function remains in mmu_helper.c to support the 6xx software
loaded TLB implementations (cleaning those up is a project for another
day).

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:47 +01:00
David Gibson
c69b6151e7 target-ppc: Disentangle find_pte()
32-bit and 64-bit hash MMU implementations currently share a find_pte
function.  This results in a whole bunch of ugly conditionals in the shared
function, and not all that much actually shared code.

This patch separates out the 32-bit and 64-bit versions, putting them
in mmu-hash64.c and mmu-hash32.c, and removes the conditionals from
both versions.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:47 +01:00
David Gibson
9d7c3f4a29 target-ppc: Disentangle pte_check()
Currently, support for both 32-bit and 64-bit hash MMUs shares an
implementation of pte_check().  But there are enough differences that this
means the shared function has several very ugly conditionals on "is_64b".

This patch cleans things up by separating out the 64-bit version
(putting it into mmu-hash64.c) and the 32-bit hash version (putting it
in mmu-hash32.c).  Another copy remains in mmu_helper.c, which is used
for the 6xx software loaded TLB paths.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:47 +01:00
David Gibson
10b4652543 target-ppc: Move SLB handling into mmu-hash64.c
As a first step to disentangling the handling for 64-bit hash MMUs from
the rest, we move the code handling the Segment Lookaside Buffer (SLB)
(which only exists on 64-bit hash MMUs) into a new mmu-hash64.c file.
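
The core of the moved code is the SLB lookup itself; in sketch form
(256M segments only, masks illustrative):

    #define SLB_ESID_V        0x0000000008000000ULL   /* entry valid */
    #define SEGMENT_MASK_256M (~((1ULL << 28) - 1))

    static ppc_slb_t *slb_lookup(CPUPPCState *env, target_ulong eaddr)
    {
        uint64_t esid = (eaddr & SEGMENT_MASK_256M) | SLB_ESID_V;
        int n;

        for (n = 0; n < env->slb_nr; n++) {
            ppc_slb_t *slb = &env->slb[n];

            if (slb->esid == esid) {
                return slb;
            }
        }
        return NULL;
    }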

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:46 +01:00