Commit Graph

67 Commits

Benjamin Herrenschmidt
33595dc9f3 ppc: Fix generation of ISI/DSI vs. HV mode
Under some circumstances, we need to direct ISI and DSI interrupts
at the hypervisor, turning them into HISI/HDSI, and using different
SPRs (HDSISR and HDAR) depending on the combination of MSR_DR and
the corresponding VPM bits in LPCR.

This moves part of the code into helpers that are fixed to select
the right exception type and registers. On pre-P7 processors, LPCR
is 0 which provides the old behaviour of directing the interrupts
at the supervisor.

Thanks to Andrei Warkentin for finding a bug when HV=1.
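
A rough sketch of the DSI-side selection this describes (SPR and LPCR
constants as in QEMU's headers; the actual patch adds equivalent handling
for ISI as well, and this is illustrative rather than the patch itself):

    static void set_dsi_sketch(CPUState *cs, CPUPPCState *env,
                               uint64_t dar, uint64_t dsisr)
    {
        /* VPM1 governs relocation-on faults, VPM0 relocation-off faults */
        bool vpm = msr_dr ? !!(env->spr[SPR_LPCR] & LPCR_VPM1)
                          : !!(env->spr[SPR_LPCR] & LPCR_VPM0);

        if (vpm && !msr_hv) {
            /* direct the fault at the hypervisor */
            cs->exception_index = POWERPC_EXCP_HDSI;
            env->spr[SPR_HDAR] = dar;
            env->spr[SPR_HDSISR] = dsisr;
        } else {
            cs->exception_index = POWERPC_EXCP_DSI;
            env->spr[SPR_DAR] = dar;
            env->spr[SPR_DSISR] = dsisr;
        }
    }

With LPCR at 0 (pre-P7), vpm stays false and the fault is directed at the
supervisor as before.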

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
[clg: Merged a fix on POWERPC_EXCP_HDSI fixing the condition on
      msr_hv, from Andrei Warkentin <andrey.warkentin@gmail.com> ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-06-23 12:43:25 +10:00
Benjamin Herrenschmidt
c76c22d51d ppc: Add missing slbfee. instruction on ppc64 BookS processors
Used to look up SLB entries by address; for some reason it was missing.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-06-07 13:10:45 +10:00
Benjamin Herrenschmidt
cd0c6f4735 ppc: Do some batching of TCG tlb flushes
On ppc64 especially, we flush the tlb on any slbie or tlbie instruction.

However, those instructions often come in bursts of 3 or more (a context
switch will favor a series of slbie's over an slbia, for example, if the
SLB has fewer than a certain number of entries in it, and tlbie's can
happen in a series; with PAPR, H_BULK_REMOVE can remove up to 4 entries
at a time).

Doing a tlb_flush() each time is a waste of time. We end up doing a memset
of the whole TLB, reloading it for the next instruction, memset'ing again,
etc...

Those instructions don't have to take effect immediately. For slbie, they
can wait for the next context synchronizing event. For tlbie, the next
tlbsync.

This implements batching by keeping a flag that indicates that we have a
TLB in need of flushing. We check it on interrupts, rfi's, isync's and
tlbsync and flush the TLB if needed.

This reduces the number of tlb_flush() calls during a boot to the Ubuntu
installer's first dialog screen from roughly 360K down to 36K.
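
A sketch of the deferred-flush pattern this introduces (the field and
function names follow the description above; treat this as illustrative
rather than the literal patch):

    /* Mark the TLB dirty instead of flushing, e.g. from the slbie/tlbie
     * helpers. */
    static inline void ppc_tlb_mark_dirty(CPUPPCState *env)
    {
        env->tlb_need_flush = 1;
    }

    /* Called on interrupts, rfi, isync and tlbsync. */
    static inline void check_tlb_flush(CPUPPCState *env)
    {
        if (env->tlb_need_flush) {
            env->tlb_need_flush = 0;
            tlb_flush(CPU(ppc_env_get_cpu(env)), 1);
        }
    }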

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg: added a 'CPUPPCState *' variable in h_remove() and
      h_bulk_remove() ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
[dwg: removed spurious whitespace change, use 0/1 not true/false
      consistently, since tlb_need_flush has int type]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-05-30 13:20:04 +10:00
David Gibson
319de6fe6e target-ppc: Correct KVM synchronization for ppc_hash64_set_external_hpt()
ppc_hash64_set_external_hpt() was added in e5c0d3c "target-ppc: Add helpers
for updating a CPU's SDR1 and external HPT".  This helper contains a
cpu_synchronize_state() since it may need to push state back to KVM
afterwards.

This turns out to break things when it is used in the reset path, which is
the only current user.  It appears that kvm_vcpu_dirty is not being set
early in the reset path, so the cpu_synchronize_state() is clobbering state
set up by the early part of the cpu reset path with stale state from KVM.

This may require some changes to the generic cpu reset path to fix
properly, but as a short term fix we can just remove the
cpu_synchronize_state() from ppc_hash64_set_external_hpt(), and require any
non-reset path callers to do that manually.

Reported-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-05-27 09:40:22 +10:00
Paolo Bonzini
63c915526d cpu: move exec-all.h inclusion out of cpu.h
exec-all.h contains TCG-specific definitions.  It is not needed outside
TCG-specific files such as translate.c, exec.c or *helper.c.

One generic function had snuck into include/exec/exec-all.h; move it to
include/qom/cpu.h.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-05-19 16:42:29 +02:00
Paolo Bonzini
b2305601d3 target-ppc: do not use target_ulong in cpu-qom.h
Bring the PowerPCCPUClass handle_mmu_fault method type into line with
the one in CPUClass.

Using vaddr also makes the cpu-qom.h file target independent.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-05-19 13:08:05 +02:00
Markus Armbruster
da34e65cb4 include/qemu/osdep.h: Don't include qapi/error.h
Commit 57cb38b included qapi/error.h into qemu/osdep.h to get the
Error typedef.  Since then, we've moved to include qemu/osdep.h
everywhere.  Its file comment explains: "To avoid getting into
possible circular include dependencies, this file should not include
any other QEMU headers, with the exceptions of config-host.h,
compiler.h, os-posix.h and os-win32.h, all of which are doing a
similar job to this file and are under similar constraints."
qapi/error.h doesn't do a similar job, and it doesn't adhere to
similar constraints: it includes qapi-types.h.  That's in excess of
100KiB of crap most .c files don't actually need.

Add the typedef to qemu/typedefs.h, and include that instead of
qapi/error.h.  Include qapi/error.h in .c files that need it and don't
get it now.  Include qapi-types.h in qom/object.h for uint16List.

Update scripts/clean-includes accordingly.  Update it further to match
reality: replace config.h by config-target.h, add sysemu/os-posix.h,
sysemu/os-win32.h.  Update the list of includes in the qemu/osdep.h
comment quoted above similarly.

This reduces the number of objects depending on qapi/error.h from "all
of them" to less than a third.  Unfortunately, the number depending on
qapi-types.h shrinks only a little.  More work is needed for that one.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
[Fix compilation without the spice devel packages. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-03-22 22:20:15 +01:00
David Gibson
c18ad9a54b target-ppc: Eliminate kvmppc_kern_htab global
fa48b43 "target-ppc: Remove hack for ppc_hash64_load_hpte*() with HV KVM"
purports to remove a hack in the handling of hash page tables (HPTs)
managed by KVM instead of qemu.  However, it actually went in the wrong
direction.

That patch requires anything looking for an external HPT (that is one not
managed by the guest itself) to check both env->external_htab (for a qemu
managed HPT) and kvmppc_kern_htab (for a KVM managed HPT).  That's a
problem because kvmppc_kern_htab is local to mmu-hash64.c, but some places
which need to check for an external HPT are outside that, such as
kvm_arch_get_registers().  The latter was subtly broken by the earlier
patch such that gdbstub can no longer access memory.

Basically a KVM managed HPT is much more like a qemu managed HPT than it is
like a guest managed HPT, so the original "hack" was actually on the right
track.

This partially reverts fa48b43, so we again mark a KVM managed external HPT
by putting a special but non-NULL value in env->external_htab.  It then
goes further, using that marker to eliminate the kvmppc_kern_htab global
entirely.  The ppc_hash64_set_external_hpt() helper function is extended
to set that marker if passed a NULL value (if you're setting an external
HPT, but don't have an actual HPT to set, the assumption is that it must
be a KVM managed HPT).

This also has some flow-on changes to the HPT access helpers, required by
the above changes.

Reported-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Tested-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
2016-03-16 09:55:06 +11:00
David Gibson
e5c0d3ce40 target-ppc: Add helpers for updating a CPU's SDR1 and external HPT
When a Power cpu with a 64-bit hash MMU has its hash page table (HPT)
pointer updated by a write to the SDR1 register, we need to update some
derived variables.  Likewise, when the cpu is configured for an external
HPT (one not in the guest memory space) some derived variables need to be
updated.

Currently the logic for this is (partially) duplicated in ppc_store_sdr1()
and in spapr_cpu_reset().  In future we're going to need it in some other
places, so make some common helpers for this update.

In addition the new ppc_hash64_set_external_hpt() helper also updates
SDR1 in KVM - it's not updated by the normal runtime KVM <-> qemu CPU
synchronization.  In a sense this belongs logically in the
ppc_hash64_set_sdr1() helper, but that is called from
kvm_arch_get_registers() so can't itself call cpu_synchronize_state()
without infinite recursion.  In practice this doesn't matter because
the only other caller is TCG specific.

Currently there aren't situations where updating SDR1 at runtime in KVM
matters, but there are going to be in future.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
2016-03-16 09:55:06 +11:00
Paolo Bonzini
508127e243 log: do not unnecessarily include qom/cpu.h
Split the bits that require it to exec/log.h.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-id: 1452174932-28657-8-git-send-email-den@openvz.org
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-02-03 09:19:10 +00:00
David Gibson
1114e712c9 target-ppc: Helper to determine page size information from hpte alone
h_enter() in the spapr code needs to know the page size of the HPTE it's
about to insert.  Unlike other paths that do this, it doesn't have access
to the SLB, so at the moment it determines this with some open-coded
tests which assume POWER7 or POWER8 page size encodings.

To make this more flexible add ppc_hash64_hpte_page_shift_noslb() to
determine both the "base" page size per segment, and the individual
effective page size from an HPTE alone.

This means that the spapr code should now be able to handle any page size
listed in the env->sps table.
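
A rough sketch of the kind of lookup this adds (structure and constant
names follow QEMU's mmu-hash64.h and cpu.h; this is illustrative, not the
patch itself, and the real helper also returns the segment "base" page
shift through an out parameter):

    static unsigned hpte_page_shift_noslb(CPUPPCState *env,
                                          uint64_t pte0, uint64_t pte1)
    {
        int i, j;

        if (!(pte0 & HPTE64_V_LARGE)) {
            return 12;                  /* not a large page: 4K */
        }
        for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
            const struct ppc_one_seg_page_size *sps = &env->sps.sps[i];

            for (j = 0; j < PPC_PAGE_SIZES_MAX_SZ; j++) {
                const struct ppc_one_page_size *ps = &sps->enc[j];
                uint64_t lp_mask;

                if (!ps->page_shift || ps->page_shift == 12) {
                    continue;
                }
                /* The LP encoding sits in the low bits of the RPN field */
                lp_mask = ((1ULL << ps->page_shift) - 1) & HPTE64_R_RPN;
                if ((pte1 & lp_mask) ==
                    ((uint64_t)ps->pte_enc << HPTE64_R_RPN_SHIFT)) {
                    return ps->page_shift;
                }
            }
        }
        return 0;                       /* unrecognized encoding */
    }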

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Alexander Graf <agraf@suse.de>
2016-01-30 23:49:27 +11:00
David Gibson
61a36c9b5a target-ppc: Add new TLB invalidate by HPTE call for hash64 MMUs
When HPTEs are removed or modified by hypercalls on spapr, we need to
invalidate the relevant pages in the qemu TLB.

Currently we do that by doing some complicated calculations to work out the
right encoding for the tlbie instruction, then passing that to
ppc_tlb_invalidate_one()... which totally ignores the argument and flushes
the whole tlb.

Avoid that by adding a new flush-by-hpte helper in mmu-hash64.c.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Alexander Graf <agraf@suse.de>
2016-01-30 23:49:27 +11:00
David Gibson
be18b2b53e target-ppc: Use actual page size encodings from HPTE
At present the 64-bit hash MMU code uses information from the SLB to
determine the page size of a translation.  We do need that information to
correctly look up the hash table.  However the MMU also allows a
possibly larger page size to be encoded into the HPTE itself, which is used
to populate the TLB.  At present qemu doesn't check that, and so doesn't
support the MPSS "Multiple Page Size per Segment" feature.

This makes a start on allowing this, by adding an hpte_page_shift()
function which looks up the page size of an HPTE.  We use this to validate
page size encodings on faults, and populate the qemu TLB with larger
page sizes when appropriate.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Alexander Graf <agraf@suse.de>
2016-01-30 23:37:38 +11:00
David Gibson
cd6a9bb6e9 target-ppc: Rework SLB page size lookup
Currently, the ppc_hash64_page_shift() function looks up a page size based
on information in an SLB entry.  It open codes the bit translation for
existing CPUs, however different CPU models can have different SLB
encodings.  We already store those in the 'sps' table in CPUPPCState, but
we don't currently enforce that that actually matches the logic in
ppc_hash64_page_shift.

This patch reworks lookup of page size from SLB in several ways:
  * ppc_store_slb() will now fail (triggering an illegal instruction
    exception) if given a bad SLB page size encoding
  * On success ppc_store_slb() stores a pointer to the relevant entry in
    the page size table in the SLB entry.  This is looked up directly from
    the published table of page size encodings, so can't get out of sync.
  * ppc_hash64_htab_lookup() and others now use this precached page size
    information rather than decoding the SLB values
  * Now that callers have easy access to the page_shift,
    ppc_hash64_pte_raddr() amounts to just a deposit64(), so remove it and
    have the callers use deposit64() directly.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Alexander Graf <agraf@suse.de>
2016-01-30 23:37:38 +11:00
David Gibson
bcd8123003 target-ppc: Rework ppc_store_slb
ppc_store_slb updates the SLB for PPC cpus with 64-bit hash MMUs.
Currently it takes two parameters, which contain values encoded as the
register arguments to the slbmte instruction: one register contains the
ESID portion of the SLBE and also the slot number, the other contains the
VSID portion of the SLBE.

We're shortly going to want to do some SLB updates from other code where
it is more convenient to supply the slot number and ESID separately, so
rework this function and its callers to work this way.

As a bonus, this slightly simplifies the emulation of segment registers
when running a 32-bit OS on a 64-bit CPU.
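
A sketch of the new shape (signatures follow the description above; the
exception constants are QEMU's, and this is illustrative rather than the
exact patch):

    int ppc_store_slb(PowerPCCPU *cpu, target_ulong slot,
                      target_ulong esid, target_ulong vsid);

    /* The slbmte helper now unpacks its RB operand itself: */
    void helper_store_slb(CPUPPCState *env, target_ulong rb, target_ulong rs)
    {
        if (ppc_store_slb(ppc_env_get_cpu(env), rb & 0xfff /* slot */,
                          rb & ~0xfffULL /* esid */, rs) < 0) {
            helper_raise_exception_err(env, POWERPC_EXCP_PROGRAM,
                                       POWERPC_EXCP_INVAL |
                                       POWERPC_EXCP_INVAL_INVAL);
        }
    }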

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Alexander Graf <agraf@suse.de>
2016-01-30 23:37:38 +11:00
David Gibson
7ef23068bf target-ppc: Convert mmu-hash{32,64}.[ch] from CPUPPCState to PowerPCCPU
Like a lot of places these files include a mixture of functions taking
both the older CPUPPCState *env and newer PowerPCCPU *cpu.  Move a step
closer to cleaning this up by standardizing on PowerPCCPU, except for the
helper_* functions which are called with the CPUPPCState * from tcg.

Callers and some related functions are updated as well; the boundaries of
what's changed here are a bit arbitrary.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Alexander Graf <agraf@suse.de>
2016-01-30 23:37:38 +11:00
Peter Maydell
0d75590d91 ppc: Clean up includes
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.

This commit was created with scripts/clean-includes.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1453832250-766-6-git-send-email-peter.maydell@linaro.org
2016-01-29 15:07:22 +00:00
Paolo Bonzini
48880da696 ppc: cleanup logging
Avoid "naked" qemu_log, bring documentation for DEBUG #defines
up to date.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-12-17 17:33:48 +01:00
Stefan Weil
a9ab06d118 target-ppc: Fix warnings from Sparse
Sparse report:

target-ppc/mmu-hash64.c:353:9: warning: returning void-valued expression
target-ppc/mmu-hash64.c:620:9: warning: returning void-valued expression

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
2015-03-09 15:00:08 +01:00
Aneesh Kumar K.V
ad3e67d05a target-ppc: Use right page size with hash table lookup
We look at the two page sizes specified in the ISA (4K, 64K). If neither
matches, we consider the page to be 16MB.

Without this patch we would fail to look up addresses above the 16MB range.
Addresses below 16MB happened to work before because the kernel has a linear
mapping and we always looked up the hash for 0xc000000000000000. The
actual real address was computed by combining the 16MB offset
with the real address found with the above hash.

Without Fix:
(gdb) x/16x 0xc000000001000000
0xc000000001000000 <list_entries+453208>:       Cannot access memory at address 0xc000000001000000
(gdb)

With Fix:
(gdb)  x/16x 0xc000000001000000
0xc000000001000000 <list_entries+453208>:       0x00000000      0x00000000      0x00000000      0x00000000
0xc000000001000010 <list_entries+453224>:       0x00000000      0x00000000      0x00000000      0x00000000
0xc000000001000020 <list_entries+453240>:       0x00000000      0x00000000      0x00000000      0x00000000
0xc000000001000030 <list_entries+453256>:       0x00000000      0x00000000      0x00000000      0x00000000

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2015-03-09 14:59:53 +01:00
Antony Pavlov
339aaf5b7f qemu-log: add log category for MMU info
Running barebox on qemu-system-mips* with '-d unimp' floods
stderr with very many mips_cpu_handle_mmu_fault() messages:

  mips_cpu_handle_mmu_fault address=b80003fd ret 0 physical 00000000180003fd prot 3
  mips_cpu_handle_mmu_fault address=a0800884 ret 0 physical 0000000000800884 prot 3
  mips_cpu_handle_mmu_fault pc a080cd80 ad b80003fd rw 0 mmu_idx 0

So it's very difficult to find the LOG_UNIMP messages.

The mips_cpu_handle_mmu_fault() messages appear when enabling ANY
logging, which is not very handy.

Adding a separate log category for *_cpu_handle_mmu_fault()
logging fixes the problem.
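
A sketch of the new category and how a *_cpu_handle_mmu_fault()
implementation would use it (the bit value and helper name here are
illustrative):

    /* include/qemu/log.h */
    #define CPU_LOG_MMU (1 << 12)       /* enabled with "-d mmu" */

    /* in the target's fault path */
    static void log_mmu_fault(target_ulong address, int ret)
    {
        qemu_log_mask(CPU_LOG_MMU,
                      "mmu_fault: address=" TARGET_FMT_lx " ret %d\n",
                      address, ret);
    }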

Signed-off-by: Antony Pavlov <antonynpavlov@gmail.com>
Acked-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1418489298-1184-1-git-send-email-antonynpavlov@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-12-16 18:43:19 +00:00
Richard Henderson
2ef6175aa7 tcg: Invert the inclusion of helper.h
Rather than include helper.h with N values of GEN_HELPER, include a
secondary file that sets up the macros to include helper.h.  This
minimizes the files that must be rebuilt when changing the macros
for file N.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-28 09:33:54 -07:00
Andreas Färber
0c591eb0a9 cputlb: Change tlb_set_page() argument to CPUState
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:52:47 +01:00
Andreas Färber
00c8cb0a36 cputlb: Change tlb_flush() argument to CPUState
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:52:47 +01:00
Andreas Färber
d0e39c5d70 target-ppc: Use PowerPCCPU in PowerPCCPUClass::handle_mmu_fault hook
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:20:48 +01:00
Andreas Färber
27103424c4 cpu: Move exception_index field from CPU_COMMON to CPUState
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:20:46 +01:00
Andreas Färber
33276f1b9c target-ppc: Clean up ENV_GET_CPU() usage
Commits fdfba1a298,
ab1da85791,
f606604f1c and
2c17449b30 added usages of the ENV_GET_CPU()
macro in target-specific code.

Use ppc_env_get_cpu() instead.

Cc: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:01:48 +01:00
Aneesh Kumar K.V
c138593380 target-ppc: Update ppc_hash64_store_hpte to support updating in-kernel htab
This supports updating an htab managed by the hypervisor. Currently we don't
have any user for this feature. It brings the store_hpte interface
in line with the load_hpte one. We may want to use this when we want to
emulate the H_ENTER hcall in qemu for HV KVM.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[ folded fix for the "warn_unused_result" build break in
  kvmppc_hash64_write_pte(), Greg Kurz <gkurz@linux.vnet.ibm.com> ]
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-03-05 03:07:03 +01:00
Aneesh Kumar K.V
3f94170be3 target-ppc: Change the hpte store API
For updating the in-kernel htab we need to provide both pte0 and pte1, hence
update the interface to take pte0 and pte1 together.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[ ldq_phys() API change, Greg Kurz <gkurz@linux.vnet.ibm.com> ]
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-03-05 03:07:02 +01:00
Aneesh Kumar K.V
7c43bca004 target-ppc: Fix page table lookup with kvm enabled
With kvm enabled, we store the hash page table information in the hypervisor.
Use an ioctl to read the htab contents. Without this we get the error below
when trying to read a guest address:

 (gdb) x/10 do_fork
 0xc000000000098660 <do_fork>:   Cannot access memory at address 0xc000000000098660
 (gdb)

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[ fixes for 32 bit build (casts!), ldq_phys() API change,
  Greg Kurz <gkurz@linux.vnet.ibm.com ]
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-03-05 03:07:02 +01:00
Aneesh Kumar K.V
f3c75d42ad target-ppc: Fix htab_mask calculation
Correctly update the htab_mask using the return value of the
KVM_PPC_ALLOCATE_HTAB ioctl. Also, we don't update SDR1
on GET_SREGS for HV: we check for an external htab and, if
one is found, we don't need to update SDR1.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[ fixed pte group offset computation in ppc_hash64_htab_lookup() that
  caused TCG to fail, Greg Kurz <gkurz@linux.vnet.ibm.com> ]
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-03-05 03:07:02 +01:00
Cédric Le Goater
363248e8c9 mmu-hash64: fix Virtual Page Class Key Protection
commit f80872e21c (mmu-hash64: Implement
Virtual Page Class Key Protection) added a new page protection
mechanism based on page keys and the AMR register to control access.

The AMR register allows or prohibits reads and/or writes on a page
depending on the control bits associated to the key. A store or a load
is only permitted if the associated bit is 0 (Power ISA), and not 1 as
the code is currently doing. This patch modifies ppc_hash64_amr_prot()
to correct the protection check.

This issue was unveiled by commit ccfb53ed6360cac0d5f6f7915ca9ae7eed866412
(target-ppc: fix Authority Mask Register init value) which changed the
initialisation value of the AMR register to 0.
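
A sketch of the corrected check (constants as in QEMU's headers; access to
a key's class is allowed only when the corresponding AMR bit is 0):

    static int amr_prot_sketch(CPUPPCState *env, ppc_hash_pte64_t pte)
    {
        int prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
        int key = HPTE64_R_KEY(pte.pte1);
        int amrbits = (env->spr[SPR_AMR] >> 2 * (31 - key)) & 0x3;

        if (amrbits & 0x2) {        /* bit set: stores to this class blocked */
            prot &= ~PAGE_WRITE;
        }
        if (amrbits & 0x1) {        /* bit set: loads from this class blocked */
            prot &= ~PAGE_READ;
        }
        return prot;
    }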

Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-03-05 03:06:25 +01:00
Andreas Färber
77710e7aec target-ppc: Change LOG_MMU_STATE() argument to CPUState
Choose CPUState rather than PowerPCCPU since doing a CPU() cast on the
macro argument would hide type mismatches.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-07-09 21:33:04 +02:00
Andreas Färber
a0762859ae log: Change log_cpu_state[_mask]() argument to CPUState
Since commit 878096eeb2 (cpu: Turn
cpu_dump_{state,statistics}() into CPUState hooks) CPUArchState is no
longer needed.

Add documentation and make the functions available through qemu/log.h
outside NEED_CPU_H to allow use in qom/cpu.c. Moving them to qom/cpu.h
was not yet possible due to convoluted include paths, so that some
devices grow an implicit and unneeded dependency on qom/cpu.h for now.

Acked-by: Michael Walle <michael@walle.cc> (for lm32)
Reviewed-by: Richard Henderson <rth@twiddle.net>
[AF: Simplified mb_cpu_do_interrupt() and do_interrupt_all() changes]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-07-09 21:33:04 +02:00
Andreas Färber
cb446ecab7 kvm: Change cpu_synchronize_state() argument to CPUState
Change Monitor::mon_cpu to CPUState as well.

Reviewed-by: liguang <lig.fnst@cn.fujitsu.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-06-28 13:25:12 +02:00
David Gibson
f80872e21c mmu-hash64: Implement Virtual Page Class Key Protection
Version 2.06 of the Power architecture describes an additional page
protection mechanism.  Each virtual page has a "class" (0-31) recorded in
the PTE.  The AMR register contains bits which can prohibit reads and/or
writes on a class by class basis.  Interestingly, the AMR is userspace
readable and writable, however user mode writes are masked by the contents
of the UAMOR which is privileged.

This patch implements this protection mechanism, along with the AMR and
UAMOR SPRs.  The architecture also specifies a hypervisor-privileged AMOR
register which masks user and supervisor writes to the AMR and UAMOR.  We
leave this out for now, since we don't at present model hypervisor mode
correctly in any case.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[agraf: fix 32-bit hosts]
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:53 +01:00
David Gibson
caa597bd9f mmu-hash*: Merge translate and fault handling functions
ppc_hash{32,64}_handle_mmu_fault() is now the only caller of
ppc_hash{32,64}_translate(), so this patch combines them.  This means
that instead of one function returning a variety of non-obvious error codes
which then get translated into the various mmu exception conditions, we can
just generate the exceptions as we discover problems in the translation
path.  This also removes the last usage of mmu_ctx_hash{32,64}.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:53 +01:00
David Gibson
5883d8b296 mmu-hash*: Don't use full ppc_hash{32, 64}_translate() path for get_phys_page_debug()
Currently the hash mmu versions of get_phys_page_debug() use the same
ppc64_hash64_translate() function to do the translation logic as the normal
mm fault handler code.

That sounds like a good idea, but has some complications. The debug path
doesn't need, or even want some parts of the full translation path, like
permissions checking.  Furthermore, the pte flags update included in the
normal path means that the debug call is not quite side effect free.

This patch, therefore, reimplements get_phys_page_debug as the minimal
required subset of the full translation path.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:53 +01:00
David Gibson
75d5ec89c0 mmu-hash*: Correctly mask RPN from hash PTE
BEHAVIOUR CHANGE

At present we take the whole of word 1 of the hash PTE as the real page
number used to calculate the translated address.  This is incorrect,
because it leaves the flags from the low bits of PTE word 1 in place in the
rpm.  We mostly get away with that because the value is later masked by
TARGET_PAGE_MASK.

More recent 64-bit CPUs also have a small number of flag bits (PP0 and
KEY) in the top bits of PTE word 1.  Any guest which used those bits would
fail with the current code.

This patch fixes the problem by correctly masking out the RPN field of
PTE word 1.  This is safe, even for older CPUs which didn't have PP0 and
KEY, because although the RPN notionally extended to the very top of PTE
word 1, none of those CPUs actually implemented that many real address
bits.

We add analogous masking to the 32-bit code, even though it also doesn't
have the high flag bits, for consistency and clarity.
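
A minimal sketch of the masking (the HPTE64_R_RPN value mirrors QEMU's
mmu-hash64.h definition and is shown here for illustration):

    /* RPN field of PTE word 1: strips the high PP0/KEY bits and the low
     * flag bits before the value is used as a real page number. */
    #define HPTE64_R_RPN    0x0ffffffffffff000ULL

    static inline uint64_t hpte64_rpn(uint64_t pte1)
    {
        return pte1 & HPTE64_R_RPN;
    }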

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:53 +01:00
David Gibson
6d11d998bb mmu-hash*: Clean up real address calculation
More recent 64-bit hash MMUs support multiple page sizes, and PTEs for
large pages only include the offset of the whole large page.  But the qemu
tlb only handles pages of the base size (4k) so we need to break up the
large pages into 4k pieces for the qemu tlb.  To do that we have a somewhat
awkward piece of code that folds the address bits between 4k and the page
size from the virtual address into the real address from the pte.

This patch simplifies this by redefining the raddr output of
ppc_hash64_translate() to be the full real address of the faulting address,
rather than just the (4k) page offset.  Computing that turns out to be
simpler, and is fine for the caller, since it already masks with
TARGET_PAGE_MASK before inserting into the qemu tlb.

The multiple page size complication doesn't exist for 32-bit hash mmus, but
we make an analogous cleanup there for consistency.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:52 +01:00
David Gibson
b344074642 mmu-hash*: Clean up PTE flags update
Currently the ppc_hash{32,64}_pte_update_flags() helper functions update a
PTE's referenced and changed bits as necessary to reflect the access.  It
is somewhat long winded, though.  This patch open codes them in their
(single) callers, in a simpler way.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:52 +01:00
David Gibson
57d0a39d98 mmu-hash64: Factor SLB N bit into permissions bits
BEHAVIOUR CHANGE

Currently, for 64-bit hash mmu, the execute protection bit placed into the
qemu tlb is based only on the N (No execute) bit from the PTE.  However,
No Execute can also be set at the segment level.  We do check this on
execute faults, but this still means we could incorrectly allow execution
of code from a No Execute segment, if a prior read or write fault caused
the page to be loaded into the qemu tlb with PROT_EXEC set.

To correct this, we (re-)check the segment level no execute permission when
generating the protection bits for the qemu tlb.
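
A sketch of the extra check (helper name hypothetical, constants as in
QEMU's mmu-hash64.h):

    static int hash64_prot_noexec(ppc_slb_t *slb, ppc_hash_pte64_t pte,
                                  int prot)
    {
        /* No-execute can come from the segment (SLB N bit) as well as
         * from the PTE's N bit; honour both when building TLB prot. */
        if ((slb->vsid & SLB_VSID_N) || (pte.pte1 & HPTE64_R_N)) {
            prot &= ~PAGE_EXEC;
        }
        return prot;
    }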

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:52 +01:00
David Gibson
e01b444523 mmu-hash*: Clean up permission checking
Currently checking of PTE permission bits is split messily amongst
ppc_hash{32,64}_pp_check(), ppc_hash{32,64}_check_prot() and their callers.
This patch cleans this up to have the new function
ppc_hash{32,64}_pte_prot() compute the page permissions from the SLBE (for
64-bit) or segment register (32-bit) and the pte.  A greatly simplified
version of the actual permissions check is then open coded in the callers.

The 32-bit version of ppc_hash32_pte_prot() is implemented in terms of
ppc_hash32_pp_prot(), a renamed and slightly cleaned up version of the old
ppc_hash32_pp_check(), which is also used for checking BAT permissions on
the 601.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:52 +01:00
David Gibson
87dc3fd13e mmu-hash*: Don't update PTE flags when permission is denied
BEHAVIOUR CHANGE

Currently if ppc_hash{32,64}_translate() finds a PTE matching the given
virtual address, it will always update the PTE's R & C (Referenced and
Changed) bits.  This happens even if the PTE's permissions mean we are
about to deny the translation.

This is clearly a bug, although we get away with it because:
  a) It will only incorrectly set, never reset the bits, which should not
cause guest correctness problems.
  b) Linux guests never use the R & C bits anyway.

This patch fixes the behaviour, only updating R & C when access is granted
by the PTE.
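
A sketch of the corrected ordering (helper name hypothetical; bit
definitions as in QEMU's mmu-hash64.h):

    /* Called only once the permission check has passed. */
    static uint64_t hash64_set_r_c(uint64_t pte1, bool is_store)
    {
        pte1 |= HPTE64_R_R;             /* Referenced */
        if (is_store) {
            pte1 |= HPTE64_R_C;         /* Changed */
        }
        return pte1;
    }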

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:52 +01:00
David Gibson
6a9801106e mmu-hash*: Fold pte_check*() logic into caller
With previous cleanups made, the 32-bit and 64-bit pte_check*() functions
are pretty trivial and only have one call site.  This patch therefore
clarifies the overall code flow by folding those functions into their
call site.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:51 +01:00
David Gibson
1814889876 mmu-hash64: Clean up ppc_hash64_htab_lookup()
This patch makes a general cleanup of the address mangling logic in
ppc_hash64_htab_lookup().  In particular it now avoids repeatedly switching
on the segment size.  The lack of SLB and multiple segment sizes on 32-bit
means an analogous cleanup is not needed there.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:50 +01:00
David Gibson
7f3bdc2d8e mmu-hash*: Remove permission checking from find_pte{32, 64}()
find_pte{32,64}() are poorly named, since they both find a PTE and do
permissions checking of it.  This patch makes them only locate a matching
PTE, moving the permission checking and other logic to the caller.  We
rename the resulting search functions ppc_hash{32,64}_htab_lookup().

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:50 +01:00
David Gibson
a1ff751abd mmu-hash*: Make find_pte{32, 64} do more of the job of finding ptes
find_pte{32,64}() are not particularly well named.  They only "find" a PTE
within a given PTE group, and they also do permissions checking and other
things.

This patch makes it somewhat closer to matching the name, by folding the
search of both primary and secondary hash bucket into it, along with the
various address bit shuffling to determine the right hash buckets.

In the 32-bit case we also remove the code for splitting large pages into
4k pieces for the qemu tlb, since no 32-bit hash MMUs support multiple page
sizes.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:50 +01:00
David Gibson
aea390e4be mmu-hash*: Separate PTEG searching from permissions checking
find_pte{32,64}() do several things.  First they search through a PTEG
looking for a PTE matching our virtual address.  Then they do permissions
checking and other processing on that PTE.

This patch separates the search by VA out from the rest.  The search is
combined with the pte{32,64}_match() functions into new
ppc_hash{32,64}_pteg_search() functions.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:50 +01:00
David Gibson
f95d7cc7fe mmu-hash*: Don't keep looking for PTEs after we find a match
BEHAVIOUR CHANGE

The ppc hash mmu hashes each virtual address to a primary and secondary
possible hash bucket (aka PTE group or PTEG) each with 8 PTEs.  Then we
need a linear search through the PTEs to find the correct one for the
virtual address we're translating.

It is a programming error for the guest to insert multiple PTEs mapping the
same virtual address into a PTEG - in this case the ppc architecture says
the MMU can either act as if just one was present, or give a machine check.
Currently our code takes the first matching PTE in a PTEG if it finds a
successful translation.  But if a matching PTE is found, but permission
bits don't allow the access, we keep looking through the PTEG, checking
that any other matching PTEs contain an identical translation.

That behaviour is perhaps not exactly wrong, but it's certainly not useful.
This patch changes it to always just find the first matching PTE in a PTEG.

In addition, if we get a permissions problem on the primary PTEG, we then
search the secondary PTEG.  This is incorrect - a permission denying PTE
in the primary PTEG should not be overwritten by an access granting PTE in
the secondary (although again, it would be a programming error for the
guest to set up such a situation anyway).  So additionally we update the
code to only search the secondary PTEG if no matching PTE is found in the
primary at all.
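
The resulting lookup flow, sketched (assuming a ppc_hash64_pteg_search()
helper that returns -1 when nothing in the group matches the VA):

    static hwaddr htab_lookup_sketch(CPUPPCState *env, hwaddr hash,
                                     target_ulong ptem, ppc_hash_pte64_t *pte)
    {
        /* Primary PTEG: take the first PTE matching the VA. */
        hwaddr pte_offset = ppc_hash64_pteg_search(env, hash, 0, ptem, pte);

        if (pte_offset == -1) {
            /* Only when the primary has no match do we try the secondary. */
            pte_offset = ppc_hash64_pteg_search(env, ~hash, 1, ptem, pte);
        }
        return pte_offset;
    }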

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:50 +01:00