Commit Graph

1194 Commits

Author SHA1 Message Date
Prerna Saxena
8d43ea1c97 target-ppc: Add POWER8 v1.0 CPU model
This patch adds CPU PVR definition for POWER8,
and enables QEMU to launch guests on POWER8 hardware.

Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Paul Mackerras <paulus@samba.org>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-07-11 18:51:23 +02:00
Julio Guerra
7162bdea75 e600 core for MPC86xx processors
MPC86xx processors are based on the e600 core; in qemu, however, they
were modeled on the 7400 processor.

This patch creates the e600 core and instantiates the MPC86xx
processors based on it, adding the high BATs and the SPRG 4..7
registers, which are e600-specific [1], plus a HW MMU model (as on the
7400). This also allows the MPC8610 processor to be defined.

Tested with a kernel using the HW TLB misses.

[1] http://cache.freescale.com/files/32bit/doc/ref_manual/E600CORERM.pdf

Signed-off-by: Julio Guerra <guerr@julio.in>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-07-11 18:51:23 +02:00
Andreas Färber
91b1df8cf9 cpu: Move reset logging to CPUState
x86 was using additional CPU_DUMP_* flags, so make that configurable in
CPUClass::reset_dump_flags.

This adds reset logging for alpha, unicore32 and xtensa.

Acked-by: Michael Walle <michael@walle.cc> (for lm32)
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-07-09 21:33:04 +02:00
Andreas Färber
77710e7aec target-ppc: Change LOG_MMU_STATE() argument to CPUState
Choose CPUState rather than PowerPCCPU since doing a CPU() cast on the
macro argument would hide type mismatches.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-07-09 21:33:04 +02:00
Andreas Färber
a0762859ae log: Change log_cpu_state[_mask]() argument to CPUState
Since commit 878096eeb2 (cpu: Turn
cpu_dump_{state,statistics}() into CPUState hooks) CPUArchState is no
longer needed.

Add documentation and make the functions available through qemu/log.h
outside NEED_CPU_H to allow use in qom/cpu.c. Moving them to qom/cpu.h
was not yet possible due to convoluted include paths, so that some
devices grow an implicit and unneeded dependency on qom/cpu.h for now.

Acked-by: Michael Walle <michael@walle.cc> (for lm32)
Reviewed-by: Richard Henderson <rth@twiddle.net>
[AF: Simplified mb_cpu_do_interrupt() and do_interrupt_all() changes]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-07-09 21:33:04 +02:00
Andreas Färber
213fe1f513 target-ppc: Change gen_intermediate_code_internal() argument to PowerPCCPU
Also use bool type while at it.

Prepares for moving singlestep_enabled field to CPUState.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-07-09 21:33:03 +02:00
Andreas Färber
09c6a63a61 target-ppc: Don't overuse ENV_GET_CPU()
Commit b632a148b6 (target-ppc: QOM method
dispatch for MMU fault handling) introduced a use of ENV_GET_CPU()
inside target-ppc/ code. Use ppc_env_get_cpu() instead.

Purely cosmetic, non-functional change to aid in locating and removing
ENV_GET_CPU() usages.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-07-09 21:33:02 +02:00
Andreas Färber
182735efaf cpu: Make first_cpu and next_cpu CPUState
Move next_cpu from CPU_COMMON to CPUState.
Move first_cpu variable to qom/cpu.h.

gdbstub needs to use CPUState::env_ptr for now.
cpu_copy() no longer needs to save and restore cpu_next.

Acked-by: Paolo Bonzini <pbonzini@redhat.com>
[AF: Rebased, simplified cpu_copy()]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-07-09 21:32:54 +02:00
Andreas Färber
6e42be7cd1 cpu: Drop unnecessary dynamic casts in *_env_get_cpu()
A transition from CPUFooState to FooCPU can be considered safe,
just like FooCPU::env access in the opposite direction.
The only benefit of the FOO_CPU() casts would be protection against
bogus CPUFooState pointers, but then surrounding code would likely
break, too.

This should slightly improve interrupt etc. performance when going from
CPUFooState to FooCPU.
For any additional CPU() casts see 3556c233d9
(qom: allow turning cast debugging off).

Reported-by: Anthony Liguori <aliguori@us.ibm.com>
Acked-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-07-09 21:20:28 +02:00
Peter Maydell
6291ad77d7 linux-user: Move cpu_clone_regs() and cpu_set_tls() into linux-user
The functions cpu_clone_regs() and cpu_set_tls() are not purely CPU
related -- they are specific to the TLS ABI for a particular OS.
Move them into the linux-user/ tree where they belong.

target-lm32 had entirely unused implementations, since it has no
linux-user target; just drop them.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-07-09 21:20:28 +02:00
Paolo Bonzini
2c9b15cab1 memory: add owner argument to initialization functions
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-07-04 17:42:44 +02:00
Alexander Graf
2345f1c014 PPC: Ignore writes to L2CR
The L2CR register contains a number of bits that either impose configuration
which we can't deal with or mean "something is in progress until the bit is
0 again".

Since we don't model the former and we do want to accommodate guests using the
latter semantics, let's just ignore writes to L2CR. That way guests always read
back 0 and are usually happy with that.

Signed-off-by: Alexander Graf <agraf@suse.de>
2013-07-01 01:11:17 +02:00
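A standalone C model of the behaviour this commit describes — writes dropped, reads always returning 0 — using hypothetical names rather than QEMU's actual SPR callback API:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical model: guest writes to L2CR are dropped, so bits that
     * mean "operation in progress until 0" always read back as complete. */
    static uint32_t l2cr;                 /* stays at its reset value, 0 */

    static void spr_write_l2cr(uint32_t val)
    {
        (void)val;                        /* ignore the write entirely */
    }

    static uint32_t spr_read_l2cr(void)
    {
        return l2cr;
    }

    int main(void)
    {
        spr_write_l2cr(1u << 21);         /* guest requests an L2 operation */
        printf("L2CR = %u\n", spr_read_l2cr());   /* prints 0 */
        return 0;
    }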
Alexander Graf
9761ad7571 PPC: Introduce an alias cache for faster lookups
When running QEMU with "-cpu ?" we walk through every alias for every
target CPU we know about. This takes several seconds on my very fast
host system.

Let's introduce a class object cache in the alias table. Using that we
don't have to go through the tedious work of finding our target class.
Instead, we can just go directly from the alias name to the target class
pointer.

This patch brings -cpu "?" to reasonable times again.

Before:
  real    0m4.716s

After:
  real    0m0.025s

Signed-off-by: Alexander Graf <agraf@suse.de>
2013-07-01 01:11:17 +02:00
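A sketch of the memoization idea, assuming a hypothetical resolver ppc_cpu_class_by_name() and an illustrative alias entry; the real alias table and QOM types differ in detail:

    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct ObjectClass ObjectClass;       /* opaque, as in QOM */

    typedef struct {
        const char *alias;
        const char *model;
        ObjectClass *oc;                /* cache: NULL until first lookup */
    } PowerPCCPUAlias;

    /* stand-in for the expensive class resolution walk */
    static ObjectClass *ppc_cpu_class_by_name(const char *name)
    {
        static char dummy;
        (void)name;
        return (ObjectClass *)&dummy;
    }

    static ObjectClass *alias_resolve(PowerPCCPUAlias *table, size_t n,
                                      const char *name)
    {
        for (size_t i = 0; i < n; i++) {
            if (strcmp(table[i].alias, name) == 0) {
                if (!table[i].oc) {                 /* slow path only once */
                    table[i].oc = ppc_cpu_class_by_name(table[i].model);
                }
                return table[i].oc;
            }
        }
        return NULL;
    }

    int main(void)
    {
        PowerPCCPUAlias table[] = { { "G4", "7400_v2.9", NULL } };
        alias_resolve(table, 1, "G4");    /* resolves and caches */
        alias_resolve(table, 1, "G4");    /* pure pointer fetch */
        printf("cached: %p\n", (void *)table[0].oc);
        return 0;
    }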
Fabien Chouteau
b177d8b77c PPC: Fix GDB read on code area for PPC6xx
On PPC 6xx, data and code have separated TLBs. Until now QEMU was only
looking at data TLBs, which is not good when GDB wants to read code.

This patch adds a second call to get_physical_address() with an
ACCESS_CODE type of access when the first call with ACCESS_INT fails.

Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-07-01 01:11:17 +02:00
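The retry logic, modeled as standalone C (get_physical_address() here is a stand-in that only translates code accesses, to show the fallback firing):

    #include <stdio.h>

    enum access_type { ACCESS_INT, ACCESS_CODE };

    /* stand-in: pretend only the instruction TLB holds this mapping */
    static int get_physical_address(unsigned long ea, unsigned long *pa,
                                    enum access_type type)
    {
        if (type == ACCESS_CODE) {
            *pa = ea;
            return 0;
        }
        return -1;
    }

    static int debug_translate(unsigned long ea, unsigned long *pa)
    {
        if (get_physical_address(ea, pa, ACCESS_INT) == 0) {
            return 0;                     /* data TLB hit */
        }
        return get_physical_address(ea, pa, ACCESS_CODE); /* retry as code */
    }

    int main(void)
    {
        unsigned long pa;
        printf("translate: %d\n", debug_translate(0x1000, &pa));
        return 0;
    }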
Fabien Chouteau
886b757791 PPC: Add dump_mmu() for 6xx
"(qemu) info tlb" is a very useful tool for debugging, so I implemented
the missing 6xx version.

Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
[agraf: fix printfs on hwaddr to PRI]
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-07-01 01:11:17 +02:00
Andreas Färber
b048960f15 target-ppc: Introduce unrealizefn for PowerPCCPU
Use it to clean up the opcode table, resolving a former TODO from Jocelyn.
Also switch from malloc() to g_malloc().

Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-07-01 01:11:16 +02:00
Alexey Kardashevskiy
4bddaf552c target-ppc kvm: save cr register
This adds missing code to save the CR (condition register) via
kvm_arch_put_registers(). kvm_arch_get_registers() already has it.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-07-01 01:11:16 +02:00
Hervé Poussineau
9fea2ae250 ppc: do not register IABR SPR twice for 603e
IABR SPR is already registered in gen_spr_603(), called from init_proc_603E().

Signed-off-by: Hervé Poussineau <hpoussin@reactos.org>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-07-01 01:11:15 +02:00
Andreas Färber
1e3438df5a target-ppc: Drop redundant flags assignments from CPU families
Previous code had #define POWERPC_INSNS2_<family> PPC_NONE in some
places for macrofied assignment to the insns_flags2 field.

PPC_NONE is defined as zero though and QOM classes are zero-initialized,
so drop any pcc->insns_flags2 = PPC_NONE; assignments.

PPC_NONE itself is still in use in translate.c.

Suggested-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-07-01 01:11:15 +02:00
Scott Wood
d85937e683 kvm/openpic: in-kernel mpic support
Enables support for the in-kernel MPIC that has been merged into the
KVM next branch.  This includes irqfd/KVM_IRQ_LINE support from Alex
Graf (along with some other improvements).

Note from Alex regarding kvm_irqchip_create():

  On x86, one would call kvm_irqchip_create() to initialize an
  in-kernel interrupt controller.  That function then goes ahead and
  initializes global capability variables as well as the default irq
  routing table.

  On ppc, we can't call kvm_irqchip_create() because we can have
  different types of interrupt controllers.  So we want to do all the
  things that function would do for us in the in-kernel device init
  handler.

Signed-off-by: Scott Wood <scottwood@freescale.com>
[agraf: squash in kvm_irqchip_commit_routes patch, fix non-kvm build,
        fix ppcemb]
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-07-01 01:11:14 +02:00
Alexander Graf
4be1db8606 PPC: Add non-kvm stub file
There are cases where a kvm-provided function is called from generic
hw code that doesn't know whether kvm is available. Provide a stub
file with simple replacement functions for those cases.

Signed-off-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
2013-07-01 01:11:14 +02:00
Andreas Färber
c643bed99f cpu: Change qemu_init_vcpu() argument to CPUState
This allows the call to be moved into CPUState's realizefn.
Therefore move the stub into libqemustub.a.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-06-28 13:25:13 +02:00
Andreas Färber
878096eeb2 cpu: Turn cpu_dump_{state,statistics}() into CPUState hooks
Make cpustats monitor command available unconditionally.

Prepares for changing kvm_handle_internal_error() and kvm_cpu_exec()
arguments to CPUState.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-06-28 13:25:12 +02:00
Andreas Färber
cb446ecab7 kvm: Change cpu_synchronize_state() argument to CPUState
Change Monitor::mon_cpu to CPUState as well.

Reviewed-by: liguang <lig.fnst@cn.fujitsu.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-06-28 13:25:12 +02:00
Scott Wood
8216966004 KVM: PPC: Add dummy kvm_arch_init_irq_routing()
The common KVM code insists on calling kvm_arch_init_irq_routing()
as soon as it sees kernel header support for it (regardless of whether
QEMU supports it).  Provide a dummy function to satisfy this.

Unlike x86, PPC does not have one default irqchip, so there's no common
code that we'd stick here.  Even if you ignore the routes themselves,
which even on x86 are not set up in this function, the initial XICS
kernel implementation will not support IRQ routing, so it's best to
leave even the general feature flags up to the specific irqchip code.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2013-06-12 13:19:10 +04:00
Michael Tokarev
997aba8e25 remove some double-includes
Some source files #include the same header more than
once for no good reason.  Remove second #includes in
such cases.

Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2013-05-18 16:35:12 +04:00
Alexander Graf
36f48d9c78 PPC: Depend behavior of cmp instructions only on instruction encoding
When running an L=1 cmp instruction on a 64bit PPC CPU with SF off, it
still behaves identically to when SF is on. Remove the implicit
difference in the code.

Also, on most 32bit CPUs we should always treat the compare as a 32bit
compare, as the CPU will ignore the L bit. This is not true for e500mc,
but that's up for a different patch.

Reported-by: Torbjorn Granlund <tg@gmplib.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-05-08 20:23:20 +02:00
Alexander Graf
554ecc5774 PPC: Fix rldcl
The implementation of rldcl always tried to fetch its parameters
from the raw opcode, even though they had already been passed in,
decoded into separate arguments.

Use those parameters instead, fixing rldcl.

Reported-by: Torbjorn Granlund <tg@gmplib.org>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-05-08 20:23:20 +02:00
Anton Blanchard
04559d5210 target-ppc: Add read and write of PPR SPR
Recent Linux kernels save and restore the PPR across exceptions
so we need to handle it.

Signed-off-by: Anton Blanchard <anton@au1.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-05-06 17:22:48 +02:00
Anton Blanchard
c05541ee19 target-ppc: Fix invalid SPR read/write warnings
Invalid and privileged SPR warnings currently print the wrong
address. While fixing that, also make it clear that we are
printing both the decimal and hexadecimal SPR number.

Before:

  Trying to read invalid spr 896 380 at 0000000000000714

After:

  Trying to read invalid spr 896 (0x380) at 0000000000000710

Signed-off-by: Anton Blanchard <anton@au1.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-05-06 17:22:48 +02:00
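The before/after difference comes down to printing the SPR number twice, once in decimal and once in hex, plus the corrected address; roughly:

    #include <stdio.h>

    int main(void)
    {
        int sprn = 896;                 /* SPR number from the opcode */
        unsigned long nip = 0x710;      /* corrected faulting address */
        printf("Trying to read invalid spr %d (0x%03x) at %016lx\n",
               sprn, sprn, nip);
        return 0;
    }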
Alexander Graf
126a793009 PPC: Add MMU type for 2.06 with AMR but no TB pages
When running -cpu on a POWER7 system with PR KVM, we mask out the 1TB
MMU capability from the MMU type mask, but not the AMR bit.

This leads to us having a new MMU type that we don't check for in our
MMU management functions.

Add the new type, so that we don't have to worry about breakage there.
We're not going to use the TCG MMU management in that case anyway.

The long term fix for this will be to move all these MMU management
functions to class callbacks.

Signed-off-by: Alexander Graf <agraf@suse.de>
2013-05-06 17:22:48 +02:00
Aurelien Jarno
909eedb74f target-ppc: slightly optimize lfiwax
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2013-04-27 00:37:46 +02:00
Aurelien Jarno
7d08d85645 target-ppc: add support for extended mtfsf/mtfsfi forms
Power ISA 2.05 adds support for the extended mtfsf/mtfsfi forms, with a new
W field to select the upper part of the FPSCR register.

For that, the helper is changed to handle 64-bit input values and masks of
up to 16 bits. The mtfsf/mtfsfi instructions no longer have the W bit
marked as invalid. Instead this is checked in the helper, which
therefore needs access to insns/insns_flags2; these are added to
the DisasContext struct. Finally, all accesses to the opcode fields go
through extraction helpers, prefixed with FP for consistency.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-04-26 23:02:43 +02:00
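A standalone model of the mask computation described above, assuming each FM bit selects a 4-bit nibble of the low word and W shifts the selection to the upper word (the name and the FM bit ordering are illustrative; the ISA is authoritative):

    #include <stdint.h>
    #include <stdio.h>

    /* Build the FPSCR update mask: each FM bit i selects nibble i of the
     * low 32 bits; with W set, the selection moves to the upper 32 bits. */
    static uint64_t mtfsf_mask(unsigned fm, int w)
    {
        uint64_t mask = 0;
        for (int i = 0; i < 8; i++) {
            if (fm & (1u << i)) {
                mask |= 0xFULL << (4 * i);
            }
        }
        return w ? mask << 32 : mask;
    }

    int main(void)
    {
        printf("%016llx\n", (unsigned long long)mtfsf_mask(0xFF, 0));
        printf("%016llx\n", (unsigned long long)mtfsf_mask(0xFF, 1));
        return 0;
    }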
Aurelien Jarno
44bc0c4d3e target-ppc: emulate store doubleword pair instructions
Needed for Power ISA version 2.05 compliance. The check for odd register
pairs is done using the invalid bits.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-04-26 23:02:43 +02:00
Aurelien Jarno
05050ee804 target-ppc: emulate load doubleword pair instructions
Needed for Power ISA version 2.05 compliance. The check for odd register
pairs is done using the invalid bits.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-04-26 23:02:43 +02:00
Aurelien Jarno
199f830d19 target-ppc: emulate lfiwax instruction
Needed for Power ISA version 2.05 compliance.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
[agraf: fix tcg debug error]
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-04-26 23:02:43 +02:00
Aurelien Jarno
f03328882f target-ppc: emulate fcpsgn instruction
Needed for Power ISA version 2.05 compliance.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-04-26 23:02:42 +02:00
Aurelien Jarno
725bcec288 target-ppc: emulate prtyw and prtyd instructions
Needed for Power ISA version 2.05 compliance.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
[agraf: fix 32-bit host compile, simplify code]
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-04-26 23:02:42 +02:00
Aurelien Jarno
fcfda20f2f target-ppc: emulate cmpb instruction
Needed for Power ISA version 2.05 compliance.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-04-26 23:02:42 +02:00
Aurelien Jarno
9c2627b09d target-ppc: add instruction flags for Book I 2.05
.. and enable it on POWER7 CPU.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-04-26 23:02:42 +02:00
Aurelien Jarno
bf45a2e67c target-ppc: optimize fabs, fnabs, fneg
fabs, fnabs and fneg are just flipping the bit sign of an FP register,
this can be implemented in TCG instead of using softfloat.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-04-26 23:02:42 +02:00
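All three operations are pure sign-bit manipulations on the IEEE-754 encoding, which is what makes a TCG-only implementation possible; in plain C terms:

    #include <stdint.h>
    #include <stdio.h>

    #define F64_SIGN 0x8000000000000000ULL

    static uint64_t do_fabs(uint64_t f)  { return f & ~F64_SIGN; }
    static uint64_t do_fnabs(uint64_t f) { return f | F64_SIGN;  }
    static uint64_t do_fneg(uint64_t f)  { return f ^ F64_SIGN;  }

    int main(void)
    {
        uint64_t minus_two = 0xC000000000000000ULL;  /* -2.0 */
        printf("fabs  -> %016llx\n", (unsigned long long)do_fabs(minus_two));
        printf("fnabs -> %016llx\n", (unsigned long long)do_fnabs(minus_two));
        printf("fneg  -> %016llx\n", (unsigned long long)do_fneg(minus_two));
        return 0;
    }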
Alexander Graf
414f5d1448 PPC: Fix dcbz for linux-user on 970
The default with linux-user for dcbz on 970 is to emulate 32 byte clears.
However, while redoing the dcbzl support we added a check that stopped
honoring the bit in HID5 that selects this.

Remove the #ifdef check for linux-user, so that we get 32 byte clears again.

Reported-by: Riku Voipio <riku.voipio@iki.fi>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-04-26 23:02:42 +02:00
Tristan Gingold
db72c9f256 powerpc: correctly handle fpu exceptions.
Raise the exception on the first occurrence; do not wait for the next
floating point operation.

Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-04-26 23:02:42 +02:00
David Gibson
9b00ea4906 target-ppc: Synchronize VPA state with KVM
For PAPR guests, KVM tracks the various areas registered with the
H_REGISTER_VPA hypercall.  For full emulation, of course, these are tracked
within qemu.  At present these values are not synchronized.  This is a
problem for reset (qemu's reset of the VPA address is not pushed to KVM)
and will also be a problem for savevm / migration.

The kernel now supports accessing the VPA state via the ONE_REG interface,
this patch adds code to qemu to use that interface to keep the qemu and
KVM ideas of the VPA state synchronized.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-04-26 23:02:41 +02:00
David Gibson
702763fa32 target-ppc: Add more stubs for POWER7 PMU registers
In addition to the performance monitor registers found on nearly all
6xx chips, the POWER7 has two additional counters (PMC5 & PMC6) and an
extra control register (MMCRA).  This patch adds stub support for them to
qemu - the registers won't do anything, but with this change accesses
to them no longer cause illegal instruction traps.  They're also registered with
their ONE_REG ids, so their value will be kept in sync with KVM where
appropriate.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-04-26 23:02:41 +02:00
David Gibson
0cbad81f70 pseries: Fixes and enhancements to L1 cache properties
PAPR requires that the device tree's CPU nodes have several properties
with information about the L1 cache.  We already create two of these
properties, but with incorrect names - "[id]cache-block-size" instead
of "[id]-cache-block-size" (note the extra hyphen).

We were also missing some of the required cache properties.  This
patch adds the [id]-cache-line-size properties (which have the same
values as the block size properties in all current cases).  We also
add the [id]-cache-size properties.

Adding the cache sizes requires some extra infrastructure in the
general target-ppc code to (optionally) set the cache sizes for
various CPUs.  The CPU family descriptions in translate_init.c can set
these sizes - this patch adds correct information for POWER7, I'm
leaving other CPU types to people who have a physical example to
verify against.  In addition, for -cpu host we take the values
advertised by the host (if available) and use those to override the
information based on PVR.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-04-26 23:02:41 +02:00
David Gibson
f36951c19f pseries: Fix incorrect calculation of RMA size in certain configurations
For the pseries machine, we need to advertise to the guest the size of its
RMA - that is the amount of memory it can access with the MMU off.  For HV
KVM, this is constrained by the hardware limitations on the virtual RMA of
one hash PTE per PTE group in the hash page table.  We already had code to
calculate this, but it was assuming the VRMA page size was the same as the
(host) backing page size for guest RAM.

In the case of a host kernel configured for 64k base page size, but running
on hardware (or firmware) which only allows 4k pages, the host will do all
its allocations with a 64k page size, but still use 4k hardware pages for
actual mappings.  Usually that's transparent to things running under the
host, but in the case of the maximum VRMA size it's not.

This patch refines the RMA size calculation to instead use the largest
available hardware page size (as reported by the SMMU_INFO call) which is
less than or equal to the backing page size.  This now gives the correct
RMA size in all cases I've tested.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-04-26 23:02:41 +02:00
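The refined selection rule, as a standalone sketch with illustrative sizes: take the largest hardware page size that does not exceed the backing page size.

    #include <stddef.h>
    #include <stdio.h>

    static unsigned long pick_rma_page_size(const unsigned long *hw, size_t n,
                                            unsigned long backing)
    {
        unsigned long best = 0;
        for (size_t i = 0; i < n; i++) {
            if (hw[i] <= backing && hw[i] > best) {
                best = hw[i];
            }
        }
        return best;
    }

    int main(void)
    {
        unsigned long all[]    = { 4096, 65536, 16777216 };
        unsigned long only4k[] = { 4096 };
        /* 64k backing: 64k-capable hardware picks 64k, 4k-only picks 4k */
        printf("%lu %lu\n", pick_rma_page_size(all, 3, 65536),
                            pick_rma_page_size(only4k, 1, 65536));
        return 0;
    }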
Bharat Bhushan
31f2cb8ff4 Enable kvm emulated watchdog
Enable the KVM-emulated watchdog if KVM supports it (using the
capability enablement in the watchdog handler). Handling of the
watchdog exit (KVM_EXIT_WATCHDOG) is also added.
The watchdog state machine is cleared whenever the VM state changes to
running, to handle cases like returning from a debug halt.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
[agraf: rebase to current code base, fix non-kvm cases]
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-04-26 23:02:40 +02:00
Richard Henderson
752d634ecc target-ppc: Fix narrow-mode add/sub carry output
Broken in b5a73f8d8a, the carry itself was
fixed in 79482e5ab3.  But we still need to
produce the full 64-bit addition.

Simplify the conditions at the top of the functions for when we need a
new temporary.  Only plain addition is important enough to warrant avoiding
the temporary, and the extra tcg move op that would come with it.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Tested-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-04-26 23:02:40 +02:00
Fabien Chouteau
2bc173224a PPC: Add breakpoint registers for 603 and e300
Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-04-26 23:02:40 +02:00
Fabien Chouteau
09d9828ace PPC: fix hreset_vector for 60x, 7x0, 7x5, G2, MPC8xx, MPC5xx, 7400 and 7450
According to the different user's manuals, the vector offset for system
reset (both /HRESET and /SRESET) is 0x00100.

This patch may break support of some executables, as the power-on start
address may change. For a specific board, if the power-on start address
is different than HRESET vector (i.e. 0x00000100 or 0xfff00100), this
should be fixed in board's initialization code.

Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-04-26 23:02:40 +02:00
Aurelien Jarno
8e7a6db965 target-ppc: fix nego and subf*o instructions
The overflow computation of nego and subf*o instructions has been broken
in commit ffe30937. Contrary to other targets, the instruction is
subtract-from, not subtract, on PowerPC.

This patch fixes the issue by using the correct argument in the xor
computation. Thanks to Peter Maydell for the hint.

With this change the PPC emulation passes the Gwenole Beauchesne
testsuite again.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-04-26 23:02:40 +02:00
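For subtract-from (rd = rb - ra), signed overflow occurs when ra and rb differ in sign and rd differs in sign from rb; the fix amounts to xor-ing against rb rather than ra. A standalone check:

    #include <stdint.h>
    #include <stdio.h>

    /* subf: rd = rb - ra. Done in unsigned arithmetic to model the
     * wrapping hardware result without C signed-overflow pitfalls. */
    static int subf_overflows(uint64_t ra, uint64_t rb)
    {
        uint64_t rd = rb - ra;
        return (int64_t)((ra ^ rb) & (rd ^ rb)) < 0;  /* sign-bit test */
    }

    int main(void)
    {
        printf("%d\n", subf_overflows(1, (uint64_t)INT64_MIN));  /* 1 */
        printf("%d\n", subf_overflows(3, 5));                    /* 0 */
        return 0;
    }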
Fabien Chouteau
2cf3eb6df5 PPC: Remove env->hreset_excp_prefix
This value is not needed if we correctly use the MSR[IP] bit.

excp_prefix is always 0x00000000, except when the MSR[IP] bit is
implemented and set to 1, in that case excp_prefix is 0xfff00000.

The handling of MSR[IP] was already implemented but not used at reset
because the value of env->msr was changed "manually".

The patch uses the function hreg_store_msr() to set env->msr, this
ensures a good handling of MSR[IP] at reset, and therefore a good value
for excp_prefix.

Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-04-26 23:02:40 +02:00
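The resulting invariant is small enough to state as code: the prefix is a pure function of MSR[IP].

    #include <stdio.h>

    static unsigned long excp_prefix(int msr_ip)
    {
        return msr_ip ? 0xFFF00000UL : 0x00000000UL;
    }

    int main(void)
    {
        printf("IP=0 -> %08lx, IP=1 -> %08lx\n",
               excp_prefix(0), excp_prefix(1));
        return 0;
    }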
Stuart Yoder
3b961124bf PPC: e500: advertise 4.2 MPIC only if KVM supports EPR
Older KVM versions don't support EPR which breaks guests when we announce
MPIC variants that support EPR.

Catch that case and expose only MPIC version 2.0 which tells the guest that
we don't support the EPR capability yet.

Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>
[agraf: Add comment, route cap check through kvm_ppc.c]
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-04-26 23:02:40 +02:00
Aurelien Jarno
e71ec2e93d target-ppc: Enable ISEL on POWER7
ISEL is a Power ISA 2.06 instruction and thus is available on POWER7.
Given this is trapped and emulated by the Linux kernel, I guess it went
unnoticed.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-04-26 23:02:39 +02:00
Paolo Bonzini
b421d9c6ab memory: move core typedefs to qemu/typedefs.h
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-04-15 18:19:26 +02:00
Paolo Bonzini
0d09e41a51 hw: move headers to include/
Many of these should be cleaned up with proper qdev-/QOM-ification.
Right now there are many catch-all headers in include/hw/ARCH depending
on cpu.h, and this makes it necessary to compile these files per-target.
However, fixing this does not belong in these patches.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-04-08 18:13:10 +02:00
Richard Henderson
9ca3f7f316 target-ppc: Use NARROW_MODE macro for tlbie
Removing conditional compilation in the process.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:54 +01:00
Richard Henderson
c791fe8436 target-ppc: Use NARROW_MODE macro for addresses
Removing conditional compilation in the process.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:54 +01:00
Richard Henderson
02765534f7 target-ppc: Use NARROW_MODE macro for comparisons
Removing conditional compilation in the process.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:54 +01:00
Richard Henderson
e0c8f9ce85 target-ppc: Use NARROW_MODE macro for branches
Removing conditional compilation in the process.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:54 +01:00
Richard Henderson
79482e5ab3 target-ppc: Fix add and subf carry generation in narrow mode
The set of computations used in b5a73f8d8a
are only valid if the current word size == target_long size.  This failed
to take ppc64 in 32-bit (narrow) mode into account.

Add a NARROW_MODE macro to avoid conditional compilation.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:54 +01:00
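The distinction matters for carry: in narrow mode the carry out of bit 31 is needed, not the carry out of bit 63. A standalone model of both cases (the actual patch does this with TCG ops behind the NARROW_MODE macro):

    #include <stdint.h>
    #include <stdio.h>

    static int add_carry(uint64_t a, uint64_t b, int narrow)
    {
        if (narrow) {
            /* carry out of bit 31: do the add in 64 bits, look at bit 32 */
            uint64_t sum32 = (uint64_t)(uint32_t)a + (uint32_t)b;
            return (int)(sum32 >> 32);
        }
        uint64_t sum = a + b;
        return sum < a;                   /* unsigned wraparound = carry */
    }

    int main(void)
    {
        uint64_t a = 0xFFFFFFFFULL, b = 1;
        printf("narrow: %d  wide: %d\n", add_carry(a, b, 1),
                                         add_carry(a, b, 0));  /* 1 0 */
        return 0;
    }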
David Gibson
b632a148b6 target-ppc: Use QOM method dispatch for MMU fault handling
After previous cleanups, the many scattered checks of env->mmu_model in
the ppc MMU implementation have, at least for "classic" hash MMUs been
reduced (almost) to a single switch at the top of
cpu_ppc_handle_mmu_fault().

An explicit switch is still a pretty ugly way of handling this though.  Now
that Andreas Färber's CPU QOM cleanups for ppc have gone in, it's quite
straightforward to instead make the handle_mmu_fault function a QOM method
on the CPU object.

This patch implements such a scheme, initializing the method pointer at
the same time as the mmu_model variable.  We need to keep the latter around
for now, because of the MMU types (BookE, 4xx, et al) which haven't been
converted to the new scheme yet, and also for a few other uses.  It would
be good to clean those up eventually.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:53 +01:00
David Gibson
eb20c1c6da target-ppc: Move ppc tlb_fill implementation into mmu_helper.c
For softmmu builds the interface from the generic code to the target
specific MMU implementation is through the tlb_fill() function.  For ppc
this is currently in mem_helper.c, whereas it would make more sense in
mmu_helper.c.  This patch moves it, which also allows
cpu_ppc_handle_mmu_fault() to become a local function in mmu_helper.c

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:53 +01:00
David Gibson
cc8eae8ac7 target-ppc: Split user only code out of mmu_helper.c
mmu_helper.c is, for obvious reasons, almost entirely concerned with
softmmu builds of qemu.  However, it does contain one stub function which
is used when CONFIG_USER_ONLY=y - the user-only version of
cpu_ppc_handle_mmu_fault, which always triggers an exception.  The entire
rest of the file is surrounded by #if !defined(CONFIG_USER_ONLY).

We clean this up by moving the user only stub into its own new file,
removing the ifdefs and building mmu_helper.c only when CONFIG_SOFTMMU
is set.  This also lets us remove the #define of cpu_handle_mmu_fault to
cpu_ppc_handle_mmu_fault - that name is only used from generic code for
user only - so we just name our split user version by the generic name.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:53 +01:00
David Gibson
f80872e21c mmu-hash64: Implement Virtual Page Class Key Protection
Version 2.06 of the Power architecture describes an additional page
protection mechanism.  Each virtual page has a "class" (0-31) recorded in
the PTE.  The AMR register contains bits which can prohibit reads and/or
writes on a class by class basis.  Interestingly, the AMR is userspace
readable and writable, however user mode writes are masked by the contents
of the UAMOR which is privileged.

This patch implements this protection mechanism, along with the AMR and
UAMOR SPRs.  The architecture also specifies a hypervisor-privileged AMOR
register which masks user and supervisor writes to the AMR and UAMOR.  We
leave this out for now, since we don't at present model hypervisor mode
correctly in any case.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[agraf: fix 32-bit hosts]
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:53 +01:00
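A standalone model of the lookup: the PTE's 5-bit class selects a two-bit field in the AMR. The bit assignment here (0x2 blocks writes, 0x1 blocks reads) is an assumption for illustration; consult the ISA for the authoritative encoding.

    #include <stdint.h>
    #include <stdio.h>

    #define PROT_READ  1
    #define PROT_WRITE 2

    static int amr_prot(uint64_t amr, int key)      /* key: 0..31 */
    {
        int bits = (int)((amr >> (2 * (31 - key))) & 0x3);
        int prot = PROT_READ | PROT_WRITE;
        if (bits & 0x2) prot &= ~PROT_WRITE;  /* assumed: write-deny bit */
        if (bits & 0x1) prot &= ~PROT_READ;   /* assumed: read-deny bit */
        return prot;
    }

    int main(void)
    {
        uint64_t amr = 0x3ULL << (2 * (31 - 5));  /* deny class 5 fully */
        printf("class5=%d class6=%d\n", amr_prot(amr, 5), amr_prot(amr, 6));
        return 0;
    }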
David Gibson
caa597bd9f mmu-hash*: Merge translate and fault handling functions
ppc_hash{32,64}_handle_mmu_fault() is now the only caller of
ppc_hash{32,64}_translate(), so this patch combines them together.  This
means that instead of one returning a variety of non-obvious error codes
which then get translated into the various mmu exception conditions, we can
just generate the exceptions as we discover problems in the translation
path.  This also removes the last usage of mmu_ctx_hash{32,64}.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:53 +01:00
David Gibson
5883d8b296 mmu-hash*: Don't use full ppc_hash{32, 64}_translate() path for get_phys_page_debug()
Currently the hash mmu versions of get_phys_page_debug() use the same
ppc64_hash64_translate() function to do the translation logic as the normal
mm fault handler code.

That sounds like a good idea, but has some complications. The debug path
doesn't need, or even want, some parts of the full translation path, like
permissions checking.  Furthermore, the pte flags update included in the
normal path means that the debug call is not quite side effect free.

This patch, therefore, reimplements get_phys_page_debug as the minimal
required subset of the full translation path.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:53 +01:00
David Gibson
75d5ec89c0 mmu-hash*: Correctly mask RPN from hash PTE
BEHAVIOUR CHANGE

At present we take the whole of word 1 of the hash PTE as the real page
number used to calculate the translated address.  This is incorrect,
because it leaves the flags from the low bits of PTE word 1 in place in the
RPN. We mostly get away with that because the value is later masked by
TARGET_PAGE_MASK.

More recent 64-bit CPUs also have a small number of flag bits (PP0 and
KEY) in the top bits of PTE word 1.  Any guest which used those bits would
fail with the current code.

This patch fixes the problem by correctly masking out the RPN field of
PTE word 1.  This is safe, even for older CPUs which didn't have PP0 and
KEY, because although the RPN notionally extended to the very top of PTE
word 1, none of those CPUs actually implemented that many real address
bits.

We add analogous masking to the 32-bit code, even though it also doesn't
have the high flag bits, for consistency and clarity.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:53 +01:00
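In code terms the fix is a single mask before the real address is formed; the mask value below is illustrative of a 64-bit HPTE layout where the RPN excludes both the low flag bits and the high key/PP0 bits.

    #include <stdint.h>
    #include <stdio.h>

    #define HPTE64_R_RPN 0x3FFFFFFFFFFFF000ULL  /* illustrative RPN field */

    int main(void)
    {
        uint64_t pte1 = 0x800000000ABC0196ULL;  /* RPN + flag/key bits set */
        printf("raw: %016llx\n", (unsigned long long)pte1);
        printf("rpn: %016llx\n", (unsigned long long)(pte1 & HPTE64_R_RPN));
        return 0;
    }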
David Gibson
6d11d998bb mmu-hash*: Clean up real address calculation
More recent 64-bit hash MMUs support multiple page sizes, and PTEs for
large pages only include the offset of the whole large page.  But the qemu
tlb only handles pages of the base size (4k) so we need to break up the
large pages into 4k pieces for the qemu tlb.  To do that we have a somewhat
awkward piece of code that folds the address bits between the 4k page
offset and the large page size from the virtual address into the real
address from the pte.

This patch simplifies this by redefining the raddr output of
ppc_hash64_translate() to be the full real address of the faulting address,
rather than just the (4k) page offset.  Computing that turns out to be
simpler, and is fine for the caller, since it already masks with
TARGET_PAGE_MASK before inserting into the qemu tlb.

The multiple page size complication doesn't exist for 32-bit hash mmus, but
we make an analogous cleanup there for consistency.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:52 +01:00
David Gibson
b344074642 mmu-hash*: Clean up PTE flags update
Currently the ppc_hash{32,64}_pte_update_flags() helper functions update a
PTE's referenced and changed bits as necessary to reflect the access.  It
is somewhat long winded, though.  This patch open codes them in their
(single) callers, in a simpler way.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:52 +01:00
David Gibson
57d0a39d98 mmu-hash64: Factor SLB N bit into permissions bits
BEHAVIOUR CHANGE

Currently, for 64-bit hash mmu, the execute protection bit placed into the
qemu tlb is based only on the N (No execute) bit from the PTE.  However,
No Execute can also be set at the segment level.  We do check this on
execute faults, but this still means we could incorrectly allow execution
of code from a No Execute segment, if a prior read or write fault caused
the page to be loaded into the qemu tlb with PROT_EXEC set.

To correct this, we (re-)check the segment level no execute permission when
generating the protection bits for the qemu tlb.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:52 +01:00
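The corrected rule combines both no-execute sources when building the qemu tlb protection bits; as a sketch with placeholder bit positions:

    #include <stdint.h>
    #include <stdio.h>

    #define PROT_EXEC   4
    #define SLB_VSID_N  (1ULL << 0)   /* placeholder bit position */
    #define HPTE_R_N    (1ULL << 2)   /* placeholder bit position */

    /* Execute is allowed only if neither segment nor page forbids it. */
    static int exec_prot(uint64_t slb_vsid, uint64_t pte1)
    {
        if ((slb_vsid & SLB_VSID_N) || (pte1 & HPTE_R_N)) {
            return 0;
        }
        return PROT_EXEC;
    }

    int main(void)
    {
        printf("%d %d\n", exec_prot(0, 0), exec_prot(SLB_VSID_N, 0)); /* 4 0 */
        return 0;
    }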
David Gibson
e01b444523 mmu-hash*: Clean up permission checking
Currently checking of PTE permission bits is split messily amongst
ppc_hash{32,64}_pp_check(), ppc_hash{32,64}_check_prot() and their callers.
This patch cleans this up to have the new function
ppc_hash{32,64}_pte_prot() compute the page permissions from the SLBE (for
64-bit) or segment register (32-bit) and the pte.  A greatly simplified
version of the actual permissions check is then open coded in the callers.

The 32-bit version of ppc_hash32_pte_prot() is implemented in terms of
ppc_hash32_pp_prot(), a renamed and slightly cleaned up version of the old
ppc_hash32_pp_check(), which is also used for checking BAT permissions on
the 601.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:52 +01:00
David Gibson
e1a53ba2e0 mmu-hash32: Remove nx from context structure
Previous cleanups have meant the nx field of the mmu_ctx_hash32 structure
is now only used within ppc_hash32_translate(), and so it can be replaced
by a local variable.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:52 +01:00
David Gibson
87dc3fd13e mmu-hash*: Don't update PTE flags when permission is denied
BEHAVIOUR CHANGE

Currently if ppc_hash{32,64}_translate() finds a PTE matching the given
virtual address, it will always update the PTE's R & C (Referenced and
Changed) bits.  This happens even if the PTE's permissions mean we are
about to deny the translation.

This is clearly a bug, although we get away with it because:
  a) It will only incorrectly set, never reset the bits, which should not
cause guest correctness problems.
  b) Linux guests never use the R & C bits anyway.

This patch fixes the behaviour, only updating R & C when access is granted
by the PTE.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:52 +01:00
David Gibson
59acbe2855 mmu-hash32: Don't look up page tables on BAT permission error
BEHAVIOUR CHANGE

Currently, on any failure translating an address with BATs, we proceed to
normal segment and page table translation.  That's incorrect if the
BAT error was due to permissions, rather than not finding a matching BAT.
We've gotten away with it because a guest would not usually put
translations for the same address in both BATs and page table.  Nonetheless
this patch corrects the logic, only doing page table lookup if no BAT
is found.  A matching BAT with bad permissions will now correctly trigger
an exception.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:51 +01:00
David Gibson
145e52f318 mmu-hash32: Cleanup BAT lookup
This patch makes a general cleanup of the ppc_hash32_get_bat() function,
renaming it to ppc_hash32_bat_lookup().  In particular, the new function
only looks for a matching BAT, with the permissions check from the old
function moved to the caller.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:51 +01:00
David Gibson
6fc76aa9ad mmu-hash32: Clean up BAT matching logic
The code to search for a matching BAT for a virtual address is somewhat
longwinded and awkward.  In particular, it relies on separate size and
validity information being returned from the hash32_bat_size() function
(and 601 specific variant).

We simplify this by having hash32_bat_size() return instead a mask of the
virtual address bits to match, and 0 for invalid (since a BAT can never
match the entire address space).

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:51 +01:00
David Gibson
e1d4951593 mmu-hash32: Split BAT size logic from permissions logic
hash32_bat_size_prot() and its 601 variant, as the name suggests, returns
both a BAT's size - needed to search for a matching BAT - and its
permissions, only relevant once a matching BAT has been located.

There's no particular advantage to combining these, so we split these roles
into separate functions for clarity.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:51 +01:00
David Gibson
9986ed1ed0 mmu-hash32: Remove odd pointer usage from BAT code
In the code for handling BATs, the hash32_bat_size_prot() and
hash32_bat_601_size_prot() functions are passed the BAT contents by
reference (pointer) for no clear reason, since they only need the values
within.

This patch removes this odd usage, and uses the resulting change to clean
up the caller slightly.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:51 +01:00
David Gibson
6a9801106e mmu-hash*: Fold pte_check*() logic into caller
With previous cleanups made, the 32-bit and 64-bit pte_check*() functions
are pretty trivial and only have one call site.  This patch therefore
clarifies the overall code flow by folding those functions into their
call site.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:51 +01:00
David Gibson
1814889876 mmu-hash64: Clean up ppc_hash64_htab_lookup()
This patch makes a general cleanup of the address mangling logic in
ppc_hash64_htab_lookup().  In particular it now avoids repeatedly switching
on the segment size.  The lack of SLB and multiple segment sizes on 32-bit
means an analogous cleanup is not needed there.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:50 +01:00
David Gibson
7f3bdc2d8e mmu-hash*: Remove permission checking from find_pte{32, 64}()
find_pte{32,64}() are poorly named, since they both find a PTE and do
permissions checking of it.  This patch makes them only locate a matching
PTE, moving the permission checking and other logic to the caller.  We
rename the resulting search functions ppc_hash{32,64}_htab_lookup().

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:50 +01:00
David Gibson
a1ff751abd mmu-hash*: Make find_pte{32, 64} do more of the job of finding ptes
find_pte{32,64}() are not particularly well named.  They only "find" a PTE
within a given PTE group, and they also do permissions checking and other
things.

This patch makes it somewhat close to matching the name, by folding the
search of both primary and secondary hash bucket into it, along with the
various address bit shuffling to determine the right hash buckets.

In the 32-bit case we also remove the code for splitting large pages into
4k pieces for the qemu tlb, since no 32-bit hash MMUs support multiple page
sizes.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:50 +01:00
David Gibson
aea390e4be mmu-hash*: Separate PTEG searching from permissions checking
find_pte{32,64}() do several things.  First they search through a PTEG
looking for a PTE matching our virtual address.  Then they do permissions
checking and other processing on that PTE.

This patch separates the search by VA out from the rest.  The search is
combined with the pte{32,64}_match() functions into new
ppc_hash{32,64}_pteg_search() functions.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:50 +01:00
David Gibson
f95d7cc7fe mmu-hash*: Don't keep looking for PTEs after we find a match
BEHAVIOUR CHANGE

The ppc hash mmu hashes each virtual address to a primary and secondary
possible hash bucket (aka PTE group or PTEG) each with 8 PTEs.  Then we
need a linear search through the PTEs to find the correct one for the
virtual address we're translating.

It is a programming error for the guest to insert multiple PTEs mapping the
same virtual address into a PTEG - in this case the ppc architecture says
the MMU can either act as if just one was present, or give a machine check.
Currently our code takes the first matching PTE in a PTEG if it finds a
successful translation.  But if a matching PTE is found, but permission
bits don't allow the access, we keep looking through the PTEG, checking
that any other matching PTEs contain an identical translation.

That behaviour is perhaps not exactly wrong, but it's certainly not useful.
This patch changes it to always just find the first matching PTE in a PTEG.

In addition, if we get a permissions problem on the primary PTEG, we then
search the secondary PTEG.  This is incorrect - a permission denying PTE
in the primary PTEG should not be overwritten by an access granting PTE in
the secondary (although again, it would be a programming error for the
guest to set up such a situation anyway).  So additionally we update the
code to only search the secondary PTEG if no matching PTE is found in the
primary at all.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:50 +01:00
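The corrected search order, as a standalone sketch (pte_matches() stands in for the tag comparison against the virtual address):

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PTES_PER_PTEG 8

    typedef struct { uint64_t pte0, pte1; } HashPte;

    static int pte_matches(const HashPte *pte, uint64_t va)
    {
        return pte->pte0 == va;           /* stand-in comparison */
    }

    /* First match in a PTEG wins, even if its permissions later deny the
     * access; the secondary PTEG is tried only on no match at all. */
    static const HashPte *pteg_search(const HashPte *pteg, uint64_t va)
    {
        for (size_t i = 0; i < PTES_PER_PTEG; i++) {
            if (pte_matches(&pteg[i], va)) {
                return &pteg[i];
            }
        }
        return NULL;
    }

    static const HashPte *htab_lookup(const HashPte *prim,
                                      const HashPte *sec, uint64_t va)
    {
        const HashPte *pte = pteg_search(prim, va);
        return pte ? pte : pteg_search(sec, va);
    }

    int main(void)
    {
        HashPte prim[PTES_PER_PTEG] = { { 1, 0 } };
        HashPte sec[PTES_PER_PTEG]  = { { 42, 0 } };
        printf("found in secondary: %d\n", htab_lookup(prim, sec, 42) != NULL);
        return 0;
    }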
David Gibson
bb218042c8 mmu-hash*: Cleanup segment-level NX check
On the ppc hash mmus, no-execute can be set at the segment level (on more
recent 64-bit hash mmus it can also be set at the page level).  This patch
separates out this check to make it clearer what is going on, and to avoid
excessive indentation of the remaining translation code.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:49 +01:00
David Gibson
723ed73ada mmu-hash32: Split direct store segment handling into a helper
This further separates the unusual case handling of direct store segments
from the main translation path by moving its logic into a helper function,
with some tiny cleanups along the way.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:49 +01:00
David Gibson
4b9605a5b1 mmu-hash32: Split out handling of direct store segments
At present a large chunk of ppc_hash32_translate() is taken up with an
ugly if selecting between direct store segments (hardly ever used) and
normal paged segments.  This patch clarifies the flow of code by
handling direct store segments immediately then returning, leaving the
straight line code to describe the normal MMU path.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:49 +01:00
David Gibson
65d61643d0 mmu-hash*: Combine ppc_hash{32, 64}_get_physical_address and get_segment{32, 64}()
After previous work, ppc_hash{32,64}_get_physical_address() are almost
trivial wrappers around get_segment{32,64}(), which do nearly all the work of
translating an address according to the hash mmu model.  Therefore combine the
two functions into one, under the better name of
ppc_hash{32,64}_translate().

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:49 +01:00
David Gibson
f078cd46de mmu-hash*: Remove eaddr field from mmu_ctx_hash{32, 64}
The eaddr field of mmu_ctx_hash{32,64} is effectively just used to pass the
effective address from get_segment{32,64}() to find_pte{32,64}().  Just
pass it as a normal parameter instead.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:49 +01:00
David Gibson
ba36ed1005 mmu-hash64: Remove nx from mmu_ctx_hash64
The nx field in mmu_ctx_hash64 is used in two different functions.  But it's
used for slightly different things in each place, and the value is never
propagated between them.  In other words, it might as well be two local
variables.  This patch makes it so.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:49 +01:00
David Gibson
91cda45b69 mmu-hash*: Reduce use of access_type
In ppc, env->access_type is updated by e.g. integer load/stores with
ACCESS_INT, floating point load/stores with ACCESS_FLOAT, and so forth.  In
hash mmu fault paths it can also be set to ACCESS_CODE for instruction
fetch accesses.

But the only place which uses anything more of the access_type than
whether it is instruction fetch or data access is the direct store segment
handling.  Instruction versus data access can be more simply determined
from the rw value passed down from the top.

This changes the code to use rw in preference to checking access_type.
For the 32-bit case there is a small amount of code (for direct store
segments) that still needs the full access type.  Instead of passing it
all the way down the stack, we retrieve it from the env structure, which
is where it came from anyway, before this patch.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:49 +01:00
David Gibson
dffdaf6162 mmu-hash*: Add hash pte load/store helpers
On real hardware the ppc hash page table is stored in memory; accordingly
our mmu emulation code can read a hash page table in guest memory.  But,
when paravirtualized under PAPR, the real hash page table is in host
memory, accessible to the guest only via hypercalls.  We model this by
also allowing the MMU emulation code to access a specially allocated hash
page table outside the guest's memory image. At present these two options
are implemented with some ugly conditionals at each access point in the mmu
emulation code.  In the implementation of the PAPR hypercalls, we assume
the external hash table.

This patch cleans things up by adding helpers to load and store from the
hash table for both 32-bit and 64-bit hash mmus.  The 64-bit versions
handle both the in-guest-memory and outside guest memory cases.  The 32-bit
versions only handle the in-guest-memory case since no 32-bit systems can
have an external hash table at present.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:48 +01:00
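A standalone model of the helper pair's branch: read the externally allocated table directly when present, otherwise go through guest physical memory (guest_ram and load64() stand in for QEMU's physical-memory accessors):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct {
        uint8_t  *external_htab;  /* non-NULL under PAPR paravirtualization */
        uint8_t  *guest_ram;      /* stand-in for guest physical memory */
        uint64_t  htab_base;      /* guest-physical base of the hash table */
    } MmuState;

    static uint64_t load64(const uint8_t *p)
    {
        uint64_t v;
        memcpy(&v, p, sizeof(v));
        return v;
    }

    static uint64_t load_hpte0(const MmuState *s, uint64_t pte_offset)
    {
        if (s->external_htab) {
            return load64(s->external_htab + pte_offset);
        }
        return load64(s->guest_ram + s->htab_base + pte_offset);
    }

    int main(void)
    {
        uint8_t ext[16] = { 1 };
        MmuState s = { ext, NULL, 0 };
        printf("%llu\n", (unsigned long long)load_hpte0(&s, 0));
        return 0;
    }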
David Gibson
d5aea6f367 mmu-hash*: Add header file for definitions
Currently cpu.h contains a number of definitions relating to the 64-bit
hash MMU.  Some are used in the MMU emulation code, but some are only used
in the spapr MMU management hcall implementations.

This patch moves these definitions (except for a few that are needed
more widely) into mmu-hash64.h header, shared between the MMU emulation
code and the spapr hcall code.  The MMU emulation code is also updated to
actually use a number of those definitions in place of hard coded
constants.

Similarly, we add new analogous definitions to mmu-hash32.h and use those
in place of many hard-coded constants in mmu-hash32.c

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[agraf: fix 32-bit hosts]
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:48 +01:00
David Gibson
5dc68eb0e4 target-ppc: mmu_ctx_t should not be a global type
mmu_ctx_t is currently defined in cpu.h.  However it is used for temporary
information relating to mmu translation, and is only used in mmu_helper.c
and (now) mmu-hash{32,64}.c.  Furthermore it contains information which
should be specific to particular MMU types.  Therefore, move its definition
to mmu_helper.c.  mmu-hash{32,64}.c are converted to use new data types
private to the relevant MMUs (identical to mmu_ctx_t for now, but that will
change in future patches).

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:48 +01:00
David Gibson
9813279664 target-ppc: Disentangle BAT code for 32-bit hash MMUs
The functions for looking up BATs (Block Address Translation - essentially
a level 0 TLB) are shared between the classic 32-bit hash MMUs and the
6xx style software loaded TLB implementations.

This patch splits out a copy for the 32-bit hash MMUs, to facilitate
cleaning it up.  The remaining version is left, but cleaned up slightly
to no longer deal with PowerPC 601 peculiarities (601 has a hash MMU).

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:48 +01:00
David Gibson
59191721a1 target-ppc: Don't share get_pteg_offset() between 32 and 64-bit
The get_pteg_offset() helper function is currently shared between 32-bit
and 64-bit hash mmus, taking a parameter for the hash pte size.  In the
64-bit paths, it's only called in one place, and it's a trivial
calculation.  This patch, therefore, open codes it for 64-bit.  The
remaining version, which is used in two places is made 32-bit only and
moved to mmu-hash32.c.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:48 +01:00
David Gibson
496272a701 target-ppc: Disentangle hash mmu helper functions
The newly separated paths for hash mmus rely on several helper functions
which are still shared with 32-bit hash mmus: pp_check(), check_prot() and
pte_update_flags().  While these don't have ugly ifdefs on the mmu type,
they're not very well thought out, so sharing them impedes cleaning up the
hash mmu paths.  For now, put near-duplicate versions into mmu-hash64.c and
mmu-hash32.c, leaving the old version in mmu_helper.c for 6xx software
loaded tlb implementations.  The hash 32 and software loaded
implementations are simplified slightly, using the fact that no 32-bit CPUs
implement the 3rd page protection bit.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:48 +01:00
David Gibson
f2ad6be83b target-ppc: Disentangle hash mmu versions of cpu_get_phys_page_debug()
cpu_get_phys_page_debug() is a trivial wrapper around
get_physical_address().  But even the signature of
get_physical_address() has some things we'd like to clean up on a
per-mmu basis, so this patch moves the test on mmu model out to
cpu_get_phys_page_debug(), moving the version for 64-bit hash MMUs out
to mmu-hash64.c and the version for 32-bit hash MMUs to mmu-hash32.c

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22 15:28:48 +01:00