This patch adds the VSX scalar move instructions:
- xsabsdp (Scalar Absolute Value Double-Precision)
- xsnabsdp (Scalar Negative Absolute Value Double-Precision)
- xsnegdp (Scalar Negate Double-Precision)
- xscpsgndp (Scalar Copy Sign Double-Precision)
A common generator macro (VSX_SCALAR_MOVE) is added since these
instructions vary only slightly from each other.
Macros to support VSX XX2 and XX3 form opcodes are also added.
These macros handle the overloading of "opcode 2" space (instruction
bits 26:30) caused by AX and BX bits (29 and 30, respectively).
V3: Per feedback from Paolo Bonzini, moved the sign mask into a
temporary and used andc.
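A minimal sketch of the generated TCG for xsabsdp under that approach
(helper names like cpu_vsrh(), xT() and xB() are assumptions based on the
accompanying VSX decoder patches, not necessarily the exact macro body):

    TCGv_i64 xb  = tcg_temp_new_i64();
    TCGv_i64 sgm = tcg_temp_new_i64();
    tcg_gen_mov_i64(xb, cpu_vsrh(xB(ctx->opcode)));
    tcg_gen_movi_i64(sgm, 0x8000000000000000ULL);   /* DP sign bit         */
    tcg_gen_andc_i64(xb, xb, sgm);                  /* xsabsdp: clear sign */
    tcg_gen_mov_i64(cpu_vsrh(xT(ctx->opcode)), xb);
    tcg_temp_free_i64(xb);
    tcg_temp_free_i64(sgm);

xsnabsdp and xsnegdp differ only in using an or/xor against the same sign
mask, and xscpsgndp copies the sign from the other source operand, which is
why one generator macro covers all four instructions.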
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
So far POWER7+ was part of the POWER7 family. However, it has a different
PVR base value, so in order to support PVR masks it needs a separate
family class.
This adds a new family class with PVR base and mask values and moves
the POWER7+ v2.1 CPU to the new family. The class init function is copied
from the POWER7 family.
This defines the firmware name for the new family as "PowerPC,POWER7+"
instead of the previously used "PowerPC,POWER7" from the POWER7 family.
The reason is that the Sapphire firmware (a host firmware) already uses
"PowerPC,POWER7+", and since no specification defines exactly how CPU
nodes are named in the device tree, we had better stay in sync with the
host firmware.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch adds the Store VSX Vector Word*4 Indexed (stxvw4x)
instruction.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch adds the Store VSX Scalar Doubleword Indexed (stxsdx)
instruction.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch adds the Load VSX Vector Word*4 Indexed (lxvw4x)
instruction.
V2: changed to use deposit_i64 per Richard Henderson's review.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch adds the Load VSX Vector Doubleword & Splat Indexed
(lxvdsx) instruction.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch adds the Load VSX Scalar Doubleword Indexed (lxsdx)
instruction.
The lower 8 bytes of the target register are undefined; this
implementation leaves those bytes unaltered.
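Roughly, the generated code amounts to the following sketch (the EA handling
and the gen_qemu_ld64()/cpu_vsrh()/xT() helpers are assumptions about the
surrounding translate code):

    TCGv EA = tcg_temp_new();
    TCGv_i64 t0 = tcg_temp_new_i64();
    gen_addr_reg_index(ctx, EA);                     /* EA = (rA|0) + rB      */
    gen_qemu_ld64(ctx, t0, EA);                      /* load the doubleword   */
    tcg_gen_mov_i64(cpu_vsrh(xT(ctx->opcode)), t0);  /* touch upper half only */
    tcg_temp_free(EA);
    tcg_temp_free_i64(t0);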
Signed-off-by: Tom Musta <tommusta@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch adds the xxpermdi instruction. The instruction
uses bits 22, 23, 29 and 30 for non-opcode fields (DM, AX
and BX). This results in overloading of the opcode table
with aliases, which can be seen in the GEN_XX3FORM_DM
macro.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch adds the stxvd2x instruction.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch adds the lxvd2x instruction.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch adds VSX VSRs to the list of global register indices.
More specifically, it adds the lower halves of the first 32 VSRs to
the list of global register indices. The upper halves of the first
32 VSRs are already defined via cpu_fpr[], and the second 32 VSRs
are already defined via the cpu_avrh[] and cpu_avrl[] arrays.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch adds decoders for the VSX fields XT, XS, XA, XB and
DM. The first four are split fields and a general helper for
these types of fields is also added.
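For illustration, a split 5+1-bit register field can be extracted roughly
like this (the helper name is made up; the bit positions follow the ISA's
XX-form layout):

    static inline int extract_xt(uint32_t opcode)
    {
        /* 5-bit T field (ISA bits 6..10) plus the TX extension bit (bit 31) */
        return ((opcode >> 21) & 0x1f) | ((opcode & 1) << 5);
    }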
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch adds support for the VSX bit of the PowerPC Machine
State Register (MSR) as well as the corresponding VSX Unavailable
exception.
The VSX bit is added to the defined bits masks of the Power7 and
Power8 CPU models.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch adds the flag POWERPC_FLAG_VSX to the list of defined
flags and also adds this flag to the list of supported features of
the Power7 and Power8 CPUs. Additionally, the VSX instructions
are added to the list of TCG-enabled instructions.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
IBM PowerPC processors encode the PVR with the CPU family in the upper
16 bits and the CPU version in the lower 16 bits. Since there is no
significant change in behavior between versions, there is no point in
adding every single CPU version to QEMU's CPU list. Also, new versions
of an already supported CPU won't break the existing code.
This adds PVR value/mask support for KVM, i.e. for -cpu host option.
As the CPU family class name for POWER7 is "POWER7-family", there is no need
to touch aliases.
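Conceptually, matching then reduces to a masked compare of the host PVR
against the family's base value (field names illustrative):

    /* pcc->pvr holds the family base value, pcc->pvr_mask e.g. 0xffff0000 */
    bool matches = (host_pvr & pcc->pvr_mask) == (pcc->pvr & pcc->pvr_mask);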
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
The latest update to v3.13-rc3 (bf63839f) breaks the
ppc build with KVM:
kvm-all.o: In function `kvm_update_guest_debug':
kvm-all.c:1910: undefined reference to `kvm_arch_update_guest_debug'
kvm-all.o: In function `kvm_insert_breakpoint':
kvm-all.c:1937: undefined reference to `kvm_arch_insert_sw_breakpoint'
kvm-all.c:1945: undefined reference to `kvm_arch_insert_hw_breakpoint'
kvm-all.o: In function `kvm_remove_breakpoint':
kvm-all.c:1977: undefined reference to `kvm_arch_remove_sw_breakpoint'
kvm-all.c:1985: undefined reference to `kvm_arch_remove_hw_breakpoint'
kvm-all.o: In function `kvm_remove_all_breakpoints':
kvm-all.c:2009: undefined reference to `kvm_arch_remove_sw_breakpoint'
kvm-all.c:2006: undefined reference to `kvm_arch_remove_sw_breakpoint'
kvm-all.c:2017: undefined reference to `kvm_arch_remove_all_hw_breakpoints'
We need stubs until something gets implemented.
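The stubs themselves are trivial placeholders, e.g. (a sketch; the exact
signatures and return values are illustrative):

    int kvm_arch_insert_sw_breakpoint(CPUState *cs, struct kvm_sw_breakpoint *bp)
    {
        return -EINVAL;    /* no software breakpoint support yet */
    }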
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Reviewed-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Instead of relying on cpu_model, obtain the device tree node label
per CPU. Use DeviceClass::fw_name as source.
Whenever DeviceClass::fw_name is unknown, default to "PowerPC,UNKNOWN".
As a consequence, spapr_fixup_cpu_dt() can operate on each CPU's fw_name,
obsoleting sPAPREnvironment::cpu_model, and spapr_create_fdt_skel() can
drop its cpu_model argument.
Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Set the expected values for POWER7, POWER7+, POWER8 and POWER5+.
Note that the POWER5+ and POWER7+ firmware names intentionally lack the '+',
so the absence of a POWER7P family is not a problem.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch adds support for dumping guest memory using the
dump-guest-memory monitor command.
Before patch:
(qemu) dump-guest-memory testcrash
this feature or command is not currently supported
(qemu)
After patch:
(qemu) dump-guest-memory testcrash
(qemu)
crash was able to read the file
crash> bt
PID: 0 TASK: c000000000c0d0d0 CPU: 0 COMMAND: "swapper/0"
R0: 0000000028000084 R1: c000000000cafa50 R2: c000000000cb05b0
R3: 0000000000000000 R4: c000000000bc4cb0 R5: 0000000000000000
R6: 001efe93b8000000 R7: 0000000000000000 R8: 0000000000000000
R9: b000000000001032 R10: 0000000000000001 R11: 0001eb2117e00d55
....
...
NOTE: Currently the crash tool doesn't look at ELF notes in the dump on ppc64.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Instead of open-coding 64, use MAX_SLB_ENTRIES. We don't update the kernel
header here.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Without this, a value of rb=0 and rs=0 results in replacing the 0th
index. This can be observed when using gdb remote debugging support.
(gdb) x/10i do_fork
0xc000000000085330 <do_fork>: Cannot access memory at address 0xc000000000085330
(gdb)
This is because when we do the SLB sync via kvm_cpu_synchronize_state,
we overwrite the SLB entry (0th entry) for 0xc000000000085330.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Recent PowerKVM allows the kernel to intercept some RTAS calls from the
guest directly. This is used to implement the more efficient in-kernel
XICS for example. qemu is still responsible for assigning the RTAS token
numbers however, and needs to tell the kernel which RTAS function name is
assigned to a given token value. This patch adds a convenience wrapper for
the KVM_PPC_RTAS_DEFINE_TOKEN ioctl() which is used for this purpose.
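A sketch of such a wrapper (the struct and field names follow the kernel
uapi header as far as I know; treat them as assumptions):

    int kvmppc_define_rtas_kernel_token(uint32_t token, const char *function)
    {
        struct kvm_rtas_token_args args = { .token = token };

        if (!kvm_check_extension(kvm_state, KVM_CAP_PPC_RTAS)) {
            return -ENOENT;
        }
        strncpy(args.name, function, sizeof(args.name) - 1);
        return kvm_vm_ioctl(kvm_state, KVM_PPC_RTAS_DEFINE_TOKEN, &args);
    }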
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Commit 2345f1c01 was supposed to turn L2CR writes into no-ops. Instead,
it made them illegal instruction traps, which apparently didn't confuse
XNU but can easily confuse other OSs.
Fix it up by actually doing nothing when we write to L2CR.
Reported-by: Julio Guerra <guerr@julio.in>
Signed-off-by: Alexander Graf <agraf@suse.de>
Tested-by: Julio Guerra <guerr@julio.in>
The Load Vector Element (lve*x) and Store Vector Element (stve*x)
instructions not only byte-swap in Little Endian mode, they also
invert the element that is accessed. For example, the RTL for
lvehx contains this:
eb <-- EA[60:63]
if Big-Endian byte ordering then
VRT[8*eb:8*eb+15] <-- MEM(EA,2)
else
VRT[112-(8*eb):127-(8*eb)] <-- MEM(EA,2)
This patch adds the element inversion, as described in the last line
of the RTL.
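In C terms the adjustment mirrors the element offset within the 16-byte
vector; for an access of 'size' bytes (a sketch, not the exact helper):

    int eb = EA & 0xf;          /* byte offset of the element           */
    if (!big_endian) {
        eb = 16 - size - eb;    /* LE: mirror the element within the VR */
    }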
Signed-off-by: Tom Musta <tommusta@gmail.com>
Reviewed-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
# By Peter Maydell (3) and Ákos Kovács (2)
# Via Paolo Bonzini
* bonzini/configure:
ui/Makefile.objs: delete unnecessary cocoa.o dependency
default-configs/: CONFIG_GDBSTUB_XML removed
Makefile.target: CONFIG_NO_* variables removed
rules.mak: New string testing functions
rules.mak: New logical functions for handling y/n values
CONFIG_NO_* variables replaced with the lnot logical function
Signed-off-by: Ákos Kovács <akoskovacs@gmx.com>
[PMM: fixed a few CONFIG_NO_* uses that were missed]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
# By Richard Henderson
# Via Richard Henderson
* rth/tcg-pull:
exec: Add both big- and little-endian memory helpers
tcg: Add qemu_ld_st_i32/64
tcg: Add TCGMemOp
configure: Remove CONFIG_QEMU_LDST_OPTIMIZATION
tcg: Add tcg-be-ldst.h
tcg: Add tcg-be-null.h
exec: Delete is_tcg_gen_code and GETRA_EXT
tcg-aarch64: Update to helper_ret_*_mmu routines
tcg: Merge tcg_register_helper into tcg_context_init
tcg: Add tcg-runtime.c helpers to all_helpers
tcg: Put target helper data into an array.
tcg: Remove stray semi-colons from target-*/helper.h
tcg: Move helper registration into tcg_context_init
target-m68k: Rename helpers.h to helper.h
tcg: Use a GHashTable for tcg_find_helper
tcg: Delete tcg_helper_get_name declaration
tcg-hppa: Remove tcg backend
Message-id: 1381440525-6666-1-git-send-email-rth@twiddle.net
Signed-off-by: Anthony Liguori <aliguori@amazon.com>
During GEN_HELPER=1, these are actually stray top-level semi-colons
which are technically invalid ISO C, but GCC accepts as an extension.
If we added enough __extension__ markers that we could dare use
-Wpedantic, we'd see
warning: ISO C does not allow extra ‘;’ outside of a function
This will become a hard error in the next patch, wherein those ; will
appear in the middle of a data structure.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Since this is only read in cpu_copy() and linux-user has a global
cpu_model, drop the field from generic code.
Signed-off-by: Andreas Färber <afaerber@suse.de>
The x86 and ppc targets call cpu_synchronize_state() from their
*_cpu_dump_state() callbacks to ensure that up to date state is dumped
when KVM is enabled (for example when a KVM internal error occurs).
Move this call up into the generic cpu_dump_state() function so that
other KVM targets (namely MIPS) can take advantage of it.
This requires kvm_cpu_synchronize_state() and cpu_synchronize_state() to
be moved out of the #ifdef NEED_CPU_H in <sysemu/kvm.h> so that they're
accessible to qom/cpu.c.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Andreas Färber <afaerber@suse.de>
Cc: Alexander Graf <agraf@suse.de>
Cc: Gleb Natapov <gleb@redhat.com>
Cc: qemu-ppc@nongnu.org
Cc: kvm@vger.kernel.org
Signed-off-by: Gleb Natapov <gleb@redhat.com>
* 'tcg-next' of git://github.com/rth7680/qemu: (29 commits)
tcg-i386: Make use of zero-extended memory helper routines
tcg: Introduce zero and sign-extended versions of load helpers
exec: Split softmmu_defs.h
target: Include softmmu_exec.h where forgotten
exec: Rename USUFFIX to LSUFFIX
tcg-i386: Don't perform GETPC adjustment in TCG code
exec: Reorganize the GETRA/GETPC macros
configure: Allow x32 as a host
tcg-i386: Adjust tcg_out_tlb_load for x32
tcg-i386: Use intptr_t appropriately
tcg: Fix jit debug for x32
tcg: Use appropriate types in tcg_reg_alloc_call
tcg: Change tcg_out_ld/st offset to intptr_t
tcg: Change tcg_gen_exit_tb argument to uintptr_t
tcg: Use uintptr_t in TCGHelperInfo
tcg: Change relocation offsets to intptr_t
tcg: Change memory offsets to intptr_t
tcg: Change frame pointer offsets to intptr_t
tcg: Define TCG_ptr properly
tcg: Define TCG_TYPE_PTR properly
...
Several targets forgot to include softmmu_exec.h, which would
break them with a header cleanup to follow.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The function qemu_notify_event is defined by a header that we don't
include in the PPC KVM code. Include it to get the code building
again.
target-ppc/kvm_ppc.c: In function 'kvmppc_timer_hack':
target-ppc/kvm_ppc.c:26:5: error: implicit declaration of function 'qemu_notify_event' [-Werror=implicit-function-declaration]
target-ppc/kvm_ppc.c:26:5: error: nested extern declaration of 'qemu_notify_event' [-Werror=nested-externs]
Signed-off-by: Alexander Graf <agraf@suse.de>
Use SLB_ESID_V instead of (1 << 27) in the code
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Bit extraction for the FP BF and L fields of the MTFSFI and MTFSF
instructions is wrong and doesn't match the reference manual (which
gives the bit numbers in big-endian format). It has been broken since
commit 7d08d85645.
This patch fixes this, which in turn fixes the problem reported by
Khem Raj about the floor() function of libm.
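The general rule when turning the manual's big-endian bit numbers into a
shift/mask on the 32-bit opcode word is (sketch):

    /* field of 'len' bits starting at big-endian bit 'pos' (0 = MSB) */
    static inline uint32_t extract_be(uint32_t insn, int pos, int len)
    {
        return (insn >> (32 - pos - len)) & ((1u << len) - 1);
    }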
Reported-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
CC: qemu-stable@nongnu.org (1.6)
Signed-off-by: Alexander Graf <agraf@suse.de>
Add MSR_LE to the msr_mask for POWER7.
Signed-off-by: Anton Blanchard <anton@samba.org>
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
On POWER7, LPCR_ILE is used to control what endian guests take
their exceptions in so use it instead of MSR_ILE.
Signed-off-by: Anton Blanchard <anton@samba.org>
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
# By Alex Bligh (32) and others
# Via Stefan Hajnoczi
* stefanha/block: (42 commits)
win32-aio: drop win32_aio_flush_cb()
aio-win32: replace incorrect AioHandler->opaque usage with ->e
aio / timers: remove dummy_io_handler_flush from tests/test-aio.c
aio / timers: Remove legacy interface
aio / timers: Switch entire codebase to the new timer API
aio / timers: Add scripts/switch-timer-api
aio / timers: Add test harness for AioContext timers
aio / timers: convert block_job_sleep_ns and co_sleep_ns to new API
aio / timers: Convert rtc_clock to be a QEMUClockType
aio / timers: Remove main_loop_timerlist
aio / timers: Rearrange timer.h & make legacy functions call non-legacy
aio / timers: Add qemu_clock_get_ms and qemu_clock_get_ms
aio / timers: Remove legacy qemu_clock_deadline & qemu_timerlist_deadline
aio / timers: Remove alarm timers
aio / timers: Add documentation and new format calls
aio / timers: Use all timerlists in icount warp calculations
aio / timers: Introduce new API timer_new and friends
aio / timers: On timer modification, qemu_notify or aio_notify
aio / timers: Convert mainloop to use timeout
aio / timers: Convert aio_poll to use AioContext timers' deadline
...
Message-id: 1377202298-22896-1-git-send-email-stefanha@redhat.com
Signed-off-by: Anthony Liguori <anthony@codemonkey.ws>
This is an autogenerated patch using scripts/switch-timer-api.
Switch the entire code base to using the new timer API.
Note this patch may introduce some line length issues.
Signed-off-by: Alex Bligh <alex@alex.org.uk>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Convert stderr messages calling error_get_pretty()
to error_report().
With this, a timestamp is prepended when the -msg timestamp option is used.
Per Markus's comment below, a conversion from fprintf() to
error_report() is always an improvement, regardless of
error_get_pretty().
http://marc.info/?l=qemu-devel&m=137513283408601&w=2
But it is not reasonable to convert them all at once,
because fprintf() is used everywhere in qemu.
So it should be done step by step while avoiding regressions.
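The typical per-call-site change looks like this (sketch):

    /* before */
    fprintf(stderr, "%s\n", error_get_pretty(err));
    /* after  */
    error_report("%s", error_get_pretty(err));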
Signed-off-by: Seiji Aguchi <seiji.aguchi@hds.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Let's avoid -cpu host barfing at this PVR.
Linux recognizes it as "POWER5+ (gs) v2.1".
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Message-id: 1375321323-29954-5-git-send-email-afaerber@suse.de
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
It is ISA 2.03. Modelled as 970FX minus AltiVec flag.
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Message-id: 1375321323-29954-4-git-send-email-afaerber@suse.de
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Message-id: 1375321323-29954-3-git-send-email-afaerber@suse.de
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Message-id: 1375321323-29954-2-git-send-email-afaerber@suse.de
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Commit 03a15a5436 claimed to add a POWER7+
model but instead added a "POWER7P" model, with an unhelpful "POWER7P"
description on top. Fix this to "POWER7+": we already have "POWER3+",
"POWER4+" and "POWER5+", and there is no reason for the user-visible
command line -cpu POWER7P to deviate from the marketing name POWER7+.
Further, don't needlessly deviate from the scheme of naming the PVR constant,
QOM type and device description after the exact revision that is in fact
encoded in the PVR used.
That way, we can change the user-friendly alias -cpu POWER7+ to point to a
different revision if we so desire, while not polluting the type namespace.
This naming scheme is sensible and completely orthogonal to how PVRs may
or may not get matched to CPU types.
Cc: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Message-id: 1375736387-8429-1-git-send-email-afaerber@suse.de
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
This patch adds CPU PVR definition for POWER7+.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-id: 1375412374-24701-1-git-send-email-aik@ozlabs.ru
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Message-id: 1375106733-832-2-git-send-email-afaerber@suse.de
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
'dprintf' is the name of a POSIX standard function, so we should not be
stealing it for our debug macro. Rename it to 'DPRINTF' (in line with
a number of other source files).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
Acked-by: Richard Henderson <rth@twiddle.net>
Acked-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 1375100199-13934-4-git-send-email-peter.maydell@linaro.org
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
At present, the savevm / migration support for the pseries machine will not
work when KVM is enabled. That's because KVM manages the guest's hash page
table in the host kernel, so qemu has no visibility of it. This patch
fixes this by using new kernel interfaces to extract and reinsert the
guest's hash table during the migration process.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Message-id: 1374175984-8930-11-git-send-email-aliguori@us.ibm.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Model TCE tables as a device that's hooked up as a child object to
the owner. Besides the code cleanup, we get a few nice benefits:
1) free actually works now (it was dead code before)
2) the TCE information is visible in the device tree
3) we can expose table information as properties such that if we
change the window_size, we can use globals to keep migration
working.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-id: 1374175984-8930-6-git-send-email-aliguori@us.ibm.com
[dwg: pseries: savevm support for PAPR TCE tables]
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
[alexey: ppc kvm: fix to compile]
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
The savevm code for the powerpc cpu emulation is currently based around
the old register_savevm() rather than register_vmstate() method. It's also
rather broken, missing some important state on some CPU models.
This patch completely rewrites the savevm for target-ppc, using the new
VMStateDescription approach. Exactly what needs to be saved in what
configurations has been more carefully examined, too. This introduces a
new version (5) of the cpu save format. The old load function is retained
to support version 4 images.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-id: 1374175984-8930-2-git-send-email-aliguori@us.ibm.com
[aik: ppc cpu savevm conversion fixed to use PowerPCCPU instead of CPUPPCState]
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Commit c643bed99 moved qemu_init_vcpu() calls to common CPUState code.
This causes x86 cpu-add to fail with "KVM: setting VAPIC address failed".
The reason for the failure is that CPUClass::kvm_fd is not yet
initialized in the following call graph:
->x86_cpu_realizefn
->x86_cpu_apic_realize
->qdev_init
->device_set_realized
->device_reset (hotplugged == 1)
->apic_reset_common
->vapic_base_update
->kvm_apic_vapic_base_update
This causes attempted KVM vCPU ioctls to fail.
By contrast, in the non-hotplug case the APIC is reset much later, when
the vCPU is already initialized.
As a quick and safe solution, move the qemu_init_vcpu() call back into
the targets' realize functions.
Reported-by: Chen Fan <chen.fan.fnst@cn.fujitsu.com>
Acked-by: Igor Mammedov <imammedo@redhat.com> (for i386)
Tested-by: Jia Liu <proljc@gmail.com> (for openrisc)
Signed-off-by: Andreas Färber <afaerber@suse.de>
Replace the GDB_CORE_XML define in gdbstub.c with a CPUClass field.
Use first_cpu for qSupported and qXfer:features:read: for now.
Add a stub for xml_builtin.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Completes migration of target-specific code to new target-*/gdbstub.c.
Acked-by: Michael Walle <michael@walle.cc> (for lm32)
Acked-by: Max Filippov <jcmvbkbc@gmail.com> (for xtensa)
Signed-off-by: Andreas Färber <afaerber@suse.de>
This avoids polluting the global namespace with a non-prefixed macro and
makes it obvious in the call sites that we return.
Semi-automatic conversion using, e.g.,
sed -i 's/GET_REGL(/return gdb_get_regl(mem_buf, /g' target-*/gdbstub.c
followed by manual tweaking for sparc's GET_REGA() and Coding Style.
Acked-by: Michael Walle <michael@walle.cc> (for lm32)
Acked-by: Max Filippov <jcmvbkbc@gmail.com> (for xtensa)
Signed-off-by: Andreas Färber <afaerber@suse.de>
CPUState::gdb_num_regs replaces num_g_regs.
CPUClass::gdb_num_core_regs replaces NUM_CORE_REGS.
Allows building gdb_register_coprocessor() for xtensa, too.
As a side effect this should fix coprocessor register numbering for SMP.
Acked-by: Michael Walle <michael@walle.cc> (for lm32)
Acked-by: Max Filippov <jcmvbkbc@gmail.com> (for xtensa)
Signed-off-by: Andreas Färber <afaerber@suse.de>
Change breakpoint_invalidate() argument to CPUState alongside.
Since all targets now assign a softmmu-only field, we can drop helpers
cpu_class_set_{do_unassigned_access,vmsd}() and device_class_set_vmsd().
Prepares for changing cpu_memory_rw_debug() argument to CPUState.
Acked-by: Max Filippov <jcmvbkbc@gmail.com> (for xtensa)
Signed-off-by: Andreas Färber <afaerber@suse.de>
Prepares for changing cpu_single_step() argument to CPUState.
Acked-by: Michael Walle <michael@walle.cc> (for lm32)
Signed-off-by: Andreas Färber <afaerber@suse.de>
Where no extra implementation is needed, fall back to CPUClass::set_pc().
Acked-by: Michael Walle <michael@walle.cc> (for lm32)
Signed-off-by: Andreas Färber <afaerber@suse.de>
This moves setting the Program Counter from gdbstub into target code.
Use vaddr type as upper-bound replacement for target_ulong.
Signed-off-by: Andreas Färber <afaerber@suse.de>
This patch adds CPU PVR definition for POWER8,
and enables QEMU to launch guests on POWER8 hardware.
Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Paul Mackerras <paulus@samba.org>
Reviewed-by: Andreas Farber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
MPC86xx processors are based on the e600 core, whereas in qemu they are
currently modelled on the 7400 processor.
This patch creates the e600 core and instantiates the MPC86xx
processors based on it. It therefore adds the high BATs and the SPRG
4..7 registers, which are e600-specific [1], and a HW MMU model (as on
the 7400). This also allows the MPC8610 processor to be defined.
Tested with a kernel using the HW TLB misses.
[1] http://cache.freescale.com/files/32bit/doc/ref_manual/E600CORERM.pdf
Signed-off-by: Julio Guerra <guerr@julio.in>
Signed-off-by: Alexander Graf <agraf@suse.de>
x86 was using additional CPU_DUMP_* flags, so make that configurable in
CPUClass::reset_dump_flags.
This adds reset logging for alpha, unicore32 and xtensa.
Acked-by: Michael Walle <michael@walle.cc> (for lm32)
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Choose CPUState rather than PowerPCCPU since doing a CPU() cast on the
macro argument would hide type mismatches.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Since commit 878096eeb2 (cpu: Turn
cpu_dump_{state,statistics}() into CPUState hooks) CPUArchState is no
longer needed.
Add documentation and make the functions available through qemu/log.h
outside NEED_CPU_H to allow use in qom/cpu.c. Moving them to qom/cpu.h
was not yet possible due to convoluted include paths, so that some
devices grow an implicit and unneeded dependency on qom/cpu.h for now.
Acked-by: Michael Walle <michael@walle.cc> (for lm32)
Reviewed-by: Richard Henderson <rth@twiddle.net>
[AF: Simplified mb_cpu_do_interrupt() and do_interrupt_all() changes]
Signed-off-by: Andreas Färber <afaerber@suse.de>
Also use bool type while at it.
Prepares for moving singlestep_enabled field to CPUState.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Commit b632a148b6 (target-ppc: QOM method
dispatch for MMU fault handling) introduced a use of ENV_GET_CPU()
inside target-ppc/ code. Use ppc_env_get_cpu() instead.
Purely cosmetic, non-functional change to aid in locating and removing
ENV_GET_CPU() usages.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Move next_cpu from CPU_COMMON to CPUState.
Move first_cpu variable to qom/cpu.h.
gdbstub needs to use CPUState::env_ptr for now.
cpu_copy() no longer needs to save and restore cpu_next.
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
[AF: Rebased, simplified cpu_copy()]
Signed-off-by: Andreas Färber <afaerber@suse.de>
A transition from CPUFooState to FooCPU can be considered safe,
just like FooCPU::env access in the opposite direction.
The only benefit of the FOO_CPU() casts would be protection against
bogus CPUFooState pointers, but then surrounding code would likely
break, too.
This should slightly improve interrupt etc. performance when going from
CPUFooState to FooCPU.
For any additional CPU() casts see 3556c233d9
(qom: allow turning cast debugging off).
Reported-by: Anthony Liguori <aliguori@us.ibm.com>
Acked-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>
The functions cpu_clone_regs() and cpu_set_tls() are not purely CPU
related -- they are specific to the TLS ABI for a particular OS.
Move them into the linux-user/ tree where they belong.
target-lm32 had entirely unused implementations, since it has no
linux-user target; just drop them.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>
The L2CR register contains a number of bits that either impose configuration
which we can't deal with or mean "something is in progress until the bit is
0 again".
Since we don't model the former and we do want to accomodate guests using the
latter semantics, let's just ignore writes to L2CR. That way guests always read
back 0 and are usually happy with that.
Signed-off-by: Alexander Graf <agraf@suse.de>
When running QEMU with "-cpu ?" we walk through every alias for every
target CPU we know about. This takes several seconds on my very fast
host system.
Let's introduce a class object cache in the alias table. Using that we
don't have to go through the tedious work of finding our target class.
Instead, we can just go directly from the alias name to the target class
pointer.
This patch brings -cpu "?" to reasonable times again.
Before:
real 0m4.716s
After:
real 0m0.025s
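The cache is essentially one extra pointer in the alias table (field name
illustrative):

    typedef struct PowerPCCPUAlias {
        const char *alias;
        const char *model;
        ObjectClass *oc;    /* cached target class, filled on first lookup */
    } PowerPCCPUAlias;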
Signed-off-by: Alexander Graf <agraf@suse.de>
On PPC 6xx, data and code have separated TLBs. Until now QEMU was only
looking at data TLBs, which is not good when GDB wants to read code.
This patch adds a second call to get_physical_address() with an
ACCESS_CODE type of access when the first call with ACCESS_INT fails.
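A sketch of that fallback in the debug translation path (argument order of
get_physical_address() as I recall it; treat as illustrative):

    if (get_physical_address(env, &ctx, addr, 0, ACCESS_INT) != 0) {
        /* Data TLB lookup failed: retry as an instruction fetch so that
         * GDB can still read code pages. */
        if (get_physical_address(env, &ctx, addr, 0, ACCESS_CODE) != 0) {
            return -1;
        }
    }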
Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
"(qemu) info tlb" is a very useful tool for debugging, so I implemented
the missing 6xx version.
Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
[agraf: fix printfs on hwaddr to PRI]
Signed-off-by: Alexander Graf <agraf@suse.de>
Use it to clean up the opcode table, resolving a former TODO from Jocelyn.
Also switch from malloc() to g_malloc().
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
This adds missing code to save CR (condition register) via
kvm_arch_put_registers(). kvm_arch_get_registers() already has it.
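The missing hunk boils down to assembling regs.cr from the eight CR fields
before the SET_REGS ioctl (a sketch):

    regs.cr = 0;
    for (i = 0; i < 8; i++) {
        regs.cr |= (env->crf[i] & 0x0f) << (4 * (7 - i));
    }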
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
IABR SPR is already registered in gen_spr_603(), called from init_proc_603E().
Signed-off-by: Hervé Poussineau <hpoussin@reactos.org>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Previous code had #define POWERPC_INSNS2_<family> PPC_NONE in some
places for the macrofied assignment to the insns_flags2 field.
PPC_NONE is defined as zero though and QOM classes are zero-initialized,
so drop any pcc->insns_flags2 = PPC_NONE; assignments.
PPC_NONE itself is still in use in translate.c.
Suggested-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Enables support for the in-kernel MPIC that has been merged into the
KVM next branch. This includes irqfd/KVM_IRQ_LINE support from Alex
Graf (along with some other improvements).
Note from Alex regarding kvm_irqchip_create():
On x86, one would call kvm_irqchip_create() to initialize an
in-kernel interrupt controller. That function then goes ahead and
initializes global capability variables as well as the default irq
routing table.
On ppc, we can't call kvm_irqchip_create() because we can have
different types of interrupt controllers. So we want to do all the
things that function would do for us in the in-kernel device init
handler.
Signed-off-by: Scott Wood <scottwood@freescale.com>
[agraf: squash in kvm_irqchip_commit_routes patch, fix non-kvm build,
fix ppcemb]
Signed-off-by: Alexander Graf <agraf@suse.de>
There are cases where a kvm provided function is called from generic
hw code that doesn't know whether kvm is available or not. Provide
a stub file which can provide simple replacement functions for those
cases.
Signed-off-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
This allows the call to be moved into CPUState's realizefn.
Therefore move the stub into libqemustub.a.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Make cpustats monitor command available unconditionally.
Prepares for changing kvm_handle_internal_error() and kvm_cpu_exec()
arguments to CPUState.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Change Monitor::mon_cpu to CPUState as well.
Reviewed-by: liguang <lig.fnst@cn.fujitsu.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
The common KVM code insists on calling kvm_arch_init_irq_routing()
as soon as it sees kernel header support for it (regardless of whether
QEMU supports it). Provide a dummy function to satisfy this.
Unlike x86, PPC does not have one default irqchip, so there's no common
code that we'd stick here. Even if you ignore the routes themselves,
which even on x86 are not set up in this function, the initial XICS
kernel implementation will not support IRQ routing, so it's best to
leave even the general feature flags up to the specific irqchip code.
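The dummy is therefore empty (sketch):

    void kvm_arch_init_irq_routing(KVMState *s)
    {
    }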
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Some source files #include the same header more than
once for no good reason. Remove second #includes in
such cases.
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
When running an L=1 cmp instruction on a 64-bit PPC CPU with SF off, it
still behaves identically to what it does when SF is on. Remove the implicit
difference in the code.
Also, on most 32-bit CPUs we should always treat the compare as a 32-bit
compare, as the CPU will ignore the L bit. This is not true for e500mc,
but that's up for a different patch.
Reported-by: Torbjorn Granlund <tg@gmplib.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
The implementation of rldcl tried to always fetch its
parameters from the opcode, even though the parameters were
already passed in, in decoded form.
Use the passed-in parameters instead, fixing rldcl.
Reported-by: Torbjorn Granlund <tg@gmplib.org>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
Recent Linux kernels save and restore the PPR across exceptions
so we need to handle it.
Signed-off-by: Anton Blanchard <anton@au1.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Invalid and privileged SPR warnings currently print the wrong
address. While fixing that, also make it clear that we are
printing both the decimal and hexadecimal SPR number.
Before:
Trying to read invalid spr 896 380 at 0000000000000714
After:
Trying to read invalid spr 896 (0x380) at 0000000000000710
Signed-off-by: Anton Blanchard <anton@au1.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
When running -cpu on a POWER7 system with PR KVM, we mask out the 1TB
MMU capability from the MMU type mask, but not the AMR bit.
This leads to us having a new MMU type that we don't check for in our
MMU management functions.
Add the new type, so that we don't have to worry about breakage there.
We're not going to use the TCG MMU management in that case anyway.
The long term fix for this will be to move all these MMU management
functions to class callbacks.
Signed-off-by: Alexander Graf <agraf@suse.de>
Power ISA 2.05 adds support for extended mtfsf/mtfsfi form, with a new
W field to select the upper part of the FPSCR register. For that the helper
For that the helper is changed to handle 64-bit input values and mask with
up to 16 bits. The mtfsf/mtfsfi instructions do not have the W bit
marked as invalid anymore. Instead this is checked in the helper, which
therefore needs access to insns/insns_flags2. They are added in
the DisasContext struct. Finally, all opcode fields are now accessed
through extract helpers, prefixed with FP for consistency.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
Needed for Power ISA version 2.05 compliance. The check for odd register
pairs is done using the invalid bits.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
Needed for Power ISA version 2.05 compliance. The check for odd register
pairs is done using the invalid bits.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
Needed for Power ISA version 2.05 compliance.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
[agraf: fix tcg debug error]
Signed-off-by: Alexander Graf <agraf@suse.de>
Needed for Power ISA version 2.05 compliance.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
Needed for Power ISA version 2.05 compliance.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
[agraf: fix 32-bit host compile, simplify code]
Signed-off-by: Alexander Graf <agraf@suse.de>
Needed for Power ISA version 2.05 compliance.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
.. and enable it on POWER7 CPU.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
fabs, fnabs and fneg are just flipping the sign bit of an FP register,
this can be implemented in TCG instead of using softfloat.
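For example, fneg reduces to a single xor of the sign bit (a sketch; the
register macros are assumed from translate.c):

    tcg_gen_xori_i64(cpu_fpr[rD(ctx->opcode)],
                     cpu_fpr[rB(ctx->opcode)],
                     1ULL << 63);          /* flip the sign bit */

fabs clears the same bit with an andi, and fnabs sets it with an ori.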
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
The default with linux-user for dcbz on 970 is to emulate 32-byte clears.
However, while redoing the dcbzl support we added a check to not honor the
bit in HID5 that sets this.
Remove the #ifdef check on linux-user, so that we get 32-byte clears again.
Reported-by: Riku Voipio <riku.voipio@iki.fi>
Signed-off-by: Alexander Graf <agraf@suse.de>
Raise the exception on the first occurrence; do not wait for the next
floating point operation.
Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
For PAPR guests, KVM tracks the various areas registered with the
H_REGISTER_VPA hypercall. For full emulation, of course, these are tracked
within qemu. At present these values are not synchronized. This is a
problem for reset (qemu's reset of the VPA address is not pushed to KVM)
and will also be a problem for savevm / migration.
The kernel now supports accessing the VPA state via the ONE_REG interface,
this patch adds code to qemu to use that interface to keep the qemu and
KVM ideas of the VPA state synchronized.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
In addition to the performance monitor registers found on nearly all
6xx chips, the POWER7 has two additional counters (PMC5 & PMC6) and an
extra control register (MMCRA). This patch adds stub support for them to
qemu - the registers won't do anything, but with this change accessing them
won't cause illegal instruction traps. They're also registered with
their ONE_REG ids, so their value will be kept in sync with KVM where
appropriate.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
PAPR requires that the device tree's CPU nodes have several properties
with information about the L1 cache. We already create two of these
properties, but with incorrect names - "[id]cache-block-size" instead
of "[id]-cache-block-size" (note the extra hyphen).
We were also missing some of the required cache properties. This
patch adds the [id]-cache-line-size properties (which have the same
values as the block size properties in all current cases). We also
add the [id]-cache-size properties.
Adding the cache sizes requires some extra infrastructure in the
general target-ppc code to (optionally) set the cache sizes for
various CPUs. The CPU family descriptions in translate_init.c can set
these sizes - this patch adds correct information for POWER7, I'm
leaving other CPU types to people who have a physical example to
verify against. In addition, for -cpu host we take the values
advertised by the host (if available) and use those to override the
information based on PVR.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
For the pseries machine, we need to advertise to the guest the size of its
RMA - that is the amount of memory it can access with the MMU off. For HV
KVM, this is constrained by the hardware limitations on the virtual RMA of
one hash PTE per PTE group in the hash page table. We already had code to
calculate this, but it was assuming the VRMA page size was the same as the
(host) backing page size for guest RAM.
In the case of a host kernel configured for 64k base page size, but running
on hardware (or firmware) which only allows 4k pages, the host will do all
its allocations with a 64k page size, but still use 4k hardware pages for
actual mappings. Usually that's transparent to things running under the
host, but in the case of the maximum VRMA size it's not.
This patch refines the RMA size calculation to instead use the largest
available hardware page size (as reported by the SMMU_INFO call) which is
less than or equal to the backing page size. This now gives the correct
RMA size in all cases I've tested.
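In pseudo-C the selection is simply the largest supported page size that
does not exceed the backing page size (a sketch; the struct layout follows
the KVM SMMU info as far as I know):

    long rma_pagesize = 4096;
    for (i = 0; i < KVM_PPC_PAGE_SIZES_MAX_SZ; i++) {
        long sps = 1L << smmu_info.sps[i].page_shift;
        if (sps > rma_pagesize && sps <= backing_pagesize) {
            rma_pagesize = sps;
        }
    }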
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Enable the KVM emulated watchdog if KVM supports it (using the
capability enablement in the watchdog handler). Watchdog exit
(KVM_EXIT_WATCHDOG) handling is also added.
Watchdog state machine is cleared whenever VM state changes to running.
This is to handle the cases like return from debug halt etc.
Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
[agraf: rebase to current code base, fix non-kvm cases]
Signed-off-by: Alexander Graf <agraf@suse.de>
Broken in b5a73f8d8a, the carry itself was
fixed in 79482e5ab3. But we still need to
produce the full 64-bit addition.
Simplify the conditions at the top of the functions for when we need a
new temporary. Only plain addition is important enough to warrant avoiding
the temporary, and the extra tcg move op that would come with it.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Tested-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
According to the different user's manuals, the vector offset for system
reset (both /HRESET and /SRESET) is 0x00100.
This patch may break support for some executables, as the power-on start
address may change. For a specific board, if the power-on start address
is different than HRESET vector (i.e. 0x00000100 or 0xfff00100), this
should be fixed in board's initialization code.
Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
The overflow computation of nego and subf*o instructions has been broken
in commit ffe30937. Contrary to other targets, on PowerPC the instruction
is a "subtract from" and not a subtract.
This patch fixes the issue by using the correct argument in the xor
computation. Thanks to Peter Maydell for the hint.
With this change the PPC emulation passes the Gwenole Beauchesne
testsuite again.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
This value is not needed if we use the MSR[IP] bit correctly.
excp_prefix is always 0x00000000, except when the MSR[IP] bit is
implemented and set to 1, in that case excp_prefix is 0xfff00000.
The handling of MSR[IP] was already implemented but not used at reset
because the value of env->msr was changed "manually".
The patch uses the function hreg_store_msr() to set env->msr, this
ensures a good handling of MSR[IP] at reset, and therefore a good value
for excp_prefix.
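The relationship the patch relies on is simply (sketch):

    env->excp_prefix = msr_ip ? 0xFFF00000 : 0x00000000;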
Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Older KVM versions don't support EPR which breaks guests when we announce
MPIC variants that support EPR.
Catch that case and expose only MPIC version 2.0 which tells the guest that
we don't support the EPR capability yet.
Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>
[agraf: Add comment, route cap check through kvm_ppc.c]
Signed-off-by: Alexander Graf <agraf@suse.de>
ISEL is a Power ISA 2.06 instruction and thus is available on POWER7.
Given this is trapped and emulated by the Linux kernel, I guess it went
unnoticed.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
Many of these should be cleaned up with proper qdev-/QOM-ification.
Right now there are many catch-all headers in include/hw/ARCH depending
on cpu.h, and this makes it necessary to compile these files per-target.
However, fixing this does not belong in these patches.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The set of computations used in b5a73f8d8a
are only valid if the current word size == target_long size. This failed
to take ppc64 in 32-bit (narrow) mode into account.
Add a NARROW_MODE macro to avoid conditional compilation.
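The macro's shape is roughly (sketch):

    #if defined(TARGET_PPC64)
    #define NARROW_MODE(C)  (!(C)->sf_mode)  /* 64-bit CPU in 32-bit mode        */
    #else
    #define NARROW_MODE(C)  0                /* 32-bit target: nothing to narrow */
    #endif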
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
After previous cleanups, the many scattered checks of env->mmu_model in
the ppc MMU implementation have, at least for "classic" hash MMUs, been
reduced (almost) to a single switch at the top of
cpu_ppc_handle_mmu_fault().
An explicit switch is still a pretty ugly way of handling this though. Now
that Andreas Färber's CPU QOM cleanups for ppc have gone in, it's quite
straightforward to instead make the handle_mmu_fault function a QOM method
on the CPU object.
This patch implements such a scheme, initializing the method pointer at
the same time as the mmu_model variable. We need to keep the latter around
for now, because of the MMU types (BookE, 4xx, et al) which haven't been
converted to the new scheme yet, and also for a few other uses. It would
be good to clean those up eventually.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
For softmmu builds the interface from the generic code to the target
specific MMU implementation is through the tlb_fill() function. For ppc
this is currently in mem_helper.c, whereas it would make more sense in
mmu_helper.c. This patch moves it, which also allows
cpu_ppc_handle_mmu_fault() to become a local function in mmu_helper.c
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
mmu_helper.c is, for obvious reasons, almost entirely concerned with
softmmu builds of qemu. However, it does contain one stub function which
is used when CONFIG_USER_ONLY=y - the user-only version of
cpu_ppc_handle_mmu_fault, which always triggers an exception. The entire
rest of the file is surrounded by #if !defined(CONFIG_USER_ONLY).
We clean this up by moving the user only stub into its own new file,
removing the ifdefs and building mmu_helper.c only when CONFIG_SOFTMMU
is set. This also lets us remove the #define of cpu_handle_mmu_fault to
cpu_ppc_handle_mmu_fault - that name is only used from generic code for
user only - so we just name our split user version by the generic name.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Version 2.06 of the Power architecture describes an additional page
protection mechanism. Each virtual page has a "class" (0-31) recorded in
the PTE. The AMR register contains bits which can prohibit reads and/or
writes on a class by class basis. Interestingly, the AMR is userspace
readable and writable, however user mode writes are masked by the contents
of the UAMOR which is privileged.
This patch implements this protection mechanism, along with the AMR and
UAMOR SPRs. The architecture also specifies a hypervisor-privileged AMOR
register which masks user and supervisor writes to the AMR and UAMOR. We
leave this out for now, since we don't at present model hypervisor mode
correctly in any case.
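A sketch of the resulting check (the exact bit layout of the AMR pairs is
my assumption, not taken from this description):

    int key = pte_class;                            /* 0..31, from the PTE */
    int amrbits = (amr >> (2 * (31 - key))) & 0x3;  /* 2 bits per class    */
    if (amrbits & 0x2) {
        prot &= ~PAGE_WRITE;                        /* writes prohibited   */
    }
    if (amrbits & 0x1) {
        prot &= ~PAGE_READ;                         /* reads prohibited    */
    }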
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[agraf: fix 32-bit hosts]
Signed-off-by: Alexander Graf <agraf@suse.de>
ppc_hash{32,64}_handle_mmu_fault() is now the only caller of
ppc_hash{32,64}_translate(), so this patch combines them together. This
means that instead of one returning a variety of non-obvious error codes
which then get translated into the various mmu exception conditions, we can
just generate the exceptions as we discover problems in the translation
path. This also removes the last usage of mmu_ctx_hash{32,64}.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Currently the hash mmu versions of get_phys_page_debug() use the same
ppc64_hash64_translate() function to do the translation logic as the normal
mm fault handler code.
That sounds like a good idea, but has some complications. The debug path
doesn't need, or even want some parts of the full translation path, like
permissions checking. Furthermore, the pte flags update included in the
normal path means that the debug call is not quite side effect free.
This patch, therefore, reimplements get_phys_page_debug as the minimal
required subset of the full translation path.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
BEHAVIOUR CHANGE
At present we take the whole of word 1 of the hash PTE as the real page
number used to calculate the translated address. This is incorrect,
because it leaves the flags from the low bits of PTE word 1 in place in the
rpm. We mostly get away with that because the value is later masked by
TARGET_PAGE_MASK.
More recent 64-bit CPUs also have a small number of flag bits (PP0 and
KEY) in the top bits of PTE word 1. Any guest which used those bits would
fail with the current code.
This patch fixes the problem by correctly masking out the RPN field of
PTE word 1. This is safe, even for older CPUs which didn't have PP0 and
KEY, because although the RPN notionally extended to the very top of PTE
word 1, none of those CPUs actually implemented that many real address
bits.
We add analogous masking to the 32-bit code, even though it also doesn't
have the high flag bits, for consistency and clarity.
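The fix amounts to masking word 1 down to the RPN field before using it
(constant name illustrative):

    raddr = (pte1 & HPTE64_R_RPN) | (eaddr & ~TARGET_PAGE_MASK);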
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
More recent 64-bit hash MMUs support multiple page sizes, and PTEs for
large pages only include the offset of the whole large page. But the qemu
tlb only handles pages of the base size (4k) so we need to break up the
large pages into 4k pieces for the qemu tlb. To do that we have a somewhat
awkward piece of code that folds the address bits between 4k and the page
size from the virtual address into the real address from the pte.
This patch simplifies this by redefining the raddr output of
ppc_hash64_translate() to be the full real address of the faulting address,
rather than just the (4k) page offset. Computing that turns out to be
simpler, and is fine for the caller, since it already masks with
TARGET_PAGE_MASK before inserting into the qemu tlb.
The multiple page size complication doesn't exist for 32-bit hash mmus, but
we make an analogous cleanup there for consistency.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Currently the ppc_hash{32,64}_pte_update_flags() helper functions update a
PTE's referenced and changed bits as necessary to reflect the access. It
is somewhat long-winded, though. This patch open-codes them in their
(single) callers, in a simpler way.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
BEHAVIOUR CHANGE
Currently, for 64-bit hash mmu, the execute protection bit placed into the
qemu tlb is based only on the N (No execute) bit from the PTE. However,
No Execute can also be set at the segment level. We do check this on
execute faults, but this still means we could incorrectly allow execution
of code from a No Execute segment, if a prior read or write fault caused
the page to be loaded into the qemu tlb with PROT_EXEC set.
To correct this, we (re-)check the segment level no execute permission when
generating the protection bits for the qemu tlb.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Currently checking of PTE permission bits is split messily amongst
ppc_hash{32,64}_pp_check(), ppc_hash{32,64}_check_prot() and their callers.
This patch cleans this up to have the new function
ppc_hash{32,64}_pte_prot() compute the page permissions from the SLBE (for
64-bit) or segment register (32-bit) and the pte. A greatly simplified
version of the actual permissions check is then open coded in the callers.
The 32-bit version of ppc_hash32_pte_prot() is implemented in terms of
ppc_hash32_pp_prot(), a renamed and slightly cleaned up version of the old
ppc_hash32_pp_check(), which is also used for checking BAT permissions on
the 601.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Previous cleanups have meant the nx field of the mmu_ctx_hash32 structure
is now only used within ppc_hash32_translate(), and so it can be replaced
by a local variable.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
BEHAVIOUR CHANGE
Currently if ppc_hash{32,64}_translate() finds a PTE matching the given
virtual address, it will always update the PTE's R & C (Referenced and
Changed) bits. This happens even if the PTE's permissions mean we are
about to deny the translation.
This is clearly a bug, although we get away with it because:
a) It will only incorrectly set, never reset the bits, which should not
cause guest correctness problems.
b) Linux guests never use the R & C bits anyway.
This patch fixes the behaviour, only updating R & C when access is granted
by the PTE.
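In outline the fixed flow looks like this (sketch only; the R/C bit masks
and the store helper are placeholders, not the architected definitions):
  #define HPTE_R  0x100   /* Referenced -- placeholder value */
  #define HPTE_C  0x080   /* Changed    -- placeholder value */
  if (need_prot & ~prot) {
      /* access denied: deliver the fault, leave the PTE untouched */
  } else {
      uint64_t new_pte1 = pte1 | HPTE_R;      /* referenced on any access */
      if (rwx == 1) {
          new_pte1 |= HPTE_C;                 /* changed only on stores */
      }
      if (new_pte1 != pte1) {
          store_hpte1(env, pte_offset, new_pte1);  /* hypothetical helper */
      }
  }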
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
BEHAVIOUR CHANGE
Currently, on any failure translating an address with BATs, we proceed to
normal segment and page table translation. That's incorrect if the
BAT error was due to permissions, rather than not finding a matching BAT.
We've gotten away with it because a guest would not usually put
translations for the same address in both BATs and page table. Nonetheless
this patch corrects the logic, only doing page table lookup if no BAT
is found. A matching BAT with bad permissions will now correctly trigger
an exception.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch makes a general cleanup of the ppc_hash32_get_bat() function,
renaming it to ppc_hash32_bat_lookup(). In particular, the new function
only looks for a matching BAT, with the permissions check from the old
function moved to the caller.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
The code to search for a matching BAT for a virtual address is somewhat
long-winded and awkward. In particular, it relies on separate size and
validity information being returned from the hash32_bat_size() function
(and 601 specific variant).
We simplify this by having hash32_bat_size() return instead a mask of the
virtual address bits to match, and 0 for invalid (since a BAT can never
match the entire address space).
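A sketch of how the match then reads (names illustrative):
  #include <stdint.h>
  /* mask: the virtual-address bits a BAT must match, as returned by
   * hash32_bat_size(); 0 means the BAT is invalid and never matches. */
  static int bat_match_sketch(uint32_t mask, uint32_t batu_bepi, uint32_t ea)
  {
      return mask != 0 && (ea & mask) == (batu_bepi & mask);
  }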
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
hash32_bat_size_prot() and its 601 variant, as the name suggests, returns
both a BAT's size - needed to search for a matching BAT - and its
permissions, only relevant once a matching BAT has been located.
There's no particular advantage to combining these, so we split these roles
into separate functions for clarity.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
In the code for handling BATs, the hash32_bat_size_prot() and
hash32_bat_601_size_prot() functions are passed the BAT contents by
reference (pointer) for no clear reason, since they only need the values
within.
This patch removes this odd usage, and uses the resulting change to clean
up the caller slightly.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
With previous cleanups made, the 32-bit and 64-bit pte_check*() functions
are pretty trivial and only have one call site. This patch therefore
clarifies the overall code flow by folding those functions into their
call site.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch makes a general cleanup of the address mangling logic in
ppc_hash64_htab_lookup(). In particular it now avoids repeatedly switching
on the segment size. The lack of SLB and multiple segment sizes on 32-bit
means an analogous cleanup is not needed there.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
find_pte{32,64}() are poorly named, since they both find a PTE and do
permissions checking of it. This patch makes them only locate a matching
PTE, moving the permission checking and other logic to the caller. We
rename the resulting search functions ppc_hash{32,64}_htab_lookup().
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
find_pte{32,64}() are not particularly well named. They only "find" a PTE
within a given PTE group, and they also do permissions checking and other
things.
This patch makes it somewhat close to matching the name, by folding the
search of both primary and secondary hash bucket into it, along with the
various address bit shuffling to determine the right hash buckets.
In the 32-bit case we also remove the code for splitting large pages into
4k pieces for the qemu tlb, since no 32-bit hash MMUs support multiple page
sizes.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
find_pte{32,64}() do several things. First they search through a PTEG
looking for a PTE matching our virtual address. Then they do permissions
checking and other processing on that PTE.
This patch separates the search by VA out from the rest. The search is
combined with the pte{32,64}_match() functions into new
ppc_hash{32,64}_pteg_search() functions.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
BEHAVIOUR CHANGE
The ppc hash mmu hashes each virtual address to a primary and secondary
possible hash bucket (aka PTE group or PTEG) each with 8 PTEs. Then we
need a linear search through the PTEs to find the correct one for the
virtual address we're translating.
It is a programming error for the guest to insert multiple PTEs mapping the
same virtual address into a PTEG - in this case the ppc architecture says
the MMU can either act as if just one was present, or give a machine check.
Currently our code takes the first matching PTE in a PTEG if it finds a
successful translation. But if a matching PTE is found, but permission
bits don't allow the access, we keep looking through the PTEG, checking
that any other matching PTEs contain an identical translation.
That behaviour is perhaps not exactly wrong, but it's certainly not useful.
This patch changes it to always just find the first matching PTE in a PTEG.
In addition, if we get a permissions problem on the primary PTEG, we then
search the secondary PTEG. This is incorrect - a permission denying PTE
in the primary PTEG should not be overwritten by an access granting PTE in
the secondary (although again, it would be a programming error for the
guest to set up such a situation anyway). So additionally we update the
code to only search the secondary PTEG if no matching PTE is found in the
primary at all.
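A toy sketch of the resulting search order (not the actual code; one PTEG
is modelled as 8 two-word entries and the match test is simplified):
  #include <stdint.h>
  #define HPTES_PER_GROUP 8
  typedef struct { uint64_t pte0, pte1; } toy_hpte;
  /* Return the index of the first PTE whose tag word matches ptem, or -1
   * if none matches.  Deliberately no permission logic: first match wins. */
  static int pteg_search_sketch(const toy_hpte pteg[HPTES_PER_GROUP],
                                uint64_t ptem)
  {
      for (int i = 0; i < HPTES_PER_GROUP; i++) {
          if (pteg[i].pte0 == ptem) {
              return i;
          }
      }
      return -1;
  }
  /* Caller side: the secondary PTEG is only consulted when the primary
   * produced no match at all (a permission failure is still a match). */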
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
On the ppc hash mmus, no-execute can be set at the segment level (on more
recent 64-bit hash mmus it can also be set at the page level). This patch
separates out this check to make it clearer what is going on, and avoids
excessive indentation of the remaining translation code.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
This further separates the unusual case handling of direct store segments
from the main translation path by moving its logic into a helper function,
with some tiny cleanups along the way.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
At present a large chunk of ppc_hash32_translate() is taken up with an
ugly if selecting between direct store segments (hardly ever used) and
normal paged segments. This patch clarifies the flow of code by
handling direct store segments immediately then returning, leaving the
straight line code to describe the normal MMU path.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
After previous work, ppc_hash{32,64}_get_physical_address() are almost
trivial wrappers around get_segment{32,64}() which does nearly all the work of
translating an address according to the hash mmu model. Therefore combine the
two functions into one, under the better name of
ppc_hash{32,64}_translate().
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
The eaddr field of mmu_ctx_hash{32,64} is effectively just used to pass the
effective address from get_segment{32,64}() to find_pte{32,64}(). Just
pass it as a normal parameter instead.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
The nx field in mmu_ctx_hash64 is used in two different functions. But it's
used for slightly different things in each place, and the value is never
propagated between them. In other words, it might as well be two local
variables. This patch makes it so.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
In ppc env->access_type is updated by e.g. integer load/stores with
ACCESS_INT, floating point load/stores with ACCESS_FLOAT, and so forth. In
hash mmu fault paths it can also be set to ACCESS_CODE for instruction
fetch accesses.
But the only place which uses anything more of the access_type than
whether it is instruction fetch or data access is the direct store segment
handling. Instruction versus data access can be more simply determined
from the rw value passed down from the top.
This changes the code to use rw in preference to checking access_type.
For the 32-bit case there is a small amount of code (for direct store
segments) that still needs the full access type. Instead of passing it
all the way down the stack, we retrieve it from the env structure, which
is where it came from anyway, before this patch.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
On real hardware the ppc hash page table is stored in memory; accordingly
our mmu emulation code can read a hash page table in guest memory. But,
when paravirtualized under PAPR, the real hash page table is in host
memory, accessible to the guest only via hypercalls. We model this by
also allowing the MMU emulation code to access a specially allocated hash
page table outside the guest's memory image. At present these two options
are implemented with some ugly conditionals at each access point in the mmu
emulation code. In the implementation of the PAPR hypercalls, we assume
the external hash table.
This patch cleans things up by adding helpers to load and store from the
hash table for both 32-bit and 64-bit hash mmus. The 64-bit versions
handle both the in-guest-memory and outside guest memory cases. The 32-bit
versions only handle the in-guest-memory case since no 32-bit systems can
have an external hash table at present.
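For the 64-bit case, such a load helper might look roughly like this (a
sketch based on the description above; the exact field and accessor names
may differ from the real code):
  static uint64_t load_hpte0_sketch(CPUPPCState *env, hwaddr pte_offset)
  {
      if (env->external_htab) {
          /* specially allocated table outside the guest's memory image */
          return ldq_p(env->external_htab + pte_offset);
      } else {
          /* hash table lives in guest memory */
          return ldq_phys(env->htab_base + pte_offset);
      }
  }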
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Currently cpu.h contains a number of definitions relating to the 64-bit
hash MMU. Some are used in the MMU emulation code, but some are only used
in the spapr MMU management hcall implementations.
This patch moves these definitions (except for a few that are needed
more widely) into mmu-hash64.h header, shared between the MMU emulation
code and the spapr hcall code. The MMU emulation code is also updated to
actually use a number of those definitions in place of hard coded
constants.
Similarly, we add new analogous definitions to mmu-hash32.h and use those
in place of many hard-coded constants in mmu-hash32.c
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[agraf: fix 32-bit hosts]
Signed-off-by: Alexander Graf <agraf@suse.de>
mmu_ctx_t is currently defined in cpu.h. However it is used for temporary
information relating to mmu translation, and is only used in mmu_helper.c
and (now) mmu-hash{32,64}.c. Furthermore it contains information which
should be specific to particular MMU types. Therefore, move its definition
to mmu_helper.c. mmu-hash{32,64}.c are converted to use new data types
private to the relevant MMUs (identical to mmu_ctx_t for now, but that will
change in future patches).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
The functions for looking up BATs (Block Address Translation - essentially
a level 0 TLB) are shared between the classic 32-bit hash MMUs and the
6xx style software loaded TLB implementations.
This patch splits out a copy for the 32-bit hash MMUs, to facilitate
cleaning it up. The remaining version is left, but cleaned up slightly
to no longer deal with PowerPC 601 peculiarities (601 has a hash MMU).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
The get_pteg_offset() helper function is currently shared between 32-bit
and 64-bit hash mmus, taking a parameter for the hash pte size. In the
64-bit paths, it's only called in one place, and it's a trivial
calculation. This patch, therefore, open codes it for 64-bit. The
remaining version, which is used in two places is made 32-bit only and
moved to mmu-hash32.c.
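The open-coded 64-bit calculation is essentially just the following (sketch;
the exact masking convention depends on how htab_mask is defined, here it is
taken to be a byte-offset mask over the table):
  #define HASH_PTEG_SIZE_64 (16 * 8)   /* 8 PTEs of 16 bytes per group */
  pte_offset = (hash * HASH_PTEG_SIZE_64) & env->htab_mask;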
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
The newly separated paths for hash mmus rely on several helper functions
which are still shared with 32-bit hash mmus: pp_check(), check_prot() and
pte_update_flags(). While these don't have ugly ifdefs on the mmu type,
they're not very well thought out, so sharing them impedes cleaning up the
hash mmu paths. For now, put near-duplicate versions into mmu-hash64.c and
mmu-hash32.c, leaving the old version in mmu_helper.c for 6xx software
loaded tlb implementations. The hash 32 and software loaded
implementations are simplified slightly, using the fact that no 32-bit CPUs
implement the 3rd page protection bit.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
cpu_get_phys_page_debug() is a trivial wrapper around
get_physical_address(). But even the signature of
get_physical_address() has some things we'd like to clean up on a
per-mmu basis, so this patch moves the test on mmu model out to
cpu_get_phys_page_debug(), moving the version for 64-bit hash MMUs out
to mmu-hash64.c and the version for 32-bit hash MMUs to mmu-hash32.c
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
cpu_ppc_handle_mmu_fault() calls get_physical_address() (whose behaviour
depends on MMU type) then, if that fails, issues an appropriate exception
- which again has a number of dependencies on MMU type.
This patch starts converting cpu_ppc_handle_mmu_fault() to have a
single switch on MMU type, calling MMU specific fault handler
functions which deal with both translation and exception delivery
appropriately for the MMU type. We convert 32-bit and 64-bit hash
MMUs to this new model, but the existing code is left in place for
other MMU types for now.
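The shape of the new dispatch is roughly as follows (sketch; the set of
mmu_model values covered and the argument names are illustrative):
  switch (env->mmu_model) {
  #if defined(TARGET_PPC64)
  case POWERPC_MMU_64B:
      /* 64-bit hash MMU */
      return ppc_hash64_handle_mmu_fault(env, address, rw, mmu_idx);
  #endif
  case POWERPC_MMU_32B:
      /* 32-bit hash MMU */
      return ppc_hash32_handle_mmu_fault(env, address, rw, mmu_idx);
  default:
      /* other MMU types keep the existing code path for now */
      break;
  }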
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Depending on the MSR state, for 64-bit hash MMUs, get_physical_address
can either call check_physical (which has further tests for mmu type)
or get_segment64. Similarly for 32-bit hash MMUs we can either call
check_physical or get_bat() and get_segment32().
This patch splits off the whole get_physical_address() path for hash
MMUs into 32-bit and 64-bit versions, handling real mode correctly for
such MMUs without going to check_physical and rechecking the mmu type.
Correspondingly, the hash MMU specific paths in check_physical() are
removed.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Currently get_physical_address() first checks to see if translation is
enabled in the MSR, then in the translation on case switches on the mmu
type. Except that for BookE MMUs, translation is always on, and so it
has to switch in the "translation off" case as well and do the same thing
as the translation on path for those MMUs. Plus, even translation off
doesn't behave exactly the same on the various MMU types so there are
further mmu type checks in the "translation off" path.
As a first step to cleaning this up, this patch moves the switch on mmu
type to the top level, then makes the translation on/off check just for
those mmu types where it is meaningful.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
The poorly named get_segment() function handles most of the address
translation logic for hash-based MMUs. It has many ugly conditionals on
whether the MMU is 32-bit or 64-bit.
This patch splits the function into 32 and 64-bit versions, using the
switch on mmu_type that's already in the caller
(get_physical_address()) to select the right one. Most of the
original function remains in mmu_helper.c to support the 6xx software
loaded TLB implementations (cleaning those up is a project for another
day).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
32-bit and 64-bit hash MMU implementations currently share a find_pte
function. This results in a whole bunch of ugly conditionals in the shared
function, and not all that much actually shared code.
This patch separates out the 32-bit and 64-bit versions, putting them
in mmu-hash64.c and mmu-hash32.c, and removes the conditionals from
both versions.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Currently support for both 32-bit and 64-bit hash MMUs share an
implementation of pte_check. But there are enough differences that this
means the shared function has several very ugly conditionals on "is_64b".
This patch cleans things up by separating out the 64-bit version
(putting it into mmu-hash64.c) and the 32-bit hash version (putting it
in mmu-hash32.c). Another copy remains in mmu_helper.c, which is used
for the 6xx software loaded TLB paths.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
As a first step to disentangling the handling for 64-bit hash MMUs from
the rest, we move the code handling the Segment Lookaside Buffer (SLB)
(which only exists on 64-bit hash MMUs) into a new mmu-hash64.c file.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
One LOG_MMU statement in mmu_helper.c has an odd check on the effective
address being translated. I can see no reason for this; I suspect it was
a debugging hack from long ago. This patch removes it.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
This removes the never-used pte64_invalidate() function, and makes
ppcmas_tlb_check() static, since it's only used within that file.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
The PowerPC 620 was the very first 64-bit PowerPC implementation, but
hardly anyone ever actually used the chips. qemu notionally supports the
620, but since we don't actually have code to implement the segment table,
the support is broken (quite likely in other ways too).
This patch, therefore, removes all remaining pieces of 620 support, to
stop it cluttering up the platforms we actually care about. This includes
removing support for the ASR register, used only on segment table based
machines.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Although the support of this register may be incomplete, there is no
reason to prevent the debugger from reading or writing it.
Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
target-ppc/kvm.c has an #ifdef on CONFIG_PSERIES, for the handling of
KVM exits due to a PAPR hypercall from the guest. However, since commit
e4c8b28cde "ppc: express FDT dependency of
pSeries and e500 boards via default-configs/", this hasn't worked properly.
That patch altered the configuration setup so that although CONFIG_PSERIES
is visible from the Makefiles, it is not visible from C files. This broke
the pseries machine when KVM is in use.
This patch makes a quick and dirty fix, by removing the CONFIG_PSERIES
dependency, replacing it with TARGET_PPC64 (since removing it entirely
leads to type mismatch errors). Technically this breaks the build when
configured with --disable-fdt, since that disables CONFIG_PSERIES on
TARGET_PPC64. However, it turns out the build was already broken in that
case, so this fixes pseries kvm without breaking anything extra. I'm
looking into how to fix that build breakage, but I don't think that needs to
delay applying this patch.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
This removes a global per-target function and thus takes us one step
closer to compiling multiple targets into one executable.
It will also allow to override the interrupt handling for certain CPU
families.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Move it to qom/cpu.h to avoid issues with include order.
Change pc_acpi_smi_interrupt() opaque to X86CPU.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Both fields are used in VMState, thus need to be moved together.
Explicitly zero them on reset since they were located before
breakpoints.
Pass PowerPCCPU to kvmppc_handle_halt().
Signed-off-by: Andreas Färber <afaerber@suse.de>
Move array of CPU aliases to cpu-models.c, alongside model definitions.
This requires zero-terminating the aliases array, since ARRAY_SIZE() can
no longer be used in translate_init.c.
Suggested-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
The QMP query-cpu-definitions implementation iterated over CPU classes
only, which were getting less and less as aliases were extracted.
Keep them in QMP as valid -cpu arguments even if not guaranteed stable.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Revert adding a separate -cpu ? output section for aliases and list them
per CPU subclass.
Requested-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
This avoids assigning individual class fields and contributors
forgetting to add field assignments in KVM-only code.
ppc_cpu_class_find_by_pvr() requires the CPU model classes to be
registered, so defer host CPU type registration to kvm_arch_init().
Only register the host CPU type if there is a class with matching PVR.
This lets us drop error handling from instance_init.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
A victim of the d523dd00a7 AREG0
conversion, insert the missing cpu_env arguments.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Currently qemu does not get and put the state of the floating point and
vector registers to KVM. This is obviously a problem for savevm, as well
as possibly being problematic for debugging of FP-using guests.
This patch fixes this by using new extensions to the ONE_REG interface to
synchronize the qemu floating point state with KVM.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Currently when running under KVM on ppc, we synchronize a certain number of
vital SPRs to KVM through the SET_SREGS call. This leaves out quite a lot
of important SPRs which are maintained in KVM. It would be helpful to
have their contents in qemu for debugging purposes, and when we implement
migration it will be vital, since they include important guest state that
will need to be restored on the target.
This patch sets up for synchronization of any registers supported by the
KVM ONE_REG calls. A new variant on spr_register() allows a ONE_REG id to
be stored with the SPR information. When we set/get information to KVM
we also synchronize any SPRs so registered.
For now we set this mechanism up to synchronize a handful of important
registers that already have ONE_REG IDs, notably the DAR and DSISR.
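For example, registering an SPR together with its ONE_REG id could look
like this (illustrative call; the argument order follows the existing
spr_register() convention and may not match the real variant exactly):
  spr_register_kvm(env, SPR_DAR, "DAR",
                   SPR_NOACCESS, SPR_NOACCESS,
                   &spr_read_generic, &spr_write_generic,
                   KVM_REG_PPC_DAR, 0x00000000);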
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Let it resolve to v2.3 rather than v2.0.
Suggested-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Now that model definitions only reference their parent type, model
definitions are independent of the family definitions and can be
compiled independently of TCG translation.
Keep all #if defined(TODO) code local to cpu-models.c.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
This gets rid of some more overly long comments that have lost most of
their purpose now that in most cases there's only two functions left per
CPU family.
The class field is inherited by the actual CPU models, so override it.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Now POWERPC_DEF_SVR() no longer sets family-specific fields itself.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Don't attempt to suppress registration of CPU types, since the criterion
is actually a property of the class and should thus become a field.
Since we can't check a field set in a class_init function before
registering the type that leads to execution of that function, guard the
-cpu class lookup instead and suppress exposing these classes in -cpu ?
and in QMP.
In case someone tries to hot-add an incompatible CPU via device_add,
error out in realize.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Instead of assigning *_<family> constants, set .parent to a family type.
Introduce a POWERPC_FAMILY() macro to keep type registration close to
its implementation. This macro will need tweaking later.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Turn the array of model definitions into a set of self-registering QOM
types with their own class_init. Unique identifiers are obtained from
the combination of PVR, SVR and family identifiers; this requires all
alias #defines to be removed from the list. Possibly there are some more
left after this commit that are not currently being compiled.
Prepares for introducing abstract intermediate CPU types for families.
Keep the right-aligned macro line breaks within 78 chars to aid
three-way merges.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
We are about to drop the redundant name field along with ppc_def_t.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Drop the #if 0'ed alternative to make it "ppc64" for TARGET_PPC64.
If we ever want to change it, we can more easily do so now.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Move definitions that were 100% identical except for the name into a
list of aliases so that we don't register duplicate CPU types.
Drop the accompanying comments since they don't really add value.
We need to support recursive lookup due to code names referencing a
generic name referencing a specific model revision.
List aliases separately for -cpu ?.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
To repurpose the POWERPC_DEF_SVR() macro outside of an array,
move the comma into the macro. No functional change.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
It is within a large TARGET_PPC64 section from 970 to 620,
so an #endif /* TARGET_PPC64 */ is confusing. Clean this up.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Commit fe828a4d4b added a new fatal error
message while QOM realize'ification was in flight.
Convert it to return an Error instead of exit()ing.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Unlike derived PVR constants mapped to CPU_POWERPC_G2LEgp3, the
"G2leGP3" model definition itself used the CPU_POWERPC_G2LEgp1 PVR.
Fixing this will allow aliasing CPU_POWERPC_G2LEgp3-using types to
"G2leGP3".
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
It was defined to ..._MPC8545E_v21 rather than ..._MPC8547E_v21.
Due to both resolving to CPU_POWERPC_e500v2_v21 this did not show.
Fixing this nonetheless helps with QOM'ifying CPU aliases.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
The gen_icount_start/end functions are now somewhat misnamed since they
are useful for generic "start/end of TB" code, used for more than just
icount. Rename them to gen_tb_start/end.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Introduce ENV_OFFSET macros which can be used in non-target-specific
code that needs to generate TCG instructions which reference CPUState
fields given the cpu_env register that TCG targets set up with a
pointer to the CPUArchState struct.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
While ~T0+T1+CF = T1-T0+CF-1 is true for the low 32-bits,
it does not produce the correct carry-out to bit 33. Do
exactly what the manual says.
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Which means that callers need not copy data into local tmps.
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
In preparation for more efficient setting of these fields.
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
The target-specific ENV_GET_CPU() macros have allowed us to navigate
from CPUArchState to CPUState. The reverse direction was not supported.
Avoid introducing CPU_GET_ENV() macros by initializing an untyped
pointer that is initialized in derived instance_init functions.
The field may not be called "env" due to it being poisoned.
Acked-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Adapt ppc_cpu_realize() signature, hook it up to DeviceClass and set
realized = true in cpu_ppc_init().
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
CPUs are never added to the composition tree, so delete is achieved
simply by removing the last references to them.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Since HWADDR_PRIx is always the same now, use %016 for TARGET_PPC64 and
%08 for common code. This may slightly change the ppc64 debug output.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
In r5949 / 76db3ba44e (target-ppc: memory
load/store rework) variable little_endian was replaced with ctx.le_mode.
Update the debug code.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
The bit that makes a dcbz instruction a dcbzl instruction was declared as
reserved in ppc32 ISAs. However, hardware simply ignores the bit, making
code valid if it simply invokes dcbzl instead of dcbz even on 750 and G4.
Thus, mark the bit as unreserved so that we properly emulate a simple dcbz
in case we're running on non-G5s.
While at it, also refactor the code to check the 970 special case during
runtime. This way we don't need to differentiate between a 970 dcbz and
any other dcbz anymore. We also allow for future improvements to add e500mc
dcbz handling.
Reported-by: Amadeusz Sławiński <amade@asmblr.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
Introduce CPUClass::class_by_name and add a default implementation.
Hook up the alpha and ppc implementations.
Introduce a wrapper function cpu_class_by_name().
Signed-off-by: Andreas Färber <afaerber@suse.de>
This will allow each architecture to define how the VCPU ID is set on
the KVM_CREATE_VCPU ioctl call.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Currently the target-ppc tcg code only supports a single thread. You can
specify more, but they're treated identically to multiple cores. On KVM
we obviously can't support more threads than the hardware; if more are
specified it will cause strange and cryptic errors.
This patch clarifies the situation by giving a simple meaningful error if
more threads are specified than we can support.
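The check itself is simple; a sketch of the idea (kvmppc_smt_threads() is
the existing helper reporting the supported thread count under KVM, and
under TCG only a single thread is usable):
  int max_smt = kvm_enabled() ? kvmppc_smt_threads() : 1;
  if (smp_threads > max_smt) {
      fprintf(stderr, "Cannot support more than %d threads on PPC with %s\n",
              max_smt, kvm_enabled() ? "KVM" : "TCG");
      exit(1);
  }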
Signed-off-by: Mike Qiu <qiudayu@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Even though our -cpu types for e500mc and e5500 are no real CPUs that
actually have version registers, a guest might still want to access
said version register and that has to succeed for a guest to be happy.
So let's expose a zero SVR value on E500_SVR SPR reads.
Signed-off-by: Alexander Graf <agraf@suse.de>
Note that target-alpha accesses this field from TCG, now using a
negative offset. Therefore the field is placed last in CPUState.
Pass PowerPCCPU to [kvm]ppc_fixup_cpu() to facilitate this change.
Move common parts of mips cpu_state_reset() to mips_cpu_reset().
Acked-by: Richard Henderson <rth@twiddle.net> (for alpha)
[AF: Rebased onto ppc CPU subclasses and openpic changes]
Signed-off-by: Andreas Färber <afaerber@suse.de>
Previously we silently exited; with subclasses we got an opcode warning.
Instead, explicitly tell the user what's wrong.
An indication for this is -cpu ? showing "host" with an all-zero PVR.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Since the model list is highly macrofied, keep ppc_def_t for now and
save a pointer to it in PowerPCCPUClass. This results in a flat list of
subclasses including aliases, to be refined later.
Move cpu_ppc_init() to translate_init.c and drop helper.c.
Long-term the idea is to turn translate_init.c into a standalone cpu.c.
Inline cpu_ppc_usable() into type registration.
Split cpu_ppc_register() in two by code movement into the initfn and
by turning the remaining part into a realizefn.
Move qemu_init_vcpu() call into the new realizefn and adapt
create_ppc_opcodes() to return an Error.
Change ppc_find_by_pvr() -> ppc_cpu_class_by_pvr().
Change ppc_find_by_name() -> ppc_cpu_class_by_name().
Turn -cpu host into its own subclass. This requires to move the
kvm_enabled() check in ppc_cpu_class_by_name() to avoid the class being
found via the normal name lookup in the !kvm_enabled() case.
Turn kvmppc_host_cpu_def() into the class_init and add an initfn that
asserts KVM is in fact enabled.
Implement -cpu ? and the QMP equivalent in terms of subclasses.
This newly exposes -cpu host to the user, ordered last for -cpu ?.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
We already used to support the external proxy facility of FSL MPICs,
but only implemented it halfway correctly.
This patch adds support for
* dynamic enablement of the EPR facility
* interrupt acknowledgement only when the interrupt is delivered
This way the implementation now is closer to real hardware.
Signed-off-by: Alexander Graf <agraf@suse.de>
On e500mc, the platform doesn't provide a way for the CPU to go idle.
To avoid burning host CPU time needlessly, expose an idle hypercall to the guest
if kvm supports it.
Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>
[agraf: adjust for current code base, add patch description, fix non-kvm case]
Signed-off-by: Alexander Graf <agraf@suse.de>
Book E does not play games with certain bits of xSRR1 being MSR save
bits and others being error status. xSRR1 is the old MSR, period.
This was causing things like MSR[CE] to be lost, even in the saved
version, as soon as you take an exception.
rfci/rfdi/rfmci are fixed to pass the actual xSRR1 register contents,
rather than the register number.
Put FIXME comments on the hack that is "asrr0/1". The whole point of
separate exception levels is so that you can, for example, take a machine
check or debug interrupt without corrupting critical-level operations.
The right xSRR0/1 set needs to be chosen based on CPU type flags.
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Refactor common code around calls to cpu_restore_state().
tb_find_pc() has now no external users, make it static.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
The hwaddr type is somewhat vaguely defined as being able to contain bus
addresses on the widest possible bus in the system. For that reason it's
discouraged for representing specific pieces of persistent hardware state,
which should instead use an explicit width type that matches the bits
available in real hardware. In particular, because of the possibility that
the size of hwaddr might change if different buses are added to the target
in future, it's not suitable for use in vm state descriptions for savevm
and migration.
This patch purges such unwise uses of hwaddr from the ppc target code,
which turns out to be just one. The ppcemb_tlb_t struct, used on a number
of embedded ppc models to represent a TLB entry contains a hwaddr for the
real address field. This patch changes it to be a fixed uint64_t which is
suitable enough for all machine types which use this structure.
Other uses of hwaddr in CPUPPCState turn out not to be problematic:
htab_base and htab_mask are just used for the convenience of the TCG code;
the underlying machine state is the SDR1 register, which is stored with
a suitable type already. Likewise the mpic_cpu_base field is only used
internally and does not represent fundamental hardware state which needs to
be saved.
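Concretely, the one problematic field is the TLB entry's real page number,
which becomes a fixed-width integer (a sketch of the change; the remaining
fields are elided and unchanged):
  typedef struct ppcemb_tlb_t {
      uint64_t RPN;        /* was hwaddr: 64 bits is enough for any machine
                              using this structure, and the width can no
                              longer change under savevm's feet */
      target_ulong EPN;
      /* ... remaining fields unchanged ... */
  } ppcemb_tlb_t;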
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch fixes bug 1031698 :
https://bugs.launchpad.net/qemu/+bug/1031698
If we look at the (truncated) translation of the conditional branch
instruction in the test submitted in the bug post, the call to the
exception helper is missing in the "bne-false" chunk of translated
code :
IN:
bne- 0x1800278
OUT:
0xb544236d: jne 0xb5442396
0xb5442373: mov %ebp,(%esp)
0xb5442376: mov $0x44,%ebx
0xb544237b: mov %ebx,0x4(%esp)
0xb544237f: mov $0x1800278,%ebx
0xb5442384: mov %ebx,0x25c(%ebp)
0xb544238a: call 0x827475a
^^^^^^^^^^^^^^^^^^
0xb5442396: mov %ebp,(%esp)
0xb5442399: mov $0x44,%ebx
0xb544239e: mov %ebx,0x4(%esp)
0xb54423a2: mov $0x1800270,%ebx
0xb54423a7: mov %ebx,0x25c(%ebp)
Indeed, gen_exception(ctx, excp) called by gen_goto_tb (called by
gen_bcond) changes ctx->exception's value to excp's :
gen_bcond()
{
gen_goto_tb(ctx, 0, ctx->nip + li - 4);
/* ctx->exception value is POWERPC_EXCP_BRANCH */
gen_goto_tb(ctx, 1, ctx->nip);
/* ctx->exception value is now POWERPC_EXCP_TRACE */
}
Making the following gen_goto_tb()'s test false during the second call :
if ((ctx->singlestep_enabled &
(CPU_BRANCH_STEP | CPU_SINGLE_STEP)) &&
ctx->exception == POWERPC_EXCP_BRANCH /* false...*/) {
target_ulong tmp = ctx->nip;
ctx->nip = dest;
/* ... and this is the missing call */
gen_exception(ctx, POWERPC_EXCP_TRACE);
ctx->nip = tmp;
}
So the patch simply adds the missing matching case, fixing our problem.
Signed-off-by: Julio Guerra <guerr@julio.in>
Signed-off-by: Alexander Graf <agraf@suse.de>
Pass around CPUArchState instead of using global cpu_single_env.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Acked-by: Richard Henderson <rth@twiddle.net>
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Acked-by: Guan Xuetao <gxt@mprc.pku.edu.cn>
* 'trivial-patches' of git://github.com/stefanha/qemu:
pc: Drop redundant test for ROM memory region
exec: make some functions static
target-ppc: make some functions static
ppc: add missing static
vnc: add missing static
vl.c: add missing static
target-sparc: make do_unaligned_access static
m68k: Return semihosting errno values correctly
cadence_uart: More debug information
Conflicts:
target-m68k/m68k-semi.c
This patch adds some extra FPU state to CPUPPCState. Specifically,
fpscr is extended to a target_ulong bits, since some recent (64 bit)
CPUs now have more status bits than fit inside 32 bits. Also, we add
the 32 VSR registers present on CPUs with VSX (these extend the
standard FP regs, which together with the Altivec/VMX registers form a
64 x 128bit register file for VSX).
We don't actually support the instructions using these extra registers
in TCG yet, but we still need a place to store the state so we can
sync it with KVM and savevm/loadvm it. This patch updates the savevm
code to not fail on the extended state, but also does not actually
save it - that's a project for another patch.
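In terms of state, the additions amount to roughly the following fields in
CPUPPCState (sketch; exact placement and surrounding fields omitted):
  /* was uint32_t: recent 64-bit CPUs define status bits above bit 31 */
  target_ulong fpscr;
  /* lower halves of VSR0..31; together with the FP and Altivec registers
   * these form the 64 x 128-bit VSX register file */
  uint64_t vsr[32];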
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
We change the storage of the VPA information to explicitly use fixed
size integer types which will make life easier for syncing this data with
KVM, which we will need in future.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[agraf: fix commit message]
Signed-off-by: Alexander Graf <agraf@suse.de>
For target-mips also change the return type to bool.
Make include paths for cpu-qom.h consistent for alpha and unicore32.
Signed-off-by: Andreas Färber <afaerber@suse.de>
[AF: Updated new target-openrisc function accordingly]
Acked-by: Richard Henderson <rth@twiddle.net> (for alpha)
Adapt emulate_spapr_hypercall() accordingly.
Needed for changing spapr_hypercall() argument type to PowerPCCPU.
Signed-off-by: Andreas Färber <afaerber@suse.de>
* 'ppc-for-upstream' of git://repo.or.cz/qemu/agraf: (22 commits)
PPC: pseries: Remove hack for PIO window
PPC: e500: Map PIO space into core memory region
xen_platform: convert PIO to new memory api read/write
vmport: convert PIO to new memory api read/write
serial: convert PIO to new memory api read/write
rtl8139: convert PIO to new memory api read/write
pckbd: convert PIO to new memory api read/write
pc port92: convert PIO to new memory api read/write
mc146818rtc: convert PIO to new memory api read/write
m48t59: convert PIO to new memory api read/write
i8254: convert PIO to new memory api read/write
es1370: convert PIO to new memory api read/write
virtio-pci: convert PIO to new memory api read/write
ac97: convert PIO to new memory api read/write
pseries: Implement qemu initiated shutdowns using EPOW events
target-ppc: Rework storage of VPA registration state
pseries: Don't allow duplicate registration of hcalls or RTAS calls
Add USB option in machine options
e500: Fix serial initialization
PPC: 440: Emulate DCBR0
...
With PAPR guests, hypercalls allow registration of the Virtual Processor
Area (VPA), SLB shadow and dispatch trace log (DTL), each of which allow
for certain communication between the guest and hypervisor. Currently, we
store the addresses of the three areas and the size of the dtl in
CPUPPCState.
The SLB shadow and DTL are variable sized, with the size being retrieved
from within the registered memory area at the hypercall time. This size
can later be overwritten with other information, however, so we need to
save the size as of registration time. We already do this for the DTL,
but not for the SLB shadow, so this patch fixes that.
In addition, we change the storage of the VPA information to use fixed
size integer types which will make life easier for syncing this data with
KVM, which we will need in future.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
The DCBR0 register on 440 is used to implement system reset. The same
register is used on 405 as well, so just reuse the code.
Signed-off-by: Alexander Graf <agraf@suse.de>
For all our PPC targets the physical address space is at least
36 bits, so drop an unnecessary preprocessor conditional check
on TARGET_PHYS_ADDR_SPACE_BITS (erroneously introduced as part
of the change from target_phys_addr_t to hwaddr). This brings
this bit of code into line with the way we handle the other
cases which were originally checking TARGET_PHYS_ADDR_BITS in
order to avoid compiler complaints about overflowing a 32 bit type.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Rename helper flags to the new ones. This is purely a mechanical change,
it's possible to use better flags by looking at the helpers.
Cc: Alexander Graf <agraf@suse.de>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
target_phys_addr_t is unwieldly, violates the C standard (_t suffixes are
reserved) and its purpose doesn't match the name (most target_phys_addr_t
addresses are not target specific). Replace it with a finger-friendly,
standards conformant hwaddr.
Outstanding patchsets can be fixed up with the command
git rebase -i --exec 'find -name "*.[ch]"
| xargs s/target_phys_addr_t/hwaddr/g' origin
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
* 'ppc-for-upstream' of git://repo.or.cz/qemu/agraf: (35 commits)
PPC: KVM: Fix BAT put
PPC: e500: Only expose even TLB sizes in initial TLB
ppc/pseries: Reset VPA registration on CPU reset
pseries: Don't test for MSR_PR for hypercalls under KVM
PPC: e500: calculate initrd_base like dt_base
PPC: e500: increase DTC_LOAD_PAD
device tree: simplify dumpdtb code
fdt: move dumpdtb interpretation code to device_tree.c
target-ppc: Remove unused power_mode field from cpu state
pseries: Set hash table size based on RAM size
pseries: Remove unnecessary locking from PAPR hash table hcalls
ppc405_uc: Fix buffer overflow
target-ppc: KVM: Fix some kernel version edge cases for kvmppc_reset_htab()
pseries: Fix semantics of RTAS int-on, int-off and set-xive functions
pseries: Rework implementation of TCE bypass
pseries: Remove never used flags field from spapr vio devices
pseries: Remove XICS irq type enum type
pseries: Remove C bitfields from xics code
pseries: Small cleanup to H_CEDE implementation
pseries: Fix XICS reset
...
A terminal NUL is required by the caller's use of strchr.
It's better not to use strncpy at all, since there is no need
to zero out hundreds of trailing bytes for each iteration.
Signed-off-by: Jim Meyering <meyering@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
In the sregs API, upper and lower 32bit segments of the BAT registers
are swapped when doing a set. Since we need to support old kernels out
there, don't bother to fix it in the kernel, but instead work around
the problem in QEMU by swapping on put.
Signed-off-by: Alexander Graf <agraf@suse.de>
The hassle and compile time overhead of maintaining both 32-bit and 64-bit
capable source isn't worth the tiny performance advantage which is seen on
a minority of configurations. Switch to compiling libhw only once, with
target_phys_addr_t unconditionally typedefed to uint64_t.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
The ppc specific CPU state contains several variables which track the
VPA, SLB shadow and dispatch trace log. These are structures shared
between OS and hypervisor that are used on the pseries machine to track
various per-CPU quantities.
The address of these structures needs to be registered by the guest on each
boot, however currently this registration is not cleared when we reset the
cpu. This patch corrects this bug.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
CPUPPCState includes a variable 'power_mode' which is used nowhere. This
patch removes it. This includes saving a dummy zero in its place during
vmsave, to avoid breaking the save format.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
The kvmppc_reset_htab() function invokes the KVM_PPC_ALLOCATE_HTAB vm ioctl
to request KVM to allocate and reset a hash page table for the guest - it
returns the size of hash table allocated, or 0 to indicate that qemu needs
to allocate the hash table itself. In practice qemu needs to allocate the
htab for full emulation and with Book3sPR KVM, but the kernel has to
allocate it for Book3sHV KVM (the hash table needs to be physically
contiguous in that case).
Unfortunately, the logic in this function is incorrect for some existing
kernels. Specifically:
* at least some PR KVM versions advertise the relevant capability but
don't actually implement the ioctl(), returning ENOTTY.
* For old kernels which don't have the capability, we currently return 0.
This is correct for PR KVM, where we need to allocate the htab, but not for
HV KVM - kernels of this era always allocate a 16MB hash table per guest.
This patch corrects both of these edge cases.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
This adds support for the new "reset htab" ioctl which allows qemu
to properly cleanup the MMU hash table when the guest is reset. With
the corresponding kernel support, reset of a guest now works properly.
This also paves the way for indicating a different size hash table
to the kernel and for the kernel to be able to impose limits on
the requested size.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
At least when invoked with high enough 'level' arguments,
kvm_arch_put_registers() is supposed to copy essentially all the cpu state
as encoded in qemu's internal structures into the kvm state. Currently
the ppc version does not do this - it never calls KVM_SET_SREGS, for
example, and therefore never sets the SDR1 and various other important
though rarely changed registers.
Instead, the code paths which need to set these registers need to
explicitly make (conditional) kvm calls which transfer the changes to kvm.
This breaks the usual model of handling state updates in qemu, where code
just changes the internal model and has it flushed out to kvm automatically
at some later point.
This patch fixes this for Book S ppc CPUs by adding a suitable call to
KVM_SET_SREGS and also to KVM_SET_ONE_REG to set the HIOR (the only register
that is set with that call so far). This lets us remove the hacks to
explicitly set these registers from the kvmppc_set_papr() function.
The problem still exists for Book E CPUs (which use a different version of
the kvm_sregs structure). But fixing that has some complications of its
own so can be left to another day.
Likewise, there is still some ugly code for setting the PVR through special
calls to SET_SREGS which is left in for now. The PVR needs to be set
especially early because it can affect what other features are available
on the CPU, so I need to do more thinking to see if it can be integrated
into the normal paths or not.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
We can finally get rid of the ugly HANDLE_NAN{1,2,3} macros.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
Use the new softfloat float32_muladd() function to implement the vmaddfp
and vnmsubfp instructions. As a bonus we can get rid of the call to the
HANDLE_NAN3 macro, as the NaN handling is directly done at the softfloat
level.
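The per-element operation then reduces to a single softfloat call; a sketch
of one loop iteration (field names illustrative; vmaddfp computes (a*c)+b):
  /* NaN propagation and rounding are handled inside softfloat */
  r->f[i] = float32_muladd(a->f[i], c->f[i], b->f[i], 0, &env->vec_status);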
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
Use the new softfloat float32_min() and float32_max() to implement the
vminfp and vmaxfp instructions. As a bonus we can get rid of the call to
the HANDLE_NAN2 macro, as the NaN handling is directly done at the
softfloat level.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
Commit e024e881bb provided a pickNaN()
function for PowerPC, implementing the correct NaN propagation rules.
Therefore there is no need to test the operands manually, we can rely
on the softfloat code to do that.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
For all targets that currently call tcg_gen_debug_insn_start,
add CPU_LOG_TB_OP_OPT to the condition that gates it.
This is useful for comparing optimization dumps, when the
pre-optimization dump is merely noise.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Altivec instructions are not working anymore in PowerPC emulation,
following commit d15f74fb, which inverted two registers in the call
to helper. Fix that.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Acked-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
* agraf/ppc-for-upstream: (24 commits)
openpic: Added BRR1 register
pseries: Update SLOF firmware image
pseries dma: DMA window params added to PHB and DT population changed
pseries: Add PCI MSI/MSI-X support
pseries: Add trace event for PCI irqs
pseries: Export find_phb() utility function for PCI code
pseries: added allocator for a block of IRQs
pseries: Separate PCI RTAS setup from common from emulation specific PCI setup
pseries: Rework irq assignment to avoid carrying qemu_irqs around
pseries: Remove extraneous prints
pseries: Update SLOF
PPC: spapr: Remove global variable
PPC: spapr: Rework VGA select logic
xbzrle: fix compilation on ppc32
spapr: Add support for -vga option
Add one new file vga-pci.h and cleanup on all platforms
Revert "PPC: e500: Use new MPIC dt format"
ppc: Fix bug in handling of PAPR hypercall exits
PPC: e500: add generic e500 platform
PPC: e500: split mpc8544ds machine from generic e500 code
...
mingw32 seems to want the declaration to also carry the weak attribute.
Strangely, gcc on Linux absolutely does not want the declaration to be marked
as weak. This may not be the right fix, but it seems to do the trick.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Currently for powerpc, kvm_arch_handle_exit() always returns 1, meaning
that its caller - kvm_cpu_exec() - will always exit immediately afterwards
to the loop in qemu_kvm_cpu_thread_fn().
There's no need to do this. Once we've handled the hypercall there's no
reason we can't go straight around and KVM_RUN again, which is what ret = 0
will signal. The only exception might be for hypercalls which affect the
state of cpu_can_run(), however the only one that might do this is H_CEDE
and for kvm that is always handled in the kernel, not qemu.
Furthermore, setting ret = 0 means that when exit_requested is set from a
hypercall, we will enter KVM_RUN once more with a signal which lets the
kernel do its internal logic to complete the hypercall without
actually executing any more guest code. This is important if our hypercall
also triggered a reset, which previously would re-initialize everything
without completing the hypercall. This caused the kernel to get confused
because it thought the guest was still in the middle of a hypercall when
it has actually been reset.
This patch therefore changes to ret = 0, which is both a bugfix and a small
optimization.
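The change itself is a one-liner in the exit handler; roughly (sketch, with
the hypercall handling elided):
  case KVM_EXIT_PAPR_HCALL:
      /* ... handle the hypercall and fill run->papr_hcall.ret ... */
      ret = 0;    /* was: ret = 1 -- go straight back into KVM_RUN */
      break;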
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
The pseries platform already contains an IOMMU implementation, since it is
essential for the platform's paravirtualized VIO devices. This IOMMU
support is currently built into the implementation of the VIO "bus" and
the various VIO devices.
This patch converts this code to make use of the new common IOMMU
infrastructure.
We don't yet handle synchronization of map/unmap callbacks vs. invalidations;
this will require some complex interaction with the kernel and is not a
major concern at this stage.
Cc: Alex Graf <agraf@suse.de>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
This fixes a compiler error when QEMU was configured with --enable-debug.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
On 64bit capable systems, MAS2 can actually hold a 64bit virtual page
address. So increase the mask for its EPN.
Signed-off-by: Alexander Graf <agraf@suse.de>
The MAS registers on BookE are all 32 bit wide, except for MAS2, which
can hold up to 64 bit on 64 bit capable CPUs. Reflect this in the SPR
setting code, so that the guest can never write invalid values in them.
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch moves the debug #ifdef'ed SPR trace generation into its
own function, so we can call it from multiple places.
Signed-off-by: Alexander Graf <agraf@suse.de>
IVPR can either hold 32 or 64 bit addresses, depending on the CPU type. Let
the CPU initialization function pass in its mask itself, so we can easily
extend it.
Signed-off-by: Alexander Graf <agraf@suse.de>
On the e500 series, accessing SPR_EPR magically turns into an access at
that CPU's IACK register on the MPIC. Implement that logic to get kernels
that make use of that feature work.
Signed-off-by: Alexander Graf <agraf@suse.de>
The BookE variant of MSR_SF is MSR_CM. Implement everything it takes in TCG to
support running 64bit code with MSR_CM set.
Signed-off-by: Alexander Graf <agraf@suse.de>
The number of SPRs available in different PowerPC chips is still increasing. Add
definitions for the MAS7_MAS3 SPR and all currently known bits in EPCR.
Signed-off-by: Alexander Graf <agraf@suse.de>
More recent Power server chips (i.e. based on the 64 bit hash MMU)
support more than just the traditional 4k and 16M page sizes. This
can get quite complicated, because which page sizes are supported,
which combinations are supported within an MMU segment and how these
page sizes are encoded both in the SLB entry and the hash PTE can vary
depending on the CPU model (they are not specified by the
architecture). In addition the firmware or hypervisor may not permit
use of certain page sizes, for various reasons. Whether various page
sizes are supported on KVM, for example, depends on whether the PR or
HV variant of KVM is in use, and on the page size of the memory
backing the guest's RAM.
This patch adds information to the CPUState and cpu defs to describe
the supported page sizes and encodings. Since TCG does not yet
support any extended page sizes, we just set this to NULL in the
static CPU definitions, expanding this to the default 4k and 16M page
sizes when we initialize the cpu state. When using KVM, however, we
instead determine available page sizes using the new
KVM_PPC_GET_SMMU_INFO call. For old kernels without that call, we use
some defaults, with some guesswork which should do the right thing for
existing HV and PR implementations. The fallback might not be correct
for future versions, but that's ok, because they'll have
KVM_PPC_GET_SMMU_INFO.
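As a hedged sketch of the kind of description this adds (names and bounds
below are illustrative, not the actual structures), each CPU model can carry
a table of supported segment base page sizes, each with the page sizes and
hash-PTE encodings usable inside such a segment:

  #include <stdint.h>

  #define MAX_PAGE_SIZES 8   /* illustrative bound */

  /* One page size usable within a segment, plus its hash-PTE encoding. */
  struct page_size_enc {
      uint32_t page_shift;   /* e.g. 12 for 4k, 24 for 16M */
      uint32_t pte_enc;      /* model-specific encoding stored in the PTE */
  };

  /* One supported segment base page size and its SLB encoding. */
  struct seg_page_size {
      uint32_t page_shift;                      /* base page size of the segment */
      uint32_t slb_enc;                         /* encoding used in the SLB entry */
      struct page_size_enc enc[MAX_PAGE_SIZES]; /* sizes usable in this segment */
  };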
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
The size of EPN field in MAS2 depends on page size. This patch adds a
mask to discard invalid bits in the EPN field.
Definition of EPN field from e500v2 RM:
EPN Effective page number: Depending on page size, only the bits
associated with a page boundary are valid. Bits that represent offsets
within a page are ignored and should be cleared.
There is a similar (but more complicated) definition in PowerISA V2.06.
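A minimal sketch of the masking, assuming the MAV 1.0 encoding used by
e500v2 where TSIZE selects a page size of 4^TSIZE KB (the function name is
made up):

  #include <stdint.h>

  static uint64_t mas2_epn_mask(uint32_t tsize)
  {
      uint32_t page_shift = 10 + 2 * tsize;        /* 4^tsize KB pages */
      return ~((UINT64_C(1) << page_shift) - 1);   /* clear in-page offset bits */
  }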
Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Lookup table 'hbrev' is never written to, so add a 'const' qualifier.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Add an explicit CPUPPCState parameter instead of relying on AREG0
and rename op_helper.c (which only contains load and store helpers)
to mem_helper.c. Remove AREG0 swapping in
tlb_fill().
Switch to AREG0 free mode. Use cpu_ld{l,uw}_code in translation
and interrupt handling, cpu_{ld,st}{l,uw}_data in loads and stores.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Move more misc helpers from helper.c to misc_helper.c.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Add an explicit CPUPPCState parameter instead of relying on AREG0.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Move misc helpers from op_helper.c to misc_helper.c.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Add an explicit CPUPPCState parameter instead of relying on AREG0.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Move decrementer and timebase helpers to a dedicated file.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Remove useless wrappers. In some cases 'int' parameters are
changed to uint32_t.
Make internal functions static.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
[agraf: fix kvm compilation]
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Move more MMU helpers from helper.c to mmu_helper.c.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
[update to current helper.c state]
Signed-off-by: Alexander Graf <agraf@suse.de>
When the code is moved together by the next patch, compiler
detects a possible uninitialized variable use. Avoid the warning
by initializing the variables.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Add an explicit CPUPPCState parameter instead of relying on AREG0.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Move MMU, TLB, SLB and BAT ops to mmu_helper.c.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Add an explicit CPUPPCState parameter instead of relying on AREG0.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
[fix unwanted whitespace line in Makefile.target]
Signed-off-by: Alexander Graf <agraf@suse.de>
Move integer and vector ops to int_helper.c.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Add an explicit CPUPPCState parameter instead of relying on AREG0.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Move FPU and SPE helpers from op_helper.c to fpu_helper.c.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Move exception helpers from helper.c to excp_helper.c and
make cpu_dump_rfi() static.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
helper.c will be split by the next patches; fix
style issues before that.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Add an explicit CPUPPCState parameter instead of relying on AREG0.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Move exception helpers from op_helper.c to excp_helper.c.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
op_helper.c will be split by the next patches; fix
style issues before that.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
The file is located in target-ppc/, not hw/.
Signed-off-by: Andreas Färber <andreas.faerber@web.de>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Anthony Liguori <anthony@codemonkey.ws>
Cc: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
In commit 1bba0dc932 cpu_reset()
was renamed to cpu_state_reset(), to allow introducing a new cpu_reset()
that would operate on QOM objects.
All callers have been updated except for one in target-mips, so drop all
implementations except for the one in target-mips and move the
declaration there until MIPSCPU reset can be fully QOM'ified.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Acked-by: Michael Walle <michael@walle.cc> (for lm32)
Acked-by: Max Filippov <jcmvbkbc@gmail.com> (for xtensa)
Acked-by: Edgar E. Iglesias <edgar.iglesias@gmail.com> (for mb + cris)
Acked-by: Alexander Graf <agraf@suse.de> (for ppc)
Acked-by: Blue Swirl <blauwirbel@gmail.com>
Adapt e500 mpc8544ds machine accordingly.
Turn cpu_init() into a static inline function returning CPUPPCState for
backwards compatibility.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Acked-by: Alexander Graf <agraf@suse.de>
When initializing the e500 code, we need to expose its
cache line size for user and system mode, while the mmu
details are only interesting for system emulation.
Split the 2 switch statements apart, allowing us to #ifdef
out the mmu parts for user mode emulation while keeping all
cache information consistent.
Signed-off-by: Alexander Graf <agraf@suse.de>
machine.c is only compiled for softmmu targets, so checks for
!defined(CONFIG_USER_ONLY) are unnecessary and can be dropped.
Signed-off-by: Juan Quintela <quintela@redhat.com>
[AF: Use more verbose commit message suggested by PMM]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Andreas Färber <afaerber@suse.de>
commit f7aa558396 pulled the dcache and icache
line size initialization inside of a '#if !defined(CONFIG_USER_ONLY)' block.
This is not correct because instructions like 'dcbz' need the dcache size
initialized even for user mode.
Signed-off-by: Meador Inge <meadori@codesourcery.com>
Cc: Varun Sethi <Varun.Sethi@freescale.com>
[AF: Simplify #ifdefs by using cache line size 32 for *-user as before]
Suggested-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Move code from cpu_state_reset() into ppc_cpu_reset().
Reorder #include of helper_regs.h to use it in translate_init.c.
Adjust whitespace and add braces.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Move code not dependent on ppc_def_t from cpu_ppc_init() into an initfn.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Embed CPUPPCState as first member of PowerPCCPU.
Distinguish between "powerpc-cpu", "powerpc64-cpu" and
"embedded-powerpc-cpu".
Let CPUClass::reset() call cpu_state_reset() for now.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
On target-ppc, our table of CPU types and features encodes the features as
found on the hardware, regardless of whether these features are actually
usable under TCG or KVM. We already have cases where the information from
the cpu table must be fixed up to account for limitations in the emulation
method we're using. e.g. TCG does not support the DFP and VSX instructions
and KVM needs different numbering of the CPUs in order to tell it the
correct thread to core mappings.
This patch cleans up these hacks to handle emulation limitations by
consolidating them into a pair of functions specifically for the purpose.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[AF: Style and typo fixes, rename new functions and drop ppc_def_t arg]
Signed-off-by: Andreas Färber <afaerber@suse.de>
Commit 41557447d3 also introduced a subtle TLB
flush bug. By applying a mask to the interrupt MSR which cleared the IR/DR
bits at the start of the interrupt handler, the logic towards the end of the
handler to force a TLB flush if either one of these bits were set would never
be triggered.
This patch simply changes the IR/DR bit check in the TLB flush logic to use
the original MSR value (albeit with some interrupt-specific bits cleared) so
that the IR/DR bits are preserved at the point where the check takes place.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Use uintptr_t instead of void * or unsigned long in
several op related functions, env->mem_io_pc and
GETPC() macro.
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
The official spelling is QEMU.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Andreas Färber <afaerber@suse.de>
[blauwirbel@gmail.com: fixed comment style in hw/sun4m.c]
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
When we dump the CPU registers, there's a certain chance they haven't been
synchronized with KVM yet, so we have to manually trigger that.
This aligns the code with x86 and fixes a bug where the register state was
bogus on invalid/unknown kvm exit reasons.
Reported-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
'POWERPC_INSNS2_DEFAULT' was defined incorrectly which was causing the
opcode table creation code to erroneously register 'eieio' and 'mbar'
for the "default" processor:
** ERROR: opcode 1a already assigned in opcode table 16
*** ERROR: unable to insert opcode [1f-16-1a]
*** ERROR initializing PowerPC instruction 0x1f 0x16 0x1a
Signed-off-by: Meador Inge <meadori@codesourcery.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Fix large page support in TCG. The old code would overwrite the large page
table entry with the fake 4 KB one generated here whenever the ref/change bits
were updated, causing it to point to the wrong area of memory.
Signed-off-by: Nathan Whitehorn <nwhitehorn@freebsd.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
[agraf: fix whitespace, braces]
Signed-off-by: Alexander Graf <agraf@suse.de>
The POWER7 emulation is missing the Processor Identification Register,
mandatory in recent POWER CPUs, that is required for SMP on at least
some operating systems (e.g. FreeBSD) to function properly. This patch
copies the existing PIR code from the other CPUs that implement it.
Signed-off-by: Nathan Whitehorn <nwhitehorn@freebsd.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
These instructions for loading and storing byte-swapped 64-bit values have
been introduced in PowerISA 2.06.
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
For the pseries machine, TCE (IOMMU) tables can either be directly
malloc()ed in qemu or, when running on a KVM which supports it, mmap()ed
from a KVM ioctl. The latter option is used when available, because it
allows the H_PUT_TCE hypercall (a frequent bottleneck) to be KVM accelerated.
However, even when KVM is present, TCE acceleration is not always possible.
Only KVM HV supports this ioctl(), not KVM PR, and even then the kernel could
run out of contiguous memory to allocate the new table. In this case we need to
fall back on the malloc()ed table.
When a device is removed, and we need to remove the TCE table, we need to
either munmap() or free() the table as appropriate for how it was
allocated. The code is supposed to do that, but we buggily fail to
initialize the tcet->fd variable in the malloc() case, which is used as a
flag to determine which is the right choice.
This patch fixes the bug, and cleans up error messages relating to this
path while we're at it.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Scripted conversion:
for file in *.[hc] hw/*.[hc] hw/kvm/*.[hc] linux-user/*.[hc] linux-user/m68k/*.[hc] bsd-user/*.[hc] darwin-user/*.[hc] tcg/*/*.[hc] target-*/cpu.h; do
sed -i "s/CPUState/CPUArchState/g" $file
done
All occurrences of CPUArchState are expected to be replaced by QOM CPUState,
once all targets are QOM'ified and common fields have been extracted.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Scripted conversion:
sed -i "s/CPUState/CPUPPCState/g" target-ppc/*.[hc]
sed -i "s/#define CPUPPCState/#define CPUState/" target-ppc/cpu.h
Signed-off-by: Andreas Färber <afaerber@suse.de>
Acked-by: Anthony Liguori <aliguori@us.ibm.com>
Frees the identifier cpu_reset for QOM CPUs (manual rename).
Don't hide the parameter type behind explicit casts, use static
functions with strongly typed argument to indirect.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
On ppc405ep there is a register that allows software to reset the
core, but not the whole system. Implement this reset using a reset
interrupt.
This gets rid of a bunch of #if 0'ed code.
Reported-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Fix this error:
/src/qemu/target-ppc/helper.c: In function 'booke206_tlb_to_page_size':
/src/qemu/target-ppc/helper.c:1296:14: error: variable 'tlbncfg' set but not used [-Werror=unused-but-set-variable]
Tested-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
When running Linux on e500 with powersave-nap enabled, Linux tries to
read out the L1CFG0 register and calculate some things from it. Passing
0 there ends up in a division by 0, resulting in -1 and ultimately in badness.
So let's populate the L1CFG0 register with reasonable defaults. That way
guests aren't completely confused.
Reported-by: Shrijeet Mukherjee <shm@cumulusnetworks.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
The e500mc implements Embedded.Processor Control, so enable it and
thus enable guests to IPI each other. This makes -smp work with -cpu
e500mc.
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch implements the msgsnd instruction. It is part of the
Embedded.Processor Control specification and allows one CPU to
IPI another CPU without going through an interrupt controller.
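A rough sketch of the delivery logic (names and bit positions below are
illustrative, not the actual helper): the message register selects either
broadcast or a single target by processor id, and every matching CPU gets a
doorbell interrupt latched as pending.

  struct vcpu {
      unsigned int pir;               /* processor id */
      unsigned int pending_doorbell;  /* doorbell interrupt latched */
  };

  #define MSG_BROADCAST  (1u << 5)    /* illustrative bit position */
  #define MSG_PIRTAG(m)  ((m) & 0x1f) /* illustrative target field */

  static void msgsnd_sketch(struct vcpu *cpus, int ncpus, unsigned int msg)
  {
      for (int i = 0; i < ncpus; i++) {
          if ((msg & MSG_BROADCAST) || MSG_PIRTAG(msg) == cpus[i].pir) {
              cpus[i].pending_doorbell = 1;
          }
      }
  }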
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch implements the msgclr instruction. It is part of the
Embedded.Processor Control specification and clears pending doorbell
interrupts on the current CPU.
Signed-off-by: Alexander Graf <agraf@suse.de>
We already had all the code available to have doorbell exceptions
be handled properly. It was just disabled.
Enable it, so we can rely on it.
Signed-off-by: Alexander Graf <agraf@suse.de>
We're going to introduce doorbell instructions (called processor
control in the spec) soon. Add some defines for easier patch
readability later.
Signed-off-by: Alexander Graf <agraf@suse.de>
Our EXCP list is getting outdated. By now, 3 new exception vectors have
been introduced. Update the list so we have everything at one place.
Signed-off-by: Alexander Graf <agraf@suse.de>
We can have TLBs that only support a single page size. This is defined
by the absence of the AVAIL flag in TLBnCFG. If this is the case, we
currently write invalid size info into the TLB, but override it on
internal fault.
Let's move the check over to tlbwe, so we don't have the AVAIL check in
the hotter fault path.
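A hedged sketch of the tlbwe-side check (bit and field positions are assumed
for illustration): when the TLB advertises no AVAIL bit, the size written by
the guest is ignored and the fixed size from TLBnCFG is used instead.

  #include <stdint.h>

  #define TLBnCFG_AVAIL          (1u << 14)            /* assumed bit position */
  #define TLBnCFG_MINSIZE(cfg)   (((cfg) >> 20) & 0xf) /* assumed field position */

  static uint32_t effective_tsize(uint32_t tlbncfg, uint32_t guest_tsize)
  {
      if (!(tlbncfg & TLBnCFG_AVAIL)) {
          return TLBnCFG_MINSIZE(tlbncfg);   /* fixed page size TLB */
      }
      return guest_tsize;                    /* variable page size TLB */
  }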
Signed-off-by: Alexander Graf <agraf@suse.de>
Our internal helpers to fetch TLB entries were not able to tell us
that an entry doesn't even exist. Pass an error out if we hit such
a case to not accidentally pass beyond the TLB array.
Signed-off-by: Alexander Graf <agraf@suse.de>
The PowerPC 2.06 BookE ISA defines an opcode called "tlbilx" which is used
to flush TLB entries. It's the recommended way of flushing in virtualized
environments.
So far we got away without implementing it, but Linux for e500mc uses this
instruction, so we better add it :).
Signed-off-by: Alexander Graf <agraf@suse.de>
When setting a TLB entry, we need to check if the TLB we're putting it in
actually supports the given size. According to the 2.06 PowerPC ISA, a
value that's out of range can either be redefined to something implementation
dependent or we can raise an illegal opcode exception. We do the latter.
Signed-off-by: Alexander Graf <agraf@suse.de>
When using MAV 2.0 TLB registers, we have another range of TLB registers
available to read the supported page sizes from.
Add SPR definitions for those and add a helper function that we can use
to receive such a bitmap even when using MAV 1.0.
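A sketch of such a helper, with assumed register semantics: on MAV 2.0 the
bitmap comes straight from the per-TLB TLBnPS register, while on MAV 1.0 an
equivalent bitmap can be synthesized from the MINSIZE/MAXSIZE fields of
TLBnCFG (sizes go in powers of 4 KB there).

  #include <stdint.h>

  static uint32_t booke206_pagesize_bitmap(int mav2, uint32_t tlbnps,
                                           uint32_t min_tsize, uint32_t max_tsize)
  {
      uint32_t bitmap = 0;

      if (mav2) {
          return tlbnps;              /* hardware-provided bitmap */
      }
      for (uint32_t t = min_tsize; t <= max_tsize; t++) {
          bitmap |= 1u << (2 * t);    /* MAV 1.0: 4^t KB page sizes */
      }
      return bitmap;
  }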
Signed-off-by: Alexander Graf <agraf@suse.de>
We might want to call the tlb check function without actually caring about
the real address resolution. Check if we really should write the value
back.
Signed-off-by: Alexander Graf <agraf@suse.de>
The msync instruction as defined today is only valid on 4xx cores, not
on e500 which also supports msync, but treats it the same way as sync.
Rename it to reflect that it's 4xx only.
Signed-off-by: Alexander Graf <agraf@suse.de>
The e500 CPUs don't use 440's msync which falls on the same opcode IDs,
but instead use the real powerpc sync instruction. This is important,
since the invalid mask differs between the two.
Signed-off-by: Alexander Graf <agraf@suse.de>
Our code only knows IVORs up to 37. Add the new ones defined in ISA 2.06
from 38 - 42.
Signed-off-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Unfortunately the HIOR setting code slipped into upstream QEMU
before it was pulled into upstream KVM. And since Murphy is always
right, comments on the patches only emerged on the pull request
leading to changes in the interface.
So here's an update to the HIOR setting. While at it, I also relaxed
it a bit since for HV KVM we can already run fine without it, and 3.2
works just fine with HV KVM when not setting HIOR. We will only
need this when running PAPR in PR KVM.
Since we accidentally changed the ABI and API along the way, we have
to update the underlying kernel headers together with the code that
uses it to not break bisectability.
Signed-off-by: Alexander Graf <agraf@suse.de>
Now that we have 440 TLB emulation, we can also support running the 440EP
CPU target in system emulation mode.
Signed-off-by: Alexander Graf <agraf@suse.de>
Commit c5705a772 ("vmstate, memory: decouple vmstate from memory API") changed
the signature of memory_region_init_ram_ptr() but did not update a caller in
the ppc kvm module. Fix.
Signed-off-by: Avi Kivity <avi@redhat.com>
This core is found on chips such as p4080, p3041, p2040, and p5020.
More needs to be done to make this viable for TCG (such as missing SPRs
and instructions), but this suffices to get KVM running with appropriate
kernel support.
Signed-off-by: Varun Sethi <Varun.Sethi@freescale.com>
[scottwood@freescale.com: tweak some flags]
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
When the guest is reset, we need to halt secondary cpus until the guest kicks them.
This already works for tcg. This patch adds the support for kvm.
Signed-off-by: Liu Yu <yu.liu@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
[agraf: remove in-kernel irqchip code]
When run with a PPC Book3S (server) CPU, 'info tlb' in the qemu monitor
currently reports "dump_mmu: unimplemented". However, during
bringup work, it can be quite handy to have the SLB entries, which are
available in the CPUPPCState. This patch adds an implementation of
info tlb for book3s, which dumps the SLB.
Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Alexander Graf <agraf@suse.de>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Dong Xu Wang <wdongxu@linux.vnet.ibm.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
When using gdb to single step a ppc interrupt routine, the execution
flow passes the rfi instruction without actually returning from the
interrupt.
The patch fixes this by not updating the nip when the debug
exception is raised and a previous POWERPC_EXCP_SYNC was set.
The latter is only the case if code for rfi or a related instruction
was generated.
Signed-off-by: Sebastian Bauer <mail@sebastianbauer.info>
Signed-off-by: Alexander Graf <agraf@suse.de>
The CPU state contains two bitmaps, initialized from the CPU spec
which describes which instructions are implemented on the CPU. A
couple of bits are defined which cover instructions (VSX and DFP)
which are not currently implemented in TCG. So far, these are only
used to handle the case of -cpu host because a KVM guest can use
the instructions when the host CPU supports them.
However, it's a mild layering violation to simply not include those
bits in the CPU descriptions for those CPUs that do support them,
just because we can't handle them in TCG. This patch corrects the
situation, so that the instruction bits _are_ shown correctly in the
cpu spec table, but are masked out from the cpu state in the non-KVM
case.
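A minimal sketch of that masking, with made-up flag names: the static spec
keeps the hardware truth, and the unsupported bits are stripped when the CPU
state is built without KVM.

  #include <stdint.h>

  #define PPC2_VSX_FLAG (UINT64_C(1) << 0)   /* illustrative bit assignment */
  #define PPC2_DFP_FLAG (UINT64_C(1) << 1)   /* illustrative bit assignment */

  static uint64_t tcg_fixup_insn_flags(uint64_t spec_flags, int using_kvm)
  {
      if (!using_kvm) {
          spec_flags &= ~(PPC2_VSX_FLAG | PPC2_DFP_FLAG);  /* not in TCG yet */
      }
      return spec_flags;
  }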
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Sufficiently recent kernels include a KVM call to accelerate use of
PAPR TCE tables (IOMMU), which are used by PAPR virtual IO devices.
This involves qemu mapping the TCE table in from a kernel obtained fd,
which currently we do with PROT_READ only. This is a hangover from
early (never released) versions of this kernel interface which only
permitted read-only mappings and required us to destroy and recreate
the table when we needed to clear it from qemu.
Now, the kernel permits read-write mappings, and we rely on this to
clear the table in spapr_vio_quiesce_one(). However, due to
insufficient testing, I forgot to update the actual mapping of the
table in kvmppc_create_spapr_tce() to add PROT_WRITE to the mmap().
This patch corrects the oversight.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
The -cpu host feature tries to find out the host capabilities based
on device tree information. However, we don't always have that available
because it's an optional property in dt.
So instead of force unsetting values depending on an unreliable source
of information, let's just try to be clever about it and not override
capabilities when we don't know the device tree pieces.
This fixes altivec with -cpu host on YDL PowerStations.
Reported-by: Nishanth Aravamudan <nacc@us.ibm.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
The recent usage of MemoryRegion in kvm_ppc.h breaks builds with
CONFIG_USER_ONLY=y. This patch fixes it.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Currently, when KVM is enabled, the pseries machine checks if the host
CPU supports VMX, VSX and/or DFP instructions and advertises
accordingly in the guest device tree. It does this regardless of what
CPU is selected on the command line. On the other hand, when in TCG
mode, it never advertises any of these facilities, even basic VMX
(Altivec) which is supported in TCG.
Now that we have a -cpu host option for ppc, it is fairly
straightforward to fix both problems. This patch changes the -cpu
host code to override the basic cpu spec derived from the PVR with
information queried from the host about VMX, VSX and DFP capability.
The pseries code then uses the instruction availability advertised in
the cpu state to set the guest device tree correctly for both the KVM
and TCG cases.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
The sole reason we have the ppcemb target is to support MMUs that have
page sizes smaller than the usual 4k. There are very few of these
chips and I don't want to add additional QA and testing burden to everyone
to ensure that code still works when TARGET_PAGE_SIZE is not 4k.
So this patch disables all CPUs except for MMU_BOOKE capable ones from
the ppcemb target.
Signed-off-by: Alexander Graf <agraf@suse.de>
Some 32-bit PPC CPUs can use up to 36 bit of physical address space.
Treat them accordingly in the qemu-system-ppc binary type.
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch adds cpu specs to the table for POWER7 revisions 2.1 and 2.3.
This allows -cpu host to be used on these host cpus.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
For convenience with kvm, x86 allows the user to specify -cpu host on the
qemu command line, which means make the guest cpu the same as the host
cpu. This patch implements the same option for ppc targets.
For now, this just reads the host PVR (Processor Version Register) and
selects one of our existing CPU specs based on it. This means that the
option will not work if the host cpu is not supported by TCG, even if that
wouldn't matter for use under kvm.
In future, we can extend this to override parts of the cpu spec
based on information obtained from the host (via /proc/cpuinfo, the host
device tree, or explicit KVM calls). That will let us handle cases where
the real kvm-virtualized CPU doesn't behave exactly like the TCG-emulated
CPU. With appropriate annotation of the CPU specs we'll also then be able
to use host cpus under kvm even when there isn't a matching full TCG model.
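A minimal sketch of the PVR probe (this only builds and runs on a ppc host,
which is where qemu-with-kvm is expected to run); the value read here is what
gets matched against the existing CPU spec table:

  #include <stdint.h>

  static uint32_t read_host_pvr(void)
  {
      uint32_t pvr;

      asm volatile("mfpvr %0" : "=r"(pvr));   /* Processor Version Register */
      return pvr;
  }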
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
The ppc target contains a ppc_find_by_pvr() function, which looks up a
CPU spec based on a PVR (that is, based on the value in the target cpu's
Processor Version Register). PVR values contain information on both the
cpu model (upper 16 bits, usually) and on the precise revision (low 16
bits, usually).
ppc_find_by_pvr, as well as making exact PVR matches, attempts to find
"close" PVR matches, when we don't have a CPU spec for the exact revision
specified. This sounds like a good idea, except that the current logic
is completely nonsensical.
It seems to assume CPU families are subdivided bit by bit in the PVR in a
way they just aren't. Specifically, it requires a match on all bits of the
specified pvr up to the last non-zero bit. This has the bizarre effect
that when the low bits are simply a sequential revision number (a common
though not universal pattern), then odd specified revisions must be matched
exactly, whereas even specified revisions will also match the next odd
revision, likewise for powers of 4, 8 and so forth.
To correctly do inexact matching we'd need to re-organize the table of CPU
specs to include a mask showing what PVR range the spec is compatible with
(similar to the cputable code in the Linux kernel).
For now, just remove the bogosity by only permitting exact PVR matches.
That at least makes the matching simple and consistent. If we need inexact
matching we can add the necessary per-subfamily masks later.
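A sketch of the resulting lookup (simplified, standalone types): a spec is
selected only when its PVR matches the requested value exactly.

  #include <stddef.h>
  #include <stdint.h>

  struct cpu_spec {
      const char *name;
      uint32_t pvr;
  };

  static const struct cpu_spec *find_by_pvr(const struct cpu_spec *specs,
                                            size_t n, uint32_t pvr)
  {
      for (size_t i = 0; i < n; i++) {
          if (specs[i].pvr == pvr) {
              return &specs[i];       /* exact match only */
          }
      }
      return NULL;                    /* no inexact fallback any more */
  }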
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Sufficiently recent PAPR specifications define properties "ibm,vmx"
and "ibm,dfp" on the CPU node which advertise whether the VMX vector
extensions (or the later VSX version) and/or the Decimal Floating
Point operations from IBM's recent POWER CPUs are available.
Currently we do not put these in the guest device tree and the guest
kernel will consequently assume they are not available. This is good,
because they are not supported under TCG. VMX is similar enough to
Altivec that it might be trivial to support, but VSX and DFP would
both require significant work to support in TCG.
However, when running under kvm on a host which supports these
instructions, there's no reason not to let the guest use them. This
patch, therefore, checks for the relevant support on the host CPU
and, if present, advertises them to the guest as well.
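A sketch of the value selection, under the assumption that "ibm,vmx" uses
0 for none, 1 for VMX/Altivec and 2 for VSX, and "ibm,dfp" is a plain 0/1
flag (PAPR is the authoritative source for the encoding):

  static int ibm_vmx_prop(int have_altivec, int have_vsx)
  {
      if (have_vsx) {
          return 2;               /* VSX implies VMX */
      }
      return have_altivec ? 1 : 0;
  }

  static int ibm_dfp_prop(int have_dfp)
  {
      return have_dfp ? 1 : 0;
  }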
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
Currently the kvmppc_get_clockfreq() function reads the host's clock
frequency from /proc/device-tree, which is useful to pass to the guest
in KVM setups. However, there are some other host properties
advertised in the device tree which can also be relevant to the
guests.
This patch, therefore, replaces kvmppc_get_clockfreq() with a function which
can retrieve any named, single integer property from the host device
tree's CPU node.
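A self-contained sketch of reading such a property (the path in the usage
comment is a hypothetical example; the exact CPU node name varies between
machines). Integer properties under /proc/device-tree are stored big-endian:

  #include <arpa/inet.h>
  #include <stdint.h>
  #include <stdio.h>

  static int read_dt_u32(const char *path, uint32_t *val)
  {
      FILE *f = fopen(path, "rb");
      uint32_t be;

      if (!f) {
          return -1;
      }
      if (fread(&be, sizeof(be), 1, f) != 1) {
          fclose(f);
          return -1;
      }
      fclose(f);
      *val = ntohl(be);   /* device-tree integers are big-endian */
      return 0;
  }

  /* e.g. read_dt_u32("/proc/device-tree/cpus/cpu@0/clock-frequency", &freq) */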
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
SPE instructions are defined in pairs. Currently, the invalid-bits mask is set
for the first instruction, but the second one can have a different mask.
example:
GEN_SPE(efdcmpeq, efdcfs, 0x17, 0x0B, 0x00600000, 0x00180000, PPC_SPE_DOUBLE),
Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
The pseries machine of qemu implements the TCE mechanism used as a
virtual IOMMU for the PAPR defined virtual IO devices. Because the
PAPR spec only defines a small DMA address space, the guest VIO
drivers need to update TCE mappings very frequently - the virtual
network device is particularly bad. This means many slow exits to
qemu to emulate the H_PUT_TCE hypercall.
Sufficiently recent kernels allow this to be mitigated by implementing
H_PUT_TCE in the host kernel. To make use of this, however, qemu
needs to initialize the necessary TCE tables, and map them into itself
so that the VIO device implementations can retrieve the mappings when
they access guest memory (which is treated as a virtual DMA
operation).
This patch adds the necessary calls to use the KVM TCE acceleration.
If the kernel does not support acceleration, or there is some other
error creating the accelerated TCE table, then it will still fall back
to full userspace TCE implementation.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
At present, using the hypervisor aware Book3S-HV KVM will only work
with qemu on POWER7 CPUs. PPC970 CPUs also have hypervisor
capability, but they lack the VRMA feature which makes assigning guest
memory easier.
In order to allow KVM Book3S-HV on PPC970, we need to specially
allocate the first chunk of guest memory (the "Real Mode Area" or
RMA), so that it is physically contiguous.
Sufficiently recent host kernels allow such contiguous RMAs to be
allocated, with a kvm capability advertising whether the feature is
available and/or necessary on this hardware. This patch enables qemu
to use this support, thus allowing kvm acceleration of pseries qemu
machines on PPC970 hardware.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
---
agraf: fix to use memory api
Alex Graf has already made qemu support KVM for the pseries machine
when using the Book3S-PR KVM variant (which runs the guest in
usermode, emulating supervisor operations). This gets us
very close to also working with KVM Book3S-HV (using the hypervisor
capabilities of recent POWER CPUs).
This patch moves us another step towards Book3S-HV support by
correctly handling SMT (multithreaded) POWER CPUs. There are two
parts to this:
* Querying KVM to check SMT capability, and if present, adjusting the
cpu numbers that qemu assigns to cause KVM to assign guest threads
to cores in the right way (this isn't automatic, because the POWER
HV support has a limitation that different threads on a single core
cannot be in different guests at the same time).
* Correctly informing the guest OS of the SMT thread to core mappings
via the device tree.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
While working on the emulation of the freescale p2010 (e500v2) I realized that
there's no implementation of booke's timer features. Currently mpc8544 uses
ppc_emb (ppc_emb_timers_init) which is close but not exactly like booke (for
example booke uses different SPR).
Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
When running with PR KVM, we need to set HIOR directly. Thankfully there
is now a new interface to set registers individually so we can just use that
and poke HIOR into the guest vcpu's HIOR register.
While at it, this also sets SDR1 because -M pseries requires it to run.
With this patch, -M pseries works properly with PR KVM.
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch implements support for the CFAR SPR on POWER7 (Come From
Address Register), which snapshots the PC value at the time of a branch or
an rfid. The latest powerpc-next kernel also catches it and can show it in
xmon or in the signal frames.
This works well enough to let recent kernels boot (which otherwise oops
on the CFAR access). It hasn't been tested enough to be confident that the
CFAR values are actually accurate, but one thing at a time.
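As an illustrative sketch of the semantics (not the generated TCG code): on
a taken branch or rfid, the address of the branching instruction is latched
into the CFAR before the new PC takes effect.

  #include <stdint.h>

  static void branch_with_cfar(uint64_t *nip, uint64_t *cfar, uint64_t target)
  {
      *cfar = *nip;     /* snapshot the "come from" address */
      *nip  = target;
  }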
Signed-off-by: Ben Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
This definition is backward compatible with MAV=1.0 as long as
the guest does not set reserved bits in MAS1/MAS4.
Also, fix the shift in booke206_tlb_to_page_size -- it's the base
that should be able to hold a 4G page size, not the shift count.
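A one-line sketch of the corrected computation: with the MAV 2.0 definition
the page size is 2^TSIZE KB, so the base of the shift has to be a 64-bit
value for a 4G page to fit; widening the shift count alone would not help.

  #include <stdint.h>

  static uint64_t booke206_tsize_to_bytes(unsigned int tsize)
  {
      return UINT64_C(1024) << tsize;   /* 64-bit base, 2^tsize KB */
  }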
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Share the TLB array with KVM. This allows us to set the initial TLB
both on initial boot and reset, is useful for debugging, and could
eventually be used to support migration.
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
When running PR style KVM, we need to tell the kernel that we want
to run in PAPR mode now. This means that we need to pass some more
register information down and enable papr mode. We also need to align
the HTAB to htab_size boundary.
Using this patch, -M pseries works with kvm even on non-hv kvm
implementations, as long as the preceding kernel patches are in.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
v1 -> v2:
- match on CONFIG_PSERIES
v2 -> v3:
- remove HIOR pieces from PAPR patch (ABI breakage)
We have a bunch of helper functions that don't have any stubs for them in case
we don't have CONFIG_KVM enabled. That didn't bite us so far, because gcc can
optimize them out pretty well, but we should really provide them.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
v1 -> v2:
- use uint64_t for clockfreq
We need to find out the host's clock-frequency when running on KVM, so
let's export a respective function.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
v1 -> v2:
- enable 64bit values