To make it easier to handle future processors based on the POWER9 or
something close to it, add a new bit in the MMU model. This was originally
part of a revised version of 86cf1e9 "target/ppc/POWER9: Add ISAv3.00 MMU
definition", but the older version of that patch was already merged, so
this makes the change on top of the original version.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
getrampagesize() returns the largest supported page size and is mainly
used to know whether huge pages are enabled.
However, it is implemented in target-ppc/kvm.c and is not available
for TCG or other architectures.
This renames and moves gethugepagesize() to mmap-alloc.c, where an
fd-based analog of it is already implemented, and renames and moves
getrampagesize() to exec.c, as that seems to be the common place for
helpers like this.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
compat_table contains the list of logical pvr compat modes which a cpu can
operate in. It is a list of struct CompatInfo entries, each containing the
pvr value for a compat mode, the pcr bits which should be set to operate in
that compat mode, the pcr level which must be present in pcr_supported for
a processor to support that compat mode, and the maximum number of threads
possible in that compat mode.
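A rough sketch of what such a table entry could look like (field names and
types here are illustrative, not necessarily the exact QEMU definition):

    #include <stdint.h>

    /* Illustrative only: one entry per logical PVR compat mode. */
    typedef struct CompatInfo {
        uint32_t pvr;        /* logical PVR value for the compat mode */
        uint64_t pcr;        /* PCR bits to set when running in this mode */
        uint64_t pcr_level;  /* level required in the CPU's pcr_supported */
        int max_threads;     /* maximum threads possible in this mode */
    } CompatInfo;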
Add an entry for the POWER9/ISAv3.00 logical pvr which represents a
processor running with support for logical pvr 0x0f000005. A processor
running in this mode should have PCR_COMPAT_3_00 set in the pcr (if
available in pcr_mask) and should have PCR_COMPAT_3_00 in pcr_supported
to indicate that it is capable of running in this compat mode.
Also add PCR_COMPAT_3_00 to the bits which must be set for all previous
compat modes. Since no processor model yet contains this bit in pcr_mask
it will never be set, but this ensures we don't forget to do so in the
future.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This patch extends support for the `dump-guest-memory` command to the
32-bit PowerPC architecture. It relies on the assumption that a 64-bit
guest will not dump a 32-bit core file (and vice versa).
[dwg: I suspect this patch won't cover all cases, in particular a
32-bit machine type on a 64-bit qemu build. However, it does strictly
more than what we had before, so might as well apply as a starting
point]
Signed-off-by: Mike Nawrocki <michael.nawrocki@gtri.gatech.edu>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
mcrxrx: Move to CR from XER Extended
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Add helper_div_compute_ov() in int_helper.c for updating the overflow
flags.
For Divide Word:
SO, OV, and OV32 bits reflect overflow of the 32-bit result
For Divide DoubleWord:
SO, OV, and OV32 bits reflect overflow of the 64-bit result
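A minimal sketch of that flag update, assuming the ISA's XER bit positions
(this is not the actual QEMU helper; the names are made up for illustration):

    #include <stdbool.h>
    #include <stdint.h>

    #define XER_SO   (1u << 31)   /* summary overflow (sticky) */
    #define XER_OV   (1u << 30)   /* overflow */
    #define XER_OV32 (1u << 19)   /* overflow of the low-order 32-bit result */

    /* The only difference between divw and divd is the width of the
     * result whose overflow feeds this function. */
    static uint32_t div_update_flags(uint32_t xer, bool overflow)
    {
        if (overflow) {
            xer |= XER_OV | XER_OV32 | XER_SO;   /* SO accumulates */
        } else {
            xer &= ~(XER_OV | XER_OV32);         /* SO stays as it was */
        }
        return xer;
    }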
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
For Multiply Word:
SO, OV, and OV32 bits reflect overflow of the 32-bit result
For Multiply DoubleWord:
SO, OV, and OV32 bits reflect overflow of the 64-bit result
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* SO and OV reflect overflow of the 64-bit result in 64-bit mode and
overflow of the low-order 32-bit result in 32-bit mode
* OV32 reflects overflow of the low-order 32-bit result independent of the mode
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Add a routine to compute CA32: gen_op_arith_compute_ca32().
For 64-bit mode, use the CA32 computation routine, while for 32-bit mode CA
and CA32 will have the same value.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
POWER ISA 3.0 adds CA32 and OV32 status in 64-bit mode. Add the flags
and corresponding defines.
Moreover, CA32 is updated when CA is updated and OV32 is updated when OV
is updated.
Arithmetic instructions:
* Additions and Subtractions:
addic, addic., subfic, addc, subfc, adde, subfe, addme, subfme,
addze, and subfze always update CA and CA32.
=> CA reflects the carry out of bit 0 in 64-bit mode and out of
bit 32 in 32-bit mode.
=> CA32 reflects the carry out of bit 32 independent of the
mode (a short sketch of this carry computation follows the list).
=> SO and OV reflect overflow of the 64-bit result in 64-bit
mode and overflow of the low-order 32-bit result in 32-bit
mode
=> OV32 reflects overflow of the low-order 32-bit result independent of
the mode
* Multiply Low and Divide:
For mulld, divd, divde, divdu and divdeu: SO, OV, and OV32 bits
reflect overflow of the 64-bit result
For mullw, divw, divwe, divwu and divweu: SO, OV, and OV32 bits
reflect overflow of the 32-bit result
* Negate with OE=1 (nego)
For 64-bit mode if the register RA contains
0x8000_0000_0000_0000, OV and OV32 are set to 1.
For 32-bit mode if the register RA contains 0x8000_0000, OV and
OV32 are set to 1.
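As a standalone sketch of the carry semantics described above (not QEMU's
TCG code; helper names are invented for illustration):

    #include <stdbool.h>
    #include <stdint.h>

    /* CA32: carry out of bit 32, i.e. the carry produced by the low
     * 32-bit addition, independent of the mode. */
    static bool add_carry32(uint64_t a, uint64_t b, bool carry_in)
    {
        uint64_t low = (uint64_t)(uint32_t)a + (uint32_t)b + carry_in;
        return (low >> 32) & 1;
    }

    /* CA in 64-bit mode: carry out of bit 0, i.e. of the full 64-bit sum. */
    static bool add_carry64(uint64_t a, uint64_t b, bool carry_in)
    {
        uint64_t sum = a + b + carry_in;
        return sum < a || (carry_in && sum == a);
    }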
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
SDR_64_HTABORG, which indicates the bits of the SDR1 register to use for
the base of a 64-bit machine's hashed page table (HPT), isn't correct. It
includes the top 46 bits of the register, but in fact the top 4 bits must
be zero (according to the ISA v2.07). No actual implementation has
supported anywhere close to 2^60 bytes of physical address space, so it's
largely irrelevant, but we might as well correct this.
In addition, although we checked for bad size values in SDR1, we never
reported an error if entirely invalid bits were set there. Add this check
to ppc_store_sdr1().
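Something along these lines captures the check being added; the masks here
are illustrative values modelled on the ISA v2.07 layout, not copied from
the QEMU source:

    #include <stdbool.h>
    #include <stdint.h>

    #define SDR1_HTABORG  0x0FFFFFFFFFFC0000ULL  /* HPT base, top 4 bits zero */
    #define SDR1_HTABSIZE 0x000000000000001FULL  /* encoded HPT size */
    #define HTABSIZE_MAX  28                     /* 2^(18+28) byte HPT maximum */

    static bool sdr1_is_valid(uint64_t value)
    {
        uint64_t htabsize = value & SDR1_HTABSIZE;

        /* reject entirely invalid bits as well as an oversized HPT */
        return (value & ~(SDR1_HTABORG | SDR1_HTABSIZE)) == 0 &&
               htabsize <= HTABSIZE_MAX;
    }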
Reported-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The function ppc_hash64_set_sdr1 basically checked the htabsize and set an
error if it was too big, otherwise it just stored the value in SPR_SDR1.
Given that the only function which calls ppc_hash64_set_sdr1() is
ppc_store_sdr1(), handle the checking in ppc_store_sdr1() instead and avoid
the extra function call. Note that ppc_store_sdr1() already stores the
value in SPR_SDR1 anyway, so we were doing it twice.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
[dwg: Remove unnecessary error temporary]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The pseries machine type implements the behaviour of a PAPR compliant
hypervisor, without actually executing such a hypervisor on the virtual
CPU. To do this we need some hooks in the CPU code to make hypervisor
facilities get redirected to the machine instead of emulated internally.
For hypercalls this is managed through the cpu->vhyp field, which points
to a QOM interface with a method implementing the hypercall.
For the hashed page table (HPT) - also a hypervisor resource - we use an
older hack. CPUPPCState has an 'external_htab' field which when non-NULL
indicates that the HPT is stored in qemu memory, rather than within the
guest's address space.
For consistency - and to make some future extensions easier - this merges
the external HPT mechanism into the vhyp mechanism. Methods are added
to vhyp for the basic operations the core hash MMU code needs: map_hptes()
and unmap_hptes() for reading the HPT, store_hpte() for updating it and
hpt_mask() to retrieve its size.
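In outline, those hooks look something like the following sketch (the struct
name and member signatures are illustrative and simplified; the real methods
also take the virtual hypervisor object itself):

    #include <stdint.h>

    typedef uint64_t hwaddr;     /* HPTE index type used by the callers */

    /* Operations the core hash MMU code needs from the machine when the
     * HPT is not stored in guest memory. */
    struct vhyp_hpt_ops {
        const uint64_t *(*map_hptes)(hwaddr ptex, int n);              /* read */
        void (*unmap_hptes)(const uint64_t *hptes, hwaddr ptex, int n);
        void (*store_hpte)(hwaddr ptex, uint64_t pte0, uint64_t pte1); /* update */
        hwaddr (*hpt_mask)(void);                                      /* size */
    };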
To match this, the pseries machine now sets these vhyp fields in its
existing vhyp class, rather than reaching into the cpu object to set the
external_htab field.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
CPUPPCState includes fields htab_base and htab_mask which store the base
address (GPA) and size (as a mask) of the guest's hashed page table (HPT).
These are set when the SDR1 register is updated.
Keeping these in sync with the SDR1 is actually a little bit fiddly, and
probably not useful for performance, since keeping them expands the size of
CPUPPCState. It also makes some upcoming changes harder to implement.
This patch removes these fields, in favour of calculating them directly
from the SDR1 contents when necessary.
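A simplified sketch of that derivation, using the ISA field layout and a
plain byte size (QEMU's actual helpers differ in detail):

    #include <stdint.h>

    static uint64_t hpt_base(uint64_t sdr1)
    {
        return sdr1 & 0x0FFFFFFFFFFC0000ULL;    /* HTABORG field */
    }

    static uint64_t hpt_size_bytes(uint64_t sdr1)
    {
        unsigned htabsize = sdr1 & 0x1F;        /* HTABSIZE field */
        return 1ULL << (18 + htabsize);         /* minimum HPT is 256KiB */
    }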
This does make a change to the behaviour of attempting to write a bad value
(invalid HPT size) to the SDR1 with an mtspr instruction. Previously, the
bad value would be stored in SDR1 and could be retrieved with a later
mfspr, but the HPT size as used by the softmmu would be clamped to the
allowed values. Now, writing a bad value is treated as a no-op. An error
message is printed in both the new and old versions.
I'm not sure which behaviour, if either, matches real hardware. I don't
think it matters that much, since it's pretty clear that if an OS writes
a bad value to SDR1, it's not going to boot.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Accesses to the hashed page table (HPT) are complicated by the fact that
the HPT could be in one of three places:
1) Within guest memory - when we're emulating a full guest CPU at the
hardware level (e.g. powernv, mac99, g3beige)
2) Within qemu, but outside guest memory - when we're emulating user and
supervisor instructions within TCG, but instead of emulating
the CPU's hypervisor mode, we just emulate a hypervisor's behaviour
(pseries in TCG or KVM-PR)
3) Within the host kernel - a pseries machine using KVM-HV
acceleration. Mostly accesses to the HPT are handled by KVM,
but there are a few cases where qemu needs to access it via a
special fd for the purpose.
In order to batch accesses to the fd in case (3), we use a somewhat awkward
ppc_hash64_start_access() / ppc_hash64_stop_access() pair, which for case
(3) reads / releases several HPTEs from the kernel as a batch (usually a
whole PTEG). For cases (1) & (2) it just returns an address value. The
actual HPTE load helpers then need to interpret the returned token
differently in the 3 cases.
This patch keeps the same basic structure, but simplifies the details.
First, start_access() / stop_access() are renamed to map_hptes() and
unmap_hptes() to make their operation more obvious. Second, map_hptes()
now always returns a qemu pointer, which can always be used in the same way
by the load_hpte() helpers. In case (1) it comes from address_space_map(),
in case (2) directly from qemu's HPT buffer, and in case (3) from a
temporary buffer read from the KVM fd.
While we're at it, make things a bit more consistent in terms of types and
variable names: avoid variables named 'index' (it shadows index(3) which
can lead to confusing results), use 'hwaddr ptex' for HPTE indices and
uint64_t for each of the HPTE words, use ptex throughout the call stack
instead of pte_offset in some places (we still need that at the bottom
layer, but nowhere else).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
At present the SDR1 register - the base of the system's hashed page table
(HPT) - is represented as an SPR with supervisor read and write permission.
However, on CPUs which have a hypervisor mode, the SDR1 is a hypervisor
only resource. Change the permission checking on the SPR to reflect this.
Now that this is done, we don't need to check for an external HPT executing
mtsdr1: an external HPT only applies when we're emulating the behaviour of
a hypervisor, rather than modelling the CPU's hypervisor mode internally,
so if we're permitted to execute mtsdr1, we don't have an external HPT.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
cpu_ppc_set_papr() sets up various aspects of CPU state for use with PAPR
paravirtualized guests. However, it doesn't set the virtual hypervisor,
so callers must also call cpu_ppc_set_vhyp() so that PAPR hypercalls are
handled properly. This is a bit silly, so fold setting the virtual
hypervisor into cpu_ppc_set_papr().
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
When a 'pseries' guest is running with KVM-HV, the guest's hashed page
table (HPT) is stored within the host kernel, so it is not directly
accessible to qemu. Most of the time, qemu doesn't need to access it:
we're using the hardware MMU, and KVM itself implements the guest
hypercalls for manipulating the HPT.
However, qemu does need access to the in-KVM HPT to implement
get_phys_page_debug() for the benefit of the gdbstub, and maybe for
other debug operations.
To allow this, 7c43bca "target-ppc: Fix page table lookup with kvm
enabled" added kvmppc_hash64_read_pteg() to target/ppc/kvm.c to read
in a batch of HPTEs from the KVM table. Unfortunately, there are a
couple of problems with this:
First, the name of the function implies it always reads a whole PTEG
from the HPT, but in fact in some cases it's used to grab individual
HPTEs (which ends up pulling 8 HPTEs, not aligned to a PTEG from the
kernel).
Second, and more importantly, the code to read the HPTEs from KVM is
simply wrong, in general. The data from the fd that KVM provides is
designed mostly for compact migration rather than this sort of one-off
access, and so needs some decoding for this purpose. The current code
will work in some cases, but if there are invalid HPTEs then it will
not get sane results.
This patch rewrites the HPTE reading function to have a simpler
interface (just read n HPTEs into a caller-provided buffer), and to
correctly decode the stream from the kernel.
For consistency we also clean up the similar function for altering
HPTEs within KVM (introduced in c138593 "target-ppc: Update
ppc_hash64_store_hpte to support updating in-kernel htab").
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Removes duplicate code and will be useful for consolidating flags
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-2.9-20170222' into staging
ppc patch queue for 2017-02-22
This pull request has:
* Yet more POWER9 instruction implementations
* Some extensions to the softfloat code which are necessary for
some of those instructions
* Some preliminary patches in preparation for POWER9 softmmu
implementation
* Igor Mammedov's cleanups to unify hotplug cpu handling across
architectures
* Assorted bugfixes
The softfloat and cpu hotplug changes aren't entirely ppc specific (in
fact the hotplug stuff contains some pc specific patches). However,
they're included here because ppc is one of the main beneficiaries,
and the series depends on some ppc specific patches.
# gpg: Signature made Wed 22 Feb 2017 06:29:47 GMT
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.9-20170222: (43 commits)
hw/ppc/ppc405_uc.c: Avoid integer overflows
hw/ppc/spapr: Check for valid page size when hot plugging memory
target-ppc: fix Book-E TLB matching
hw/net/spapr_llan: 6 byte mac address device tree entry
machine: replace query_hotpluggable_cpus() callback with has_hotpluggable_cpus flag
machine: unify [pc_|spapr_]query_hotpluggable_cpus() callbacks
spapr: reuse machine->possible_cpus instead of cores[]
change CPUArchId.cpu type to Object*
pc: pass apic_id to pc_find_cpu_slot() directly so lookup could be done without CPU object
pc: calculate topology only once when possible_cpus is initialised
pc: move pcms->possible_cpus init out of pc_cpus_init()
machine: move possible_cpus to MachineState
hw/pci-host/prep: Do not use hw_error() in realize function
target/ppc/POWER9: Direct all instr and data storage interrupts to the hypv
target/ppc/POWER9: Adapt LPCR handling for POWER9
target/ppc/POWER9: Add ISAv3.00 MMU definition
target/ppc: Fix LPCR DPFD mask define
target-ppc: Add xscvqpudz and xscvqpuwz instructions
target-ppc: Implement round to odd variants of quad FP instructions
softfloat: Add float128_to_uint32_round_to_zero()
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
On POWER, the valid page sizes that the guest can use are bound
to the CPU and not to the memory region. QEMU already has some
fancy logic to find out the right maximum page size to tell
the guest during boot (see getrampagesize() in the file
target/ppc/kvm.c for more information).
However, once we're booted and the guest is already using huge pages,
it is currently still possible to hot-plug memory regions
that do not support huge pages - which of course does not work
on POWER, since the guest thinks that it is possible to use huge
pages everywhere. The KVM_RUN ioctl will then abort with -EFAULT,
QEMU prints a not very helpful error message together with
a register dump, and the user is annoyed that the VM unexpectedly
died.
To avoid this situation, we should check the page size of hot-plugged
DIMMs to see whether it is possible to use it in the current VM.
If it does not fit, we can print out a better error message and
refuse to add it, so that the VM does not die unexpectedly and the
user has a second chance to plug a DIMM with a matching memory
backend instead.
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1419466
Signed-off-by: Thomas Huth <thuth@redhat.com>
[dwg: Fix a build error on 32-bit builds with KVM]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The Book-E TLB matching process should bail out early when a TLB
entry matches, but the access permissions are wrong. The CPU
will then raise a DSI error instead of a Data TLB error, as
described for TLB matching in Freescale and IBM documents.
Signed-off-by: Alex Zuepke <azu@sysgo.de>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The vpm0 bit was removed from the LPCR in POWER9; this bit controlled
whether ISI and DSI interrupts were directed to the hypervisor or the
partition. These interrupts now go to the hypervisor regardless, thus
it is no longer necessary to check the vpm0 bit in the LPCR.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The logical partitioning control register (LPCR) controls a thread's
operation based on the partition it is currently executing in. Add new
definitions and update the mask used when writing to the LPCR based on the
POWER9 spec.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
POWER9 processors implement the mmu as defined in version 3.00 of the ISA.
Add a definition for this mmu model and set the POWER9 cpu model to use
this mmu model.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The DPFD field in the LPCR is 3 bits wide. It has always been defined
as 0x3 << shift, which indicates a 2-bit field, which is incorrect.
Correct this.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xscvqpudz: VSX Scalar truncate & Convert Quad-Precision format to
Unsigned Doubleword format
xscvqpuwz: VSX Scalar truncate & Convert Quad-Precision format to
Unsigned Word format
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xsaddqpo: VSX Scalar Add Quad-Precision using round to Odd
xsmulqpo: VSX Scalar Multiply Quad-Precision using round to Odd
xsdivqpo: VSX Scalar Divide Quad-Precision using round to Odd
xscvqpdpo: VSX Scalar round & Convert Quad-Precision format to
Double-Precision format using round to Odd
xssqrtqpo: VSX Scalar Square Root Quad-Precision using round to Odd
xssubqpo: VSX Scalar Subtract Quad-Precision using round to Odd
In addition, fix the invalid bitmask in the instruction encoding
of xssqrtqp[o].
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
CC: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Use the available wait instruction implementation.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
slbsync: SLB Synchronize
The instruction provides an ordering function for the effects of all
slbieg instructions executed by the thread executing the slbsync
instruction.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
slbieg: SLB Invalidate Entry Global
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
stwat: Store Word Atomic
stdat: Store Doubleword Atomic
The instruction includes a function code (5 bits) which gives detail
on the operation to be performed. The patch implements five such
functions.
Signed-off-by: Balamuruhan S <bala24@linux.vnet.ibm.com>
Signed-off-by: Harish S <harisrir@linux.vnet.ibm.com>
Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
[ implement stdat, use macro and combine both implementations ]
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
lwat: Load Word Atomic
ldat: Load Doubleword Atomic
The instruction includes a function code (5 bits) which gives detail
on the operation to be performed. The patch implements five such
functions.
Signed-off-by: Balamuruhan S <bala24@linux.vnet.ibm.com>
Signed-off-by: Harish S <harisrir@linux.vnet.ibm.com>
Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
[ combine both lwat/ldat implementations using a macro ]
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
When running certain HMP commands ("info registers", "info cpustats",
"info tlb", "nmi", "memsave" or dumping virtual memory) with the "none"
machine, QEMU crashes with a segmentation fault. This happens because the
"none" machine does not have any CPUs by default, but these HMP commands
did not check for a valid CPU pointer yet. Add such checks now, so we get
an error message about the missing CPU instead.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1484309555-1935-1-git-send-email-thuth@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
When running with KVM on POWER, we are registering a "family" CPU
type for the host CPU that we are running on. For example, on all
POWER8-compatible hosts, we register a "POWER8" CPU type, so that
you can always start QEMU with "-cpu POWER8" there, without the
need to know whether you are running on a POWER8, POWER8E or POWER8NVL
host machine.
However, we also have a "POWER8" CPU alias in the ppc_cpu_aliases list
(that is mainly useful for TCG). This leads to two cosmetic drawbacks:
If the user runs QEMU with "-cpu ?", we always claim that POWER8 is an
"alias for POWER8_v2.0" - which is simply not true when running with
KVM on POWER. And when using the 'query-cpu-definitions' QMP call,
there are currently two entries for "POWER8", one for the alias, and
one for the additionally registered type.
To solve the two problems, we should rather update the "family" alias
instead of registering a new type. We then only have one "POWER8"
CPU definition around, an alias, which also points to the right
destination.
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1396536
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We are calculating the authority mask register key value wrongly.
The pte entry contains the key value with the two upper bits and the three
lower bits stored separately. We should use these two portions to get a 5-bit
value, not OR them together, which will only give us a 3-bit value.
Fix this.
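A tiny sketch of the intended combination (the shift amounts are the obvious
ones implied by the description, not verified HPTE field offsets):

    #include <stdint.h>

    /* Combine the separately-stored portions into the 5-bit storage key. */
    static unsigned amr_key(unsigned key_upper2, unsigned key_lower3)
    {
        return ((key_upper2 & 0x3) << 3) | (key_lower3 & 0x7);
    }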
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We were printing an unsigned value as a signed value, fix this.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The cp_abort instruction is used to remove the state of an in-progress
copy-paste sequence. POWER9 compilers add this in various places, such
as context switches, which causes illegal instruction signals since we
don't yet implement this instruction.
Given there is no implementation of the copy-paste facility and that we
don't claim to support it, we can just make this instruction a no-op.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
It can be useful when debugging to print the LPCR value.
Thus we add the LPCR to the "info registers" output if the register has
been defined.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xststdcsp: VSX Scalar Test Data Class Single-Precision
xststdcdp: VSX Scalar Test Data Class Double-Precision
xststdcqp: VSX Scalar Test Data Class Quad-Precision
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xvtstdcsp: VSX Vector Test Data Class Single-Precision
xvtstdcdp: VSX Vector Test Data Class Double-Precision
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
There is no CPU model called "7447_v1.2" in our list, so the
"7447" alias should point to "7447_v1.1" instead. Let's also
remove the "codename" aliases that point to non-implemented
CPU models - they are really of no use here.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We do not support POWER1 CPUs in QEMU, so it does not make sense
to keep this stub around.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This is a port to ppc of the i386 commit:
00f4d64 kvmclock: clock should count only if vm is running
We remove the timebase_post_load function, and use the VM state
change handler to save and restore the guest_timebase (on stop
and continue).
We keep timebase_pre_save to reduce the clock difference on
migration like in:
6053a86 kvmclock: reduce kvmclock difference on migration
Time base offset has originally been introduced by commit
98a8b52 spapr: Add support for time base offset migration
So while the VM is paused, the time is stopped. This allows us to get
the same result with date (based on the Time Base Register) and
hwclock (based on the "get-time-of-day" RTAS call).
Moreover, in TCG mode the Time Base is always paused, so this
patch also adjusts the behaviour between TCG and KVM.
The VM state field "time_of_the_day_ns" is now useless, but we keep
it to be able to migrate to older versions of the machine.
As vmstate_ppc_timebase structure (with timebase_pre_save() and
timebase_post_load() functions) was only used by vmstate_spapr,
we register the VM state change handler only in ppc_spapr_init().
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
pcr_supported is used to define the supported PCR values for a given
processor. A POWER9 processor can support 3.00, 2.07, 2.06 and 2.05
compatibility modes, thus we set this accordingly.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This logical PVR value now corresponds to ISA version 3.00 so rename it
accordingly.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xvcvhpsp: VSX Vector Convert Half Precision to Single Precision
xvcvsphp: VSX Vector Convert Single Precision to Half Precision
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xscvsdqp: VSX Scalar Convert Signed Doubleword format to
Quad-Precision format
xscvudqp: VSX Scalar Convert Unsigned Doubleword format to
Quad-Precision format
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xscmpoqp, xscmpuqp & xscmpexpqp were added before the f128 field was
introduced in ppc_vsr_t. Now that we have it, use it instead of
generating the 128-bit float from two 64-bit fields.
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
bcdutrunc.: Decimal unsigned truncate. Works like bcdtrunc. but with
unsigned BCD numbers.
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
bcdtrunc.: Decimal integer truncate. Given a BCD number in vrb and the
number of bytes to truncate in vra, the return register will have vrb
with such bits truncated.
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xscvqpsdz: VSX Scalar truncate & Convert Quad-Precision format to
Signed Doubleword format
xscvqpswz: VSX Scalar truncate & Convert Quad-Precision format to
Signed Word format
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
bcdsr.: Decimal shift and round. This instruction works like bcds.;
however, when performing a right shift, 1 will be added to the
result if the last digit was >= 5.
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
bcdus.: Decimal unsigned shift. This instruction works like bcds. but
considers only unsigned BCDs (no sign in the least significant 4 bits).
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
bcds.: Decimal shift. Given two registers vra and vrb, this instruction
shifts the vrb value by vra bits into the result register.
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This commit fixes a warning in the code "(i * 2) ? .. : ..", which
is better written as "i ? .. : ..", and improves the BCD_DIG_BYTE
macro by placing parentheses around its argument to avoid possible
expansion issues like BCD_DIG_BYTE(i + j).
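To illustrate the macro-hygiene point (the macro body below is a stand-in,
not the real BCD_DIG_BYTE definition):

    /* Without parentheses, the argument expands with the wrong precedence:
     *   BAD(i + j)  ->  15 - i + j / 2
     *   GOOD(i + j) ->  15 - (i + j) / 2
     */
    #define BAD(n)  (15 - n / 2)
    #define GOOD(n) (15 - (n) / 2)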
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xscvqpdp: VSX Scalar round & Convert Quad-Precision format to
Double-Precision format
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xscvdpqp: VSX Scalar Convert Double-Precision format to
Quad-Precision format
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Once a compatibility mode is negotiated with the guest,
h_client_architecture_support() uses run_on_cpu() to update each CPU to
the new mode. We're going to want this logic somewhere else shortly,
so make a helper function to do this global update.
We put it in target-ppc/compat.c - it makes as much sense at the CPU level
as it does at the machine level. We also move the cpu_synchronize_state()
into ppc_set_compat(), since it doesn't really make any sense to call that
without synchronizing state.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Use correct FP precision when setting FPRF in FP conversion helpers
instead of always assuming float64 precision.
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xscvdphp: VSX Scalar round & Convert Double-Precision format to
Half-Precision format
xscvhpdp: VSX Scalar Convert Half-Precision format to
Double-Precision format
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Since helper_compute_fprf() works on float64 argument, rename it
to helper_compute_fprf_float64(). Also use a macro to generate
helper_compute_fprf_float64() so that float128 version of the same
helper can be introduced easily later.
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Replace isden() by float64_is_zero_or_denormal() so that code in
helper_compute_fprf() can be reused to work with float128 argument.
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Use a float64 argument instead of uint64_t in helper_compute_fprf().
This allows code in helper_compute_fprf() to be reused later to
work with a float128 argument too.
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xxinsertw: VSX Vector Insert Word
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xxextractuw: VSX Vector Extract Unsigned Word
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Current ppc_set_compat() will attempt to set any compatibility mode
specified, regardless of whether it's available on the CPU. The caller is
expected to make sure it is setting a possible mode, which is awkward
because most of the information needed to make that decision is at the CPU
level.
This begins to clean this up by introducing a ppc_check_compat() function
which will determine if a given compatibility mode is supported on a CPU
(and also whether it lies within specified minimum and maximum compat
levels, which will be useful later). It also contains an assertion that
the CPU has a "virtual hypervisor"[1], that is, that the guest isn't
permitted to execute hypervisor privilege code. Without that, the guest
would own the PCR and so could override any mode set here. Only machine
types which use a virtual hypervisor (i.e. 'pseries') should use
ppc_check_compat().
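The decision it makes can be sketched like this, under the assumption that
the compat table is ordered from oldest to newest mode (this is a
simplification, not the literal QEMU code):

    #include <stdbool.h>
    #include <stdint.h>

    struct compat_entry { uint32_t pvr; uint64_t pcr_level; };

    static int compat_index(const struct compat_entry *tbl, int n, uint32_t pvr)
    {
        for (int i = 0; i < n; i++) {
            if (tbl[i].pvr == pvr) {
                return i;
            }
        }
        return -1;   /* unknown logical PVR (e.g. 0 meaning "no bound") */
    }

    static bool check_compat(const struct compat_entry *tbl, int n,
                             uint32_t compat_pvr, uint32_t min_pvr,
                             uint32_t max_pvr, uint64_t pcr_supported)
    {
        int want = compat_index(tbl, n, compat_pvr);
        int lo = compat_index(tbl, n, min_pvr);
        int hi = compat_index(tbl, n, max_pvr);

        if (want < 0 || (lo >= 0 && want < lo) || (hi >= 0 && want > hi)) {
            return false;
        }
        return (pcr_supported & tbl[want].pcr_level) != 0;
    }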
ppc_set_compat() is modified to validate the compatibility mode it is given
and fail if it's not available on this CPU.
[1] Or user-only mode, which also obviously doesn't allow access to the
hypervisor privileged PCR. We don't use that now, but could in future.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
To continue consolidation of compatibility mode information, this rewrites
the ppc_get_compat_smt_threads() function using the table of compatibility
modes in target-ppc/compat.c.
It's not a direct replacement; the new ppc_compat_max_threads() function
has simpler semantics - it just returns the number of threads the cpu
model has, taking into account any compatibility mode it is in.
This no longer takes into account kvmppc_smt_threads() as the previous
version did. That check wasn't useful because we check in
ppc_cpu_realizefn() that CPUs aren't instantiated with more threads
than kvm allows (and if we didn't, things would already be broken and
this wouldn't make it any worse).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
This rewrites the ppc_set_compat() function so that instead of open coding
the various compatibility modes, it reads the relevant data from a table.
This is a first step in consolidating the information on compatibility
modes scattered across the code into a single place.
It also makes one change to the logic. The old code masked the bits
to be set in the PCR (Processor Compatibility Register) by which bits
are valid on the host CPU. This made no sense, since it was done
regardless of whether our guest CPU was the same as the host CPU or
not. Furthermore, the actual PCR bits are only relevant for TCG[1] -
KVM instead uses the compatibility mode we tell it in
kvmppc_set_compat(). When using TCG, host cpu information usually
isn't even present.
While we're at it, we put the new implementation in a new file to make the
enormous translate_init.c a little smaller.
[1] Actually it doesn't even do anything in TCG, but it will if / when we
get to implementing compatibility mode logic at that level.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
stxvll: Store VSX Vector Left-justified with Length
Vector (8-bit elements) in BE/LE:
+---+---+---+---+---+---+---+---+---+---+---+---+---+---+--+--+
|“T”|“h”|“i”|“s”|“ ”|“i”|“s”|“ ”|“a”|“ ”|“T”|“E”|“S”|“T”|00|00|
+---+---+---+---+---+---+---+---+---+---+---+---+---+---+--+--+
Storing 14 bytes would result in the following Little/Big-endian storage:
+---+---+---+---+---+---+---+---+---+---+---+---+---+---+--+--+
|“T”|“h”|“i”|“s”|“ ”|“i”|“s”|“ ”|“a”|“ ”|“T”|“E”|“S”|“T”|FF|FF|
+---+---+---+---+---+---+---+---+---+---+---+---+---+---+--+--+
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
A function to check whether all digits of a given BCD number are valid is
presented here because more instructions will need to reuse the
same code.
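A plain-C sketch of the idea, assuming a 16-byte packed-decimal value with
the sign nibble in the low half of the last byte (this is not the QEMU
helper itself):

    #include <stdbool.h>
    #include <stdint.h>

    static bool bcd_digits_valid(const uint8_t bcd[16])
    {
        for (int i = 0; i < 16; i++) {
            if ((bcd[i] >> 4) > 9) {
                return false;             /* invalid high digit */
            }
            if (i != 15 && (bcd[i] & 0xf) > 9) {
                return false;             /* invalid low digit; byte 15's low
                                             nibble is the sign, not a digit */
            }
        }
        return true;
    }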
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The structure and corresponding defines and functions need to be used
outside of fpu_helper.c as well.
Add u8, u16, u32 and Int128 to the structure.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The 'cpu_version' field in PowerPCCPU is badly named. It's named after the
'cpu-version' device tree property where it is advertised, but that meaning
may not be obvious in most places it appears.
Worse, it doesn't even really correspond to that device tree property. The
property contains either the processor's PVR, or, if the CPU is running in
a compatibility mode, a special "logical PVR" representing which mode.
Rename the cpu_version field, and a number of related variables to
compat_pvr to make this clearer.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Thomas Huth <thuth@redhat.com>
The pseries machine type is a bit unusual in that it runs a paravirtualized
guest. The guest expects to interact with a hypervisor, and qemu
emulates the functions of that hypervisor directly, rather than executing
hypervisor code within the emulated system.
To implement this in TCG, we need to intercept hypercall instructions and
direct them to the machine's hypercall handlers, rather than attempting to
perform a privilege change within TCG. This is controlled by a global
hook - cpu_ppc_hypercall.
This cleanup makes the handling a little cleaner and more extensible than
a single global variable. Instead, each CPU which is to have hypercalls
intercepted has a pointer set to a QOM object implementing a new virtual hypervisor
interface. A method in that interface is called by TCG when it sees a
hypercall instruction. It's possible we may want to add other methods in
future.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
bcdsetsgn.: Decimal set sign. This instruction copies the register
value to the result register but adjusts the sign according to
the preferred sign value.
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
bcdcpsgn.: Decimal copy sign. Given two registers vra and vrb, it
copies the vra value with vrb sign to the result register vrt.
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
bcdctsq.: Decimal convert to signed quadword. It is possible to
convert packed decimal values to signed quadwords.
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
bcdcfsq.: Decimal convert from signed quadword. It is not possible
to convert values less than -10^31-1 or greater than 10^31-1 to be
represented in packed decimal format.
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
[dwg: Corrected constant which should be 10^16-1 but was 10^17-1]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
stxsd: Store VSX Scalar Dword
stxssp: Store VSX Scalar SP
Moreover, DQ-form and DS-form instructions share the same primary
opcode (0x3D). For DQ-form, bits 29:31 are used; for DS-form, bits 30:31
are used. A common routine to decode the primary opcode (0x3D) into
ds-form/dq-form instructions is required.
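In IBM bit numbering (bit 31 being the least significant bit of the
instruction word), the dispatch amounts to something like this sketch
(illustrative helpers, not the QEMU decoder):

    #include <stdint.h>

    static unsigned ds_form_xo(uint32_t insn)
    {
        return insn & 0x3;    /* bits 30:31 select the DS-form instruction */
    }

    static unsigned dq_form_xo(uint32_t insn)
    {
        return insn & 0x7;    /* bits 29:31 select the DQ-form instruction */
    }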
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
lxsd: Load VSX Scalar Dword
lxssp: Load VSX Scalar Single
Moreover, DS-form instructions share the same primary opcode; bits
30:31 are used to decode the instruction. Use a common routine to decode
the primary opcode (0x39) for ds-form instructions and branch out depending
on bits 30:31.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
- xscmpodp & xscmpudp are missing a flags reset.
- In xscmpodp, VXCC should be set only if VE is 0 for the signalling NaN
case, and VXCC should be set by explicitly checking for the quiet NaN case.
- Comparison is being done only if the operands are not NaNs. However, as
per the ISA, it should be done even when the operands are NaNs.
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Add _BIT to CRF_[GT,LT,EQ,SO] and introduce CRF_[GT,LT,EQ,SO] for usage
without shifts in the code. This simplifies the code.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Move instruction decode helpers to target-ppc/internal.h so that some
of these can be used from outside of translate.c. This movement also
helps to get rid of some duplicate helpers from target-ppc/fpu_helper.c.
Suggested-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Current migration code cannot handle some data structures such as
QTAILQ in qemu/queue.h. Here we extend the signatures of put/get
in VMStateInfo so that customized handling is supported. put will now
return an int.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Jianjun Duan <duanj@linux.vnet.ibm.com>
Message-Id: <1484852453-12728-2-git-send-email-duanj@linux.vnet.ibm.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Move the generic cpu_synchronize_ functions to the common hw_accel.h header,
in order to prepare for the addition of a second hardware accelerator.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Vincent Palatin <vpalatin@chromium.org>
Message-Id: <f5c3cffe8d520011df1c2e5437bb814989b48332.1484045952.git.vpalatin@chromium.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
These are not needed since linux-headers/ provides up-to-date definitions.
The constants are in linux-headers/asm-powerpc/kvm.h.
The sole users, hw/intc/xics_kvm.c and target/ppc/kvm.c, include asm/kvm.h
via sysemu/kvm.h->linux/kvm.h.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge remote-tracking branch 'remotes/stsquad/tags/pull-tcg-common-tlb-reset-20170113-r1' into staging
This is the same as the v3 posted, except for a rebase and a few extra
sign-offs.
# gpg: Signature made Fri 13 Jan 2017 14:26:46 GMT
# gpg: using RSA key 0xFBD0DB095A9E2A44
# gpg: Good signature from "Alex Bennée (Master Work Key) <alex.bennee@linaro.org>"
# Primary key fingerprint: 6685 AE99 E751 67BC AFC8 DF35 FBD0 DB09 5A9E 2A44
* remotes/stsquad/tags/pull-tcg-common-tlb-reset-20170113-r1:
cputlb: drop flush_global flag from tlb_flush
cpu_common_reset: wrap TCG specific code in tcg_enabled()
qom/cpu: move tlb_flush to cpu_common_reset
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We have never had the concept of global TLB entries which would avoid
the flush, so we never actually use this flag. Drop it and make clear
that tlb_flush is the sledgehammer it has always been.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
[DG: ppc portions]
Acked-by: David Gibson <david@gibson.dropbear.id.au>
It is a common thing amongst the various cpu reset functions to want to
flush the SoftMMU's TLB entries. This is done either by calling
tlb_flush directly or by way of a general memset of the CPU
structure (sometimes both).
This moves the tlb_flush call to the common reset function and
additionally ensures it is only done for the CONFIG_SOFTMMU case and
when tcg is enabled.
In some target cases we add an empty end_of_reset_fields structure to the
target vCPU structure so that we have a clear end point for any memset which
is resetting values in the structure before CPU_COMMON (where the TLB
structures are).
While this is a nice clean-up in general it is also a precursor for
changes coming to cputlb for MTTCG where the clearing of entries
can't be done arbitrarily across vCPUs. Currently the cpu_reset
function is usually called from the context of another vCPU as the
architectural power up sequence is run. By using the cputlb API
functions we can ensure the right behaviour in the future.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
The new typename attribute on query-cpu-definitions will be used
to help management software use device-list-properties to check
which properties can be set using -cpu or -global for the CPU
model.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <1479320499-29818-1-git-send-email-ehabkost@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Use the new primitives for RLWINM and RLDICL.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Richard Henderson <rth@twiddle.net>
We've currently got 18 architectures in QEMU, and thus 18 target-xxx
folders in the root folder of the QEMU source tree. More architectures
(e.g. RISC-V, AVR) are likely to be included soon, too, so the main
folder of the QEMU sources slowly gets quite overcrowded with the
target-xxx folders.
To disburden the main folder a little bit, let's move the target-xxx
folders into a dedicated target/ folder, so that target-xxx/ simply
becomes target/xxx/ instead.
Acked-by: Laurent Vivier <laurent@vivier.eu> [m68k part]
Acked-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de> [tricore part]
Acked-by: Michael Walle <michael@walle.cc> [lm32 part]
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com> [s390x part]
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> [s390x part]
Acked-by: Eduardo Habkost <ehabkost@redhat.com> [i386 part]
Acked-by: Artyom Tarasenko <atar4qemu@gmail.com> [sparc part]
Acked-by: Richard Henderson <rth@twiddle.net> [alpha part]
Acked-by: Max Filippov <jcmvbkbc@gmail.com> [xtensa part]
Reviewed-by: David Gibson <david@gibson.dropbear.id.au> [ppc part]
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> [crisµblaze part]
Acked-by: Guan Xuetao <gxt@mprc.pku.edu.cn> [unicore32 part]
Signed-off-by: Thomas Huth <thuth@redhat.com>