Commit Graph

443 Commits

Author SHA1 Message Date
Anton Nefedov
e8ed97a647 qapi: flatten GuestPanicInformation union
Signed-off-by: Anton Nefedov <anton.nefedov@virtuozzo.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Eric Blake <eblake@redhat.com>
Message-Id: <1487614915-18710-3-git-send-email-den@openvz.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-03-03 16:40:03 +01:00
Paolo Bonzini
2ae41db262 KVM: do not use sigtimedwait to catch SIGBUS
Call kvm_on_sigbus_vcpu asynchronously from the VCPU thread.
Information for the SIGBUS can be stored in thread-local variables
and processed later in kvm_cpu_exec.
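
A minimal sketch of the idea, with hypothetical variable and function names (the real patch plumbs this through qemu's existing signal and VCPU-loop code):

    #include <signal.h>
    #include <stdbool.h>

    static __thread bool pending_sigbus;
    static __thread void *pending_sigbus_addr;
    static __thread int pending_sigbus_code;

    /* async-signal-safe handler: only stash the information */
    static void sigbus_handler(int sig, siginfo_t *info, void *ctx)
    {
        pending_sigbus = true;
        pending_sigbus_addr = info->si_addr;
        pending_sigbus_code = info->si_code;
    }

    /* called later from the VCPU thread, e.g. in kvm_cpu_exec() */
    static void process_pending_sigbus(void)
    {
        if (pending_sigbus) {
            pending_sigbus = false;
            /* hand pending_sigbus_addr/code to kvm_on_sigbus_vcpu() here */
        }
    }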

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-03-03 16:40:02 +01:00
Paolo Bonzini
4d39892cca KVM: remove kvm_arch_on_sigbus
Build it on kvm_arch_on_sigbus_vcpu instead.  They do the same
for "action optional" SIGBUSes, and the main thread should never get
"action required" SIGBUSes because it blocks the signal.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-03-03 16:40:02 +01:00
Paolo Bonzini
a16fc07ebd cpus: reorganize signal handling code
Move the KVM "eat signals" code under CONFIG_LINUX, in preparation
for moving it to kvm-all.c; reraise non-MCE SIGBUS immediately,
without passing it to KVM.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-03-03 16:40:02 +01:00
Paolo Bonzini
20e0ff59a9 KVM: x86: cleanup SIGBUS handlers
This patch should have no semantic change.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-03-03 16:40:02 +01:00
Nikunj A Dadhania
992d7e976c target/ppc: rewrite f[n]m[add,sub] using float64_muladd
Use the softfloat api for fused multiply-add.
Introduce a routine to set the FPSCR flags VXSNAN, VXIMZ and VXISI.
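
A rough sketch of the fmadd case using the softfloat fused multiply-add API (the helper name and FPSCR handling here are illustrative, not the exact patch):

    #include "cpu.h"             /* CPUPPCState, env->fp_status */
    #include "fpu/softfloat.h"   /* float64_muladd() */

    /* FRT = (FRA * FRC) + FRB, computed with a single rounding step;
     * the fmsub/fnmadd/fnmsub variants pass float_muladd_negate_c and/or
     * float_muladd_negate_result instead of 0 */
    static float64 do_fmadd(CPUPPCState *env, float64 fra, float64 frc,
                            float64 frb)
    {
        float64 ret = float64_muladd(fra, frc, frb, 0, &env->fp_status);
        /* a separate routine inspects the operands/result here and raises
         * the FPSCR invalid-operation flags (VXSNAN, VXIMZ, VXISI) */
        return ret;
    }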

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-03 11:38:33 +11:00
Sam Bobroff
ec975e839c spapr: Small cleanup of PPC MMU enums
The PPC MMU types are sometimes treated as if they were a bit field
and sometimes as if they were an enum, which causes maintenance
problems: flipping bits in the MMU type (which is done on both the 1TB
segment and 64K segment bits) currently produces new MMU type
values that are not handled in every "switch" on it, sometimes causing
an abort().

This patch provides some macros that can be used to filter out the
"bit field-like" bits so that the remainder of the value can be
switched on, like an enum. This allows removal of all of the
"degraded" types from the list and should ease maintenance.

Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-03 11:30:59 +11:00
Suraj Jitindar Singh
da82c73a95 target/ppc: Rework hash mmu page fault code and add defines for clarity
The hash mmu page fault handling code is responsible for generating ISIs
and DSIs when access permissions cause an access to fail. Part of this
involves setting the srr1 or dsisr registers to indicate what causes the
access to fail. Add defines for the bit fields of these registers and
rework the code to use these new defines in order to improve readability
and code clarity.

While we're here, update what is logged when an access fails to include
information as to what caused the access to fail, for debug purposes.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
[dwg: Moved constants to cpu.h since they're not MMUv3 specific]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-03 11:30:59 +11:00
Suraj Jitindar Singh
07a68f9907 target/ppc: Move no-execute and guarded page checking into new function
A pte entry has bit fields which can be used to make a page no-execute or
guarded; if either of these bits is set, then an instruction access to this
page will fail. Currently these bits are checked with the pp_prot function
however the ISA specifies that the access authority controlled by the
key-pp value pair should only be checked on an instruction access after
the no-execute and guard bits have already been verified to permit the
access.

Move the no-execute and guard bit checking into a new separate function.
Note that we can remove the check for the no-execute bit in the slb entry
since this check was already performed above when we obtained the slb
entry.

In the event that the no-execute or guard bits are set, an ISI should be
generated with the SRR1_NOEXEC_GUARD (0x10000000) bit set in srr1. Add a
define for this for clarity.
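
A sketch of the new check, using the usual 64-bit hash PTE bit names (treat the values and the SRR1 define as illustrative):

    #include <stdint.h>

    #define HPTE64_R_N        0x0000000000000004ULL   /* no-execute */
    #define HPTE64_R_G        0x0000000000000008ULL   /* guarded */
    #define SRR1_NOEXEC_GUARD 0x10000000

    /* returns nonzero if an instruction fetch from this page must fail,
     * in which case an ISI is raised with SRR1_NOEXEC_GUARD set in srr1 */
    static int pte_noexec_or_guarded(uint64_t pte1)
    {
        return (pte1 & (HPTE64_R_N | HPTE64_R_G)) != 0;
    }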

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
[dwg: Move constants to cpu.h since they're not MMUv3 specific]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-03 11:30:59 +11:00
Suraj Jitindar Singh
347a5c73ba target/ppc: Add execute permission checking to access authority check
Basic storage protection defines various access authority permissions
based on a slb storage key and pte pp value pair. This access authority
defines read, write and execute permissions; however, currently we only
use this to control read and write permissions and ignore the execute
control.

Fix the code to allow execute permissions based on the key-pp value pair.
Execute is allowed under the same conditions which enable reads.
(i.e. read permission -> execute permission)
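
A condensed sketch of the rule (the key/pp decode below is a simplification of the full table; PAGE_* flags as in qemu):

    #define PAGE_READ  0x1
    #define PAGE_WRITE 0x2
    #define PAGE_EXEC  0x4

    static int hash64_key_pp_prot(int key, int pp)
    {
        int prot;

        if (key == 0) {
            /* pp 0..2 allow read/write, the remaining encodings read-only */
            prot = (pp <= 2) ? PAGE_READ | PAGE_WRITE : PAGE_READ;
        } else {
            switch (pp) {
            case 1: case 3: prot = PAGE_READ;              break;
            case 2:         prot = PAGE_READ | PAGE_WRITE; break;
            default:        prot = 0;                      break;
            }
        }
        if (prot & PAGE_READ) {
            prot |= PAGE_EXEC;   /* execute allowed wherever read is allowed */
        }
        return prot;
    }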

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-03 11:30:59 +11:00
Suraj Jitindar Singh
a6152b52bc target/ppc: Add Instruction Authority Mask Register Check
The instruction authority mask register (IAMR) can be used to restrict
permissions for instruction fetch accesses on a per key basis for each
of 32 different key values. Access permissions are derived based on the
specific key value stored in the relevant page table entry.

The IAMR was introduced in, and is present in processors since, POWER8
(ISA v2.07). Thus introduce a function to check access permissions based
on the pte key value and the contents of the IAMR when handling a page
fault to ensure sufficient access permissions for an instruction fetch.

A hash pte contains a key value in bits 2:3|52:54 of the second double word
of the pte; this key value gives an index into the IAMR, which contains 32
2-bit access masks. If the least significant bit of the 2-bit access mask
corresponding to the given key value is set (IAMR[key] & 0x1 == 0x1) then
the instruction fetch is not permitted and an ISI is generated accordingly.
While we're here, add defines for the srr1 bits to be set for the ISI for
clarity.

e.g.

pte:
dw0 [XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX]
dw1 [XX01XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX010XXXXXXXXX]
       ^^                                                ^^^
key = 01010 (0x0a)

IAMR: [XXXXXXXXXXXXXXXXXXXX01XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX]
                           ^^
Access mask = 0b01

Test access mask: 0b01 & 0x1 == 0x1

Least significant bit of the access mask is set, thus the instruction fetch
is not permitted. We should generate an instruction storage interrupt (ISI)
with bit 42 of SRR1 set to indicate access precluded by virtual page class
key protection.
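
A sketch of the check in C, following the layout described above (the macro names mirror the usual hash-PTE key fields but should be treated as illustrative):

    #include <stdint.h>

    #define HPTE64_R_KEY_HI  0x3000000000000000ULL        /* pte dw1 bits 2:3   */
    #define HPTE64_R_KEY_LO  0x0000000000000e00ULL        /* pte dw1 bits 52:54 */
    #define HPTE64_R_KEY(x)  ((((x) & HPTE64_R_KEY_HI) >> 57) | \
                              (((x) & HPTE64_R_KEY_LO) >> 9))

    /* nonzero => fetch precluded by virtual page class key protection */
    static int iamr_forbids_fetch(uint64_t iamr, uint64_t pte1)
    {
        unsigned key  = HPTE64_R_KEY(pte1);                /* 0..31 */
        unsigned mask = (iamr >> (62 - 2 * key)) & 0x3;    /* 2-bit field for key */

        return mask & 0x1;
    }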

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
[dwg: Move new constants to cpu.h, since they're not MMUv3 specific]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-03 11:30:59 +11:00
Suraj Jitindar Singh
6f46dcb3e5 target/ppc/POWER9: Add cpu_has_work function for POWER9
The cpu_has_work function masks pending interrupts against the LPCR to
determine if there is work for the cpu. Add a function to do this
for POWER9 and add it to the POWER9 cpu definition. This is similar to the
POWER8 version, except it uses the LPCR bits as defined for POWER9.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-03 11:30:59 +11:00
Suraj Jitindar Singh
b2899495e3 target/ppc/POWER9: Add POWER9 mmu fault handler
Add a new mmu fault handler for the POWER9 cpu and add it as the handler
for the POWER9 cpu definition.

This handler checks if the guest is radix or hash based on the value in the
partition table entry and calls the correct fault handler accordingly.

The hash fault handling code has also been updated to check if the
partition is using segment tables.

Currently only legacy hash (no segment tables) is supported.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-03 11:30:59 +11:00
Suraj Jitindar Singh
4f4f28ffc1 target/ppc: Don't gen an SDR1 on POWER9 and rework register creation
POWER9 doesn't have a storage description register 1 (SDR1) which is used
to store the base and size of the hash table. Thus we don't need to
generate this register on the POWER9 cpu model. While we're here, the
register generation code for 970, POWER5+, POWER<7/8/9> in general is a
mess: we call a generic function from a model-specific function, which
then attempts to call model-specific functions, so rework this for
readability.

We update ppc_cpu_dump_state so that "info registers" will only display
the value of sdr1 if the register has been generated.

As mentioned above the register generation for the pcc->init_proc
function for 970, POWER5+, POWER7, POWER8 and POWER9 has been reworked
for improved clarity. Instead of calling init_proc_book3s_64 which then
attempts to generate the correct registers through a mess of if statements,
we remove this function and instead call the appropriate register
generation functions directly. This follows the register generation model
used for earlier cpu models (pre-970) whereby cpu specific registers are
generated directly in the init_proc function and makes it easier to
add/remove specific registers for new cpu models.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-03 11:30:59 +11:00
Suraj Jitindar Singh
9861bb3efd target/ppc: Add patb_entry to sPAPRMachineState
ISA v3.00 adds the idea of a partition table which is used to store the
address translation details for all partitions on the system. The partition
table consists of pairs of double words, indexed by partition id, where the
second double word of an entry contains the location of the process table in
guest memory. The process table is registered by the guest via an h-call.

We need somewhere to store the address of the process table so we add an entry
to the sPAPRMachineState struct called patb_entry to represent the second
doubleword of a single partition table entry corresponding to the current
guest. We need to store this value so we know if the guest is using radix or
hash translation and the location of the corresponding process table in guest
memory. Since we only have a single guest per qemu instance, we only need one
entry.

Since the partition table is technically a hypervisor resource we require that
access to it is abstracted by the virtual hypervisor through the get_patbe()
call. Currently the value of the entry is never set (and thus
defaults to 0, indicating hash), but it will be required to implement both
POWER9 kvm support and tcg radix support.

We also add this field to be migrated as part of the sPAPRMachineState, since
we will need it on the receiving side: the guest will never tell us this
information again, and we need it to perform translation.
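
A sketch of the new state plus its migration entry (field placement and the vmstate section name are illustrative):

    #include "migration/vmstate.h"

    typedef struct sPAPRMachineState {
        /* ... existing fields elided ... */
        uint64_t patb_entry;   /* 2nd doubleword of this guest's PATE */
    } sPAPRMachineState;

    static const VMStateDescription vmstate_spapr_patb_entry = {
        .name = "spapr/patb_entry",            /* illustrative name */
        .version_id = 1,
        .minimum_version_id = 1,
        .fields = (VMStateField[]) {
            VMSTATE_UINT64(patb_entry, sPAPRMachineState),
            VMSTATE_END_OF_LIST()
        },
    };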

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-03 11:30:59 +11:00
David Gibson
0922f1e487 target/ppc/POWER9: Add POWERPC_MMU_V3 bit
For easier handling of future processors using the POWER9 or something
close to it, add a new bit in the MMU model.  This was originally from a
revised version of 86cf1e9 "target/ppc/POWER9: Add ISAv3.00 MMU definition"
but the older version of the patch was already merged.  This makes the
change on top of the original version.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-03 11:30:59 +11:00
Alexey Kardashevskiy
9c60766887 exec, kvm, target-ppc: Move getrampagesize() to common code
getrampagesize() returns the largest supported page size and is mainly
used to know if huge pages are enabled.

However, it is implemented in target-ppc/kvm.c and is not available
in TCG or on other architectures.

This renames and moves gethugepagesize() to mmap-alloc.c where
an fd-based analog of it is already implemented. This renames and moves
getrampagesize() to exec.c as it seems to be the common place for
helpers like this.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-03 11:30:59 +11:00
Suraj Jitindar Singh
9b44c836dc target/ppc: Add POWER9/ISAv3.00 to compat_table
compat_table contains the list of logical pvr compat modes which a cpu can
operate in. It is a list of struct CompatInfo which contains the given pvr
value for a compat mode, the pcr bits which should be set to operate in
that compat mode, the pcr level which must be present in pcr_supported for
a processor to support that compat mode and the max threads possible in
that compat mode.

Add an entry for the POWER9/ISAv3.00 logical pvr which represents a
processor running with support for logical pvr 0x0f000005. A processor
running in this mode should have PCR_COMPAT_3_00 set in the pcr (if
available in pcr_mask) and should have PCR_COMPAT_3_00 in pcr_supported
to indicate that it is capable of running in this compat mode.

Also add PCR_COMPAT_3_00 to the bits which must be set for all previous
compat modes. Since no processor models contain this bit in pcr_mask yet,
it will never be set, but this ensures we don't forget to do so in the future.
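
A sketch of what such an entry could look like, given the struct described above (field names follow that description; the PCR constant and max_threads value are illustrative):

    #include <stdint.h>

    typedef struct CompatInfo {
        const char *name;
        uint32_t pvr;          /* logical PVR selecting the compat mode */
        uint64_t pcr;          /* PCR bits to set when running in this mode */
        uint64_t pcr_level;    /* bit that must be present in pcr_supported */
        int max_threads;
    } CompatInfo;

    #define PCR_COMPAT_3_00  0x0000000000000001ULL   /* illustrative value */

    static const CompatInfo power9_compat = {
        .name        = "power9",
        .pvr         = 0x0f000005,       /* logical PVR from the text above */
        .pcr         = PCR_COMPAT_3_00,
        .pcr_level   = PCR_COMPAT_3_00,
        .max_threads = 8,                /* illustrative */
    };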

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-03 11:30:59 +11:00
Peter Maydell
ecb24d334a Queued sparc patch
-----BEGIN PGP SIGNATURE-----
 
 iQEcBAABAgAGBQJYtyaxAAoJEK0ScMxN0CebDmsIANYR6KJsQqD/EstekY+6aCG5
 L6rd5bwZz7ViQA8GMU7+yOFEAlbvCF+AROrcaGr0QCOC/v4jxsUFJjlmjj0QrhkT
 YPyfvQ8EMwUwCR6ScxpKOD4rMxRwvgviC4yzNmM6L+9U3lLLIn/po+6xDXamc5Fj
 rIm/6xhyPyzxHOZcBEXG3EsD6Meb32tI8NTJeMCkKXiHwxMkJe9Rn82oTzA2ObAv
 BiEP+t3KqHm/ZPnvByWcS590IL22Xat1SqwfC7OaNm21sRtfmMSmX/w3MRZJADTx
 XTBX3reJH+sZD11TvsTAY/zqmoT9VoBVWhq2hfjrqmukJluwHGsAEwVRFhhNRao=
 =Ieag
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/rth/tags/pull-tgt-20170302' into staging

Queued sparc patch

# gpg: Signature made Wed 01 Mar 2017 19:53:21 GMT
# gpg:                using RSA key 0xAD1270CC4DD0279B
# gpg: Good signature from "Richard Henderson <rth7680@gmail.com>"
# gpg:                 aka "Richard Henderson <rth@redhat.com>"
# gpg:                 aka "Richard Henderson <rth@twiddle.net>"
# Primary key fingerprint: 9CB1 8DDA F8E8 49AD 2AFC  16A4 AD12 70CC 4DD0 279B

* remotes/rth/tags/pull-tgt-20170302:
  target/sparc: Restore ldstub of odd asis

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-03-02 22:06:41 +00:00
Peter Maydell
ab711e216b ppc patch queue for 2017-03-01
I was hoping to get this pull request squeezed in before the soft
 freeze, but I ran into some difficulties during testing.  Everything
 here was at least posted before the soft freeze, so I'm hoping we can
 still merge it for 2.9.
 
 The biggest things here are:
     * Cleanups to handling of hashed page tables, that will make
       adding support for the POWER9 MMU easier
     * Cleanups to the XICS interrupt controller that will make
       implementing the powernv machine easier
     * TCG implementation of extended overflow and carry handling for
       POWER9
 
 It also includes:
     * Increasing the CPU limit for pseries to 1024 vCPUs
     * Generating proper OF node names in qemu (making hotplug and
       coldplug logic closer together)
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2
 
 iQIcBAABCAAGBQJYtlFaAAoJEGw4ysog2bOSPEIP/0eu92l+pP/05AROudxNu0NT
 O+JMUCuUM6phK/O/NU74G1Lbmr601Uu+kElcfuMRILwNQ3QWnyDk3OsIERV2oOlC
 9mqgqKfMIv29QZcN/U9gb2mBBakJV6z2gAF89CXpdmwxOjzyZR3DwiFccbCzKcl+
 4XFlGYYirye7fBZr6ZsSoZcM4EPIYdNe2bY7ymqisXnsvy0C3PYTZXG1s7+s5CDZ
 JWzKX1cgpePUj7rW0ecghP2MYKcKBxIzzqNbzl4ihObK7dSTfF2nUPVj6rYhtl8s
 V2vQkhUveR4FcE/pwVbPAUWbQWsTk1NUbRkqB8AqV+xbTfaFIN5MdFdvJzGD0gTk
 htzJFFXi6SQAsC0eH6RsF3JBFZe1HnDxql9A3htvTdPC1D/thFETg2nnb8Cd5EJM
 X7bdouew5txSP2SEYDfXA2/2IS8Fh3ZD+mXM6Y56Rl4KDEd+LeClfE2xDUAhU4C6
 rA0QGNu0d9nRlXEkj3vAQPRmdp1hzREve40ha0DnhQh4DSsCHJSk3yaDn2pQC1kZ
 01EhJF65AK1jdEyhT7Tk0PIfSAzkNjvS+K2Y9kFaGh5X9PVjCeMOrZaaRGS2+Im1
 3C7YJNUQb7VDjfHYKCQBzBXUJynzO8uDt9k8juhZyy67LvsHJBb+6DURSLV/y03k
 KuvTG7yrEUqpNmycjd/M
 =gI0E
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-2.9-20170301' into staging

ppc patch queue for 2017-03-01

I was hoping to get this pull request squeezed in before the soft
freeze, but I ran into some difficulties during testing.  Everything
here was at least posted before the soft freeze, so I'm hoping we can
still merge it for 2.9.

The biggest things here are:
    * Cleanups to handling of hashed page tables, that will make
      adding support for the POWER9 MMU easier
    * Cleanups to the XICS interrupt controller that will make
      implementing the powernv machine easier
    * TCG implementation of extended overflow and carry handling for
      POWER9

It also includes:
    * Increasing the CPU limit for pseries to 1024 vCPUs
    * Generating proper OF node names in qemu (making hotplug and
      coldplug logic closer together)

# gpg: Signature made Wed 01 Mar 2017 04:43:06 GMT
# gpg:                using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg:                 aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg:                 aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg:                 aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E  87DC 6C38 CACA 20D9 B392

* remotes/dgibson/tags/ppc-for-2.9-20170301: (50 commits)
  Add PowerPC 32-bit guest memory dump support
  ppc/xics: rename 'ICPState *' variables to 'icp'
  ppc/xics: move InterruptStatsProvider to the sPAPR machine
  ppc/xics: move ics-simple post_load under the machine
  ppc/xics: remove the XICSState classes
  ppc/xics: export the XICS init routines
  ppc/xics: move the ICP array under the sPAPR machine
  ppc/xics: register the reset handler of ICP objects
  ppc/xics: simplify spapr_dt_xics() interface
  ppc/xics: use the QOM interface to grab an ICP
  ppc/xics: move the cpu_setup() handler under the ICPState class
  ppc/xics: simplify the cpu_setup() handler
  ppc/xics: move kernel_xics_fd out of KVMXICSState
  ppc/xics: extend the QOM interface to handle ICPs
  ppc/xics: remove the XICS list of ICS
  ppc/xics: register the reset handler of ICS objects
  ppc/xics: remove xics_find_source()
  ppc/xics: use the QOM interface to resend irqs
  ppc/xics: use the QOM interface to get irqs
  ppc/xics: use the QOM interface under the sPAPR machine
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-03-02 13:50:55 +00:00
Peter Maydell
666095c852 x86 queue, 2017-02-27
"-cpu max" and query-cpu-model-expansion support for x86. This
 should be the last x86 pull request before 2.9 soft freeze.
 -----BEGIN PGP SIGNATURE-----
 
 iQIcBAABCAAGBQJYtFKvAAoJECgHk2+YTcWmyCwQAJEBM8NdOmN14yA3qTiTs0Kd
 D/3VWSxtu9t41g39+yho70c+ZnpqGW28/WbY7E4ovAPRIoUI/pmKACY42k+WrTmK
 MIBMesp1YnkxhwrFW9CwgqiUV8nr5ZMlzW/pQU/GaXbcH+7KfObeI93iGhtGjWvi
 4nNvrK7b5mz2wPU6s1j+Bz2mp0CMd/sktmiH93tyWU+KgU7NXvMDPInVkfvBMvZN
 5D6JLIeKxrndbaaUgvGbR4SUUmRs8TZFYfEbOdkkjIqh7MAKVKCCipFaxWIEfndr
 bs1MDmw6uIUaI55JuWaXb//BkS+jai1dmn+ZEzMoisetlheSwR8cEStFJBxcm/7n
 WQfxUd6TuNJ9PC1FIvb/OHUCGvzb+vtbEYAMmvCv8BVMnMivN7WNliu3rNVRgWBK
 OHClBPdgwoIx2cGt6ic1rvxHxcpjeJ/YXBzL/JbBkblckpxbNRcW1NRTZKHIe3vr
 JcPMEoP8g5d9ZHOG0WqBhKtJ3vUrxF3xqBKuR1Ha7QWpyKe9YF+RKrIA9dZkhLy0
 Jh0fPZn2PmQrbLuZTC1u7Sgp22Duy7RcfJ7SikR+uhMLtkvToqu3ywLteW6Ta1by
 oinb2UYMazwpAKKDcab4GdNJHPOuhnDw58osBVTyBiiN1tJjH+BhnVV4bYJVpaHI
 MJIx5QvwqocSO0qoDZxo
 =KPbC
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/ehabkost/tags/x86-pull-request' into staging

x86 queue, 2017-02-27

"-cpu max" and query-cpu-model-expansion support for x86. This
should be the last x86 pull request before 2.9 soft freeze.

# gpg: Signature made Mon 27 Feb 2017 16:24:15 GMT
# gpg:                using RSA key 0x2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF  D1AA 2807 936F 984D C5A6

* remotes/ehabkost/tags/x86-pull-request:
  i386: Improve query-cpu-model-expansion full mode
  i386: Implement query-cpu-model-expansion QMP command
  i386: Define static "base" CPU model
  i386: Don't set CPUClass::cpu_def on "max" model
  i386: Make "max" model not use any host CPUID info on TCG
  i386: Create "max" CPU model
  qapi-schema: Comment about full expansion of non-migration-safe models
  i386: Reorganize and document CPUID initialization steps
  i386: Rename X86CPU::host_features to X86CPU::max_features
  i386: Add ordering field to CPUClass
  i386: Unset cannot_destroy_with_object_finalize_yet on "host" model

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-03-02 11:18:01 +00:00
Richard Henderson
3db010c339 target/sparc: Restore ldstub of odd asis
Fixes the booting of ss20 roms.

Cc: qemu-stable@nongnu.org
Reported-by: Michael Russo <mike@papersolve.com>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-03-02 06:52:43 +11:00
Peter Maydell
b28f9db1a7 target-arm queue:
* raspi2: add gpio controller and sdhost controller, with
    the wiring so the guest can switch which controller the
    SD card is attached to
    (this is sufficient to get raspbian kernels to boot)
  * GICv3: support state save/restore from KVM
  * update Linux headers to 4.11
  * refactor and QOMify the ARMv7M container object
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIcBAABCAAGBQJYta9VAAoJEDwlJe0UNgzeMVoQAJXv3EEcz8mfHQXGbjoak7Md
 RLwgsf2RRnjK9VsrXZuaH81FzpIUHpx3tV/w74w+VqOOUEo2g3QCv6kakZ2UYfS+
 tsf3FgNyX/z/OzNcOaxn6CzBLpHATOWsFZSPVf3FPh81ytUaB2tf3BJZR845cVIe
 0Yh+4klw2mYVMOX4UExyOrmifW58eQRKS3MFQTsKqchbOGdsQpCCnMCj5WhHC+rY
 tRQg1542/0seS3pY55Qpi6Q080ePky6AJQc672vPIqd2bDN/klGhmPpZIPokXn95
 vgjZe1/mdhcSX2xnUFiNyOBijjW7yUsL1Dx3LuoPH7tDqVsl3NWhJuhhfoSau1dY
 suPuckqrqTPz1AwFML0NN+lQLlH/6pfV2ZeRQJSf6bEhVBBjcyeCzy3vrRRmQqrc
 N2I9/4vCR22Yp+zIhGBwtNkgL3DVZFeiMQRwDe6lzMJhZOQ9Wz04bXHnEmo3Ht62
 AZ9IUQBc+mgoPlmJXAo6Jia7AVZ0x+Nwoa1okoptywXAOpIHazpAuW04vvjgpBy3
 VdcRqlDluv5azqHPmS26Adt54fZ21OkARKizE3kGOY47fJtMrOg+JK1AjvX3D/Iq
 t2yjYdF1zN7JfkJzDZKuvmSsnovTfiIeTATkD49E5zaU0inBt6eqSihZwKQmY3SY
 MzNb8mv8E7KraMw5HaWh
 =IC84
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20170228-1' into staging

target-arm queue:
 * raspi2: add gpio controller and sdhost controller, with
   the wiring so the guest can switch which controller the
   SD card is attached to
   (this is sufficient to get raspbian kernels to boot)
 * GICv3: support state save/restore from KVM
 * update Linux headers to 4.11
 * refactor and QOMify the ARMv7M container object

# gpg: Signature made Tue 28 Feb 2017 17:11:49 GMT
# gpg:                using RSA key 0x3C2525ED14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"
# gpg:                 aka "Peter Maydell <pmaydell@gmail.com>"
# gpg:                 aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>"
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83  15CF 3C25 25ED 1436 0CDE

* remotes/pmaydell/tags/pull-target-arm-20170228-1: (21 commits)
  bcm2835: add sdhost and gpio controllers
  bcm2835_gpio: add bcm2835 gpio controller
  hw/sd: add card-reparenting function
  qdev: Have qdev_set_parent_bus() handle devices already on a bus
  hw/intc/arm_gicv3_kvm: Reset GICv3 cpu interface registers
  target-arm: Add GICv3CPUState in CPUARMState struct
  hw/intc/arm_gicv3_kvm: Implement get/put functions
  hw/intc/arm_gicv3_kvm: Add ICC_SRE_EL1 register to vmstate
  update Linux headers to 4.11
  update-linux-headers: update for 4.11
  stm32f205: Rename 'nvic' local to 'armv7m'
  stm32f205: Create armv7m object without using armv7m_init()
  armv7m: Split systick out from NVIC
  armv7m: Don't put core v7M devices under CONFIG_STELLARIS
  armv7m: Make bitband device take the address space to access
  armv7m: Make NVIC expose a memory region rather than mapping itself
  armv7m: Make ARMv7M object take memory region link
  armv7m: Use QOMified armv7m object in armv7m_init()
  armv7m: QOMify the armv7m container
  armv7m: Move NVICState struct definition into header
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-03-01 17:58:54 +00:00
Mike Nawrocki
356bb70ed1 Add PowerPC 32-bit guest memory dump support
This patch extends support for the `dump-guest-memory` command to the
32-bit PowerPC architecture. It relies on the assumption that a 64-bit
guest will not dump a 32-bit core file (and vice versa).

[dwg: I suspect this patch won't cover all cases, in particular a
32-bit machine type on a 64-bit qemu build.  However, it does strictly
more than what we had before, so might as well apply as a starting
point]

Signed-off-by: Mike Nawrocki <michael.nawrocki@gtri.gatech.edu>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-01 11:53:58 +11:00
Nikunj A Dadhania
b63d043418 target/ppc: add mcrxrx instruction
mcrxrx: Move to CR from XER Extended

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-01 11:23:39 +11:00
Nikunj A Dadhania
c44027ffb9 target/ppc: add ov32 flag in divide operations
Add helper_div_compute_ov() in the int_helper for updating the overflow
flags.

For Divide Word:
SO, OV, and OV32 bits reflect overflow of the 32-bit result

For Divide DoubleWord:
SO, OV, and OV32 bits reflect overflow of the 64-bit result
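
A sketch of the flag update (helper_div_compute_ov() in the patch works on the CPU state; this standalone version just shows the rule, with illustrative XER bit positions):

    #include <stdbool.h>
    #include <stdint.h>

    #define XER_SO   (1u << 31)
    #define XER_OV   (1u << 30)
    #define XER_OV32 (1u << 19)

    static void div_update_ov(uint32_t *xer, bool overflow)
    {
        if (overflow) {
            /* divide overflow is a single condition, so OV and OV32 move
             * together, and SO becomes sticky */
            *xer |= XER_SO | XER_OV | XER_OV32;
        } else {
            *xer &= ~(XER_OV | XER_OV32);
        }
    }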

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-01 11:23:39 +11:00
Nikunj A Dadhania
61aa9a697a target/ppc: add ov32 flag for multiply low insns
For Multiply Word:
SO, OV, and OV32 bits reflect overflow of the 32-bit result

For Multiply DoubleWord:
SO, OV, and OV32 bits reflect overflow of the 64-bit result

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-01 11:23:39 +11:00
Nikunj A Dadhania
1480d71cbe target/ppc: use tcg ops for neg instruction
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-01 11:23:39 +11:00
Nikunj A Dadhania
dc0ad84449 target/ppc: update overflow flags for add/sub
* SO and OV reflect overflow of the 64-bit result in 64-bit mode and
  overflow of the low-order 32-bit result in 32-bit mode

* OV32 reflects overflow of the low-order 32-bit result independent of the
  mode (a sketch follows below)
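
A sketch of the computation for an add, using the usual sign-bit xor trick (names are illustrative; the patch does this with tcg ops):

    #include <stdint.h>

    static void add_compute_ov_ov32(uint64_t a, uint64_t b, uint64_t res,
                                    int is_64bit, int *ov, int *ov32)
    {
        /* overflow iff both operands have the sign that the result lacks */
        uint64_t t = (a ^ res) & (b ^ res);

        *ov32 = (t >> 31) & 1;                     /* low-order 32-bit overflow */
        *ov   = is_64bit ? (t >> 63) & 1 : *ov32;  /* width follows the MSR mode */
    }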

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-01 11:23:39 +11:00
Nikunj A Dadhania
33903d0aa4 target/ppc: update ca32 in arithmetic subtract
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-01 11:23:39 +11:00
Nikunj A Dadhania
6b10d008a0 target/ppc: update ca32 in arithmetic add
Add a routine to compute ca32: gen_op_arith_compute_ca32.

For 64-bit mode, use the ca32 computation routine; for 32-bit mode, CA
and CA32 will have the same value.
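
A sketch of what ca32 means for an add-with-carry (the patch computes it with tcg ops in gen_op_arith_compute_ca32; this C version is illustrative):

    #include <stdint.h>

    static unsigned compute_ca32(uint64_t a, uint64_t b, unsigned carry_in)
    {
        /* redo the low 32-bit addition in 64 bits and pick up the carry out */
        uint64_t low = (uint64_t)(uint32_t)a + (uint32_t)b + carry_in;

        return (unsigned)(low >> 32) & 1;
    }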

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-01 11:23:39 +11:00
Nikunj A Dadhania
dd09c36159 target/ppc: support for 32-bit carry and overflow
POWER ISA 3.0 adds CA32 and OV32 status in 64-bit mode. Add the flags
and corresponding defines.

Moreover, CA32 is updated when CA is updated and OV32 is updated when OV
is updated.

Arithmetic instructions:
    * Additions and Subtractions:

        addic, addic., subfic, addc, subfc, adde, subfe, addme, subfme,
        addze, and subfze always update CA and CA32.

        => CA reflects the carry out of bit 0 in 64-bit mode and out of
           bit 32 in 32-bit mode.
        => CA32 reflects the carry out of bit 32 independent of the
           mode.

        => SO and OV reflect overflow of the 64-bit result in 64-bit
           mode and overflow of the low-order 32-bit result in 32-bit
           mode
        => OV32 reflects overflow of the low-order 32-bit result independent
           of the mode

    * Multiply Low and Divide:

        For mulld, divd, divde, divdu and divdeu: SO, OV, and OV32 bits
        reflect overflow of the 64-bit result

        For mullw, divw, divwe, divwu and divweu: SO, OV, and OV32 bits
        reflect overflow of the 32-bit result

     * Negate with OE=1 (nego)

       For 64-bit mode if the register RA contains
       0x8000_0000_0000_0000, OV and OV32 are set to 1.

       For 32-bit mode if the register RA contains 0x8000_0000, OV and
       OV32 are set to 1.

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-01 11:23:39 +11:00
David Gibson
e78308fd39 target/ppc: Correct SDR1 masking
SDR_64_HTABORG, which indicates the bits of the SDR1 register to use for
the base of a 64-bit machine's hashed page table (HPT), isn't correct.  It
includes the top 46 bits of the register, but in fact the top 4 bits must
be zero (according to the ISA v2.07).  No actual implementation has
supported close to 2^60 bytes of physical address space, so it's kind of
irrelevant, but we might as well correct this.

In addition, although we checked for bad size values in SDR1, we never
reported an error if entirely invalid bits were set there.  Add this check
to ppc_store_sdr1().
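
A sketch of the corrected mask and the new validity check (values follow the description above; treat them as illustrative):

    #include <stdint.h>

    #define SDR_64_HTABORG   0x0FFFFFFFFFFC0000ULL  /* top 4 bits now excluded */
    #define SDR_64_HTABSIZE  0x000000000000001FULL

    /* ppc_store_sdr1() can now reject writes that set any other bit */
    static int sdr1_value_valid(uint64_t value)
    {
        return (value & ~(SDR_64_HTABORG | SDR_64_HTABSIZE)) == 0;
    }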

Reported-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-01 11:23:39 +11:00
Suraj Jitindar Singh
8d63351f9f target/ppc: Remove the function ppc_hash64_set_sdr1()
The function ppc_hash64_set_sdr1 basically checked the htabsize and set an
error if it was too big; otherwise it just stored the value in SPR_SDR1.

Given that the only function which calls ppc_hash64_set_sdr1() is
ppc_store_sdr1(), why not handle the checking in ppc_store_sdr1() to avoid
the extra function call. Note that ppc_store_sdr1() already stores the
value in SPR_SDR1 anyway, so we were doing it twice.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
[dwg: Remove unnecessary error temporary]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-01 11:23:39 +11:00
David Gibson
e57ca75ce3 target/ppc: Manage external HPT via virtual hypervisor
The pseries machine type implements the behaviour of a PAPR compliant
hypervisor, without actually executing such a hypervisor on the virtual
CPU.  To do this we need some hooks in the CPU code to make hypervisor
facilities get redirected to the machine instead of emulated internally.

For hypercalls this is managed through the cpu->vhyp field, which points
to a QOM interface with a method implementing the hypercall.

For the hashed page table (HPT) - also a hypervisor resource - we use an
older hack.  CPUPPCState has an 'external_htab' field which when non-NULL
indicates that the HPT is stored in qemu memory, rather than within the
guest's address space.

For consistency - and to make some future extensions easier - this merges
the external HPT mechanism into the vhyp mechanism.  Methods are added
to vhyp for the basic operations the core hash MMU code needs: map_hptes()
and unmap_hptes() for reading the HPT, store_hpte() for updating it and
hpt_mask() to retrieve its size.

To match this, the pseries machine now sets these vhyp fields in its
existing vhyp class, rather than reaching into the cpu object to set the
external_htab field.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
2017-03-01 11:23:39 +11:00
David Gibson
36778660d7 target/ppc: Eliminate htab_base and htab_mask variables
CPUPPCState includes fields htab_base and htab_mask which store the base
address (GPA) and size (as a mask) of the guest's hashed page table (HPT).
These are set when the SDR1 register is updated.

Keeping these in sync with the SDR1 is actually a little bit fiddly, and
probably not useful for performance, since keeping them expands the size of
CPUPPCState.  It also makes some upcoming changes harder to implement.

This patch removes these fields, in favour of calculating them directly
from the SDR1 contents when necessary.
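
A sketch of deriving both values on demand (masks as in the SDR1 description above; the "+ 18 - 7" reflects a 256KiB minimum HPT and 128-byte PTEGs, and is illustrative):

    #include <stdint.h>

    /* recompute base and hash mask from SDR1 when needed, instead of
     * caching htab_base/htab_mask in CPUPPCState */
    static uint64_t hpt_base(uint64_t sdr1)
    {
        return sdr1 & 0x0FFFFFFFFFFC0000ULL;            /* HTABORG */
    }

    static uint64_t hpt_mask(uint64_t sdr1)
    {
        unsigned htabsize = sdr1 & 0x1F;                /* HTABSIZE */

        return (1ULL << (htabsize + 18 - 7)) - 1;
    }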

This does make a change to the behaviour of attempting to write a bad value
(invalid HPT size) to the SDR1 with an mtspr instruction.  Previously, the
bad value would be stored in SDR1 and could be retrieved with a later
mfspr, but the HPT size as used by the softmmu would be, clamped to the
allowed values.  Now, writing a bad value is treated as a no-op.  An error
message is printed in both new and old versions.

I'm not sure which behaviour, if either, matches real hardware.  I don't
think it matters that much, since it's pretty clear that if an OS writes
a bad value to SDR1, it's not going to boot.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
2017-03-01 11:23:39 +11:00
David Gibson
7222b94a83 target/ppc: Cleanup HPTE accessors for 64-bit hash MMU
Accesses to the hashed page table (HPT) are complicated by the fact that
the HPT could be in one of three places:
   1) Within guest memory - when we're emulating a full guest CPU at the
      hardware level (e.g. powernv, mac99, g3beige)
   2) Within qemu, but outside guest memory - when we're emulating user and
      supervisor instructions within TCG, but instead of emulating
      the CPU's hypervisor mode, we just emulate a hypervisor's behaviour
      (pseries in TCG or KVM-PR)
   3) Within the host kernel - a pseries machine using KVM-HV
      acceleration.  Mostly accesses to the HPT are handled by KVM,
      but there are a few cases where qemu needs to access it via a
      special fd for the purpose.

In order to batch accesses to the fd in case (3), we use a somewhat awkward
ppc_hash64_start_access() / ppc_hash64_stop_access() pair, which for case
(3) reads / releases several HPTEs from the kernel as a batch (usually a
whole PTEG).  For cases (1) & (2) it just returns an address value.  The
actual HPTE load helpers then need to interpret the returned token
differently in the 3 cases.

This patch keeps the same basic structure, but simplifies the details.
First start_access() / stop_access() are renamed to map_hptes() and
unmap_hptes() to make their operation more obvious.  Second, map_hptes()
now always returns a qemu pointer, which can always be used in the same way
by the load_hpte() helpers.  In case (1) it comes from address_space_map(),
in case (2) directly from qemu's HPT buffer, and in case (3) from a
temporary buffer read from the KVM fd.

While we're at it, make things a bit more consistent in terms of types and
variable names: avoid variables named 'index' (it shadows index(3) which
can lead to confusing results), use 'hwaddr ptex' for HPTE indices and
uint64_t for each of the HPTE words, use ptex throughout the call stack
instead of pte_offset in some places (we still need that at the bottom
layer, but nowhere else).

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-01 11:23:39 +11:00
David Gibson
7d6250e3d1 target/ppc: SDR1 is a hypervisor resource
At present the SDR1 register - the base of the system's hashed page table
(HPT) - is represented as an SPR with supervisor read and write permission.
However, on CPUs which have a hypervisor mode, the SDR1 is a hypervisor
only resource.  Change the permission checking on the SPR to reflect this.

Now that this is done, we don't need to check for an external HPT executing
mtsdr1: an external HPT only applies when we're emulating the behaviour of
a hypervisor, rather than modelling the CPU's hypervisor mode internally,
so if we're permitted to execute mtsdr1, we don't have an external HPT.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
2017-03-01 11:23:39 +11:00
David Gibson
b7b0b1f13a target/ppc: Merge cpu_ppc_set_vhyp() with cpu_ppc_set_papr()
cpu_ppc_set_papr() sets up various aspects of CPU state for use with PAPR
paravirtualized guests.  However, it doesn't set the virtual hypervisor,
so callers must also call cpu_ppc_set_vhyp() so that PAPR hypercalls are
handled properly.  This is a bit silly, so fold setting the virtual
hypervisor into cpu_ppc_set_papr().

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
2017-03-01 11:23:39 +11:00
David Gibson
1ad9f0a464 target/ppc: Fix KVM-HV HPTE accessors
When a 'pseries' guest is running with KVM-HV, the guest's hashed page
table (HPT) is stored within the host kernel, so it is not directly
accessible to qemu.  Most of the time, qemu doesn't need to access it:
we're using the hardware MMU, and KVM itself implements the guest
hypercalls for manipulating the HPT.

However, qemu does need access to the in-KVM HPT to implement
get_phys_page_debug() for the benefit of the gdbstub, and maybe for
other debug operations.

To allow this, 7c43bca "target-ppc: Fix page table lookup with kvm
enabled" added kvmppc_hash64_read_pteg() to target/ppc/kvm.c to read
in a batch of HPTEs from the KVM table.  Unfortunately, there are a
couple of problems with this:

First, the name of the function implies it always reads a whole PTEG
from the HPT, but in fact in some cases it's used to grab individual
HPTEs (which ends up pulling 8 HPTEs, not aligned to a PTEG from the
kernel).

Second, and more importantly, the code to read the HPTEs from KVM is
simply wrong, in general.  The data from the fd that KVM provides is
designed mostly for compact migration rather than this sort of one-off
access, and so needs some decoding for this purpose.  The current code
will work in some cases, but if there are invalid HPTEs then it will
not get sane results.

This patch rewrites the HPTE reading function to have a simpler
interface (just read n HPTEs into a caller-provided buffer), and to
correctly decode the stream from the kernel.

For consistency we also clean up the similar function for altering
HPTEs within KVM (introduced in c138593 "target-ppc: Update
ppc_hash64_store_hpte to support updating in-kernel htab").

Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-01 11:23:39 +11:00
Nikunj A Dadhania
f32899de97 target/ppc: introduce helper_update_ov_legacy
Removes duplicate code and will be useful for consolidating flags

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-01 11:23:39 +11:00
Nikunj A Dadhania
1bd33d0d7c target/ppc: optimize gen_write_xer()
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-01 11:23:38 +11:00
Nikunj A Dadhania
00b7078831 target/ppc: move cpu_[read, write]_xer to cpu.c
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-01 11:23:38 +11:00
Vijaya Kumar K
d3a3e52962 target-arm: Add GICv3CPUState in CPUARMState struct
Add a gicv3state void pointer to the CPUARMState struct
to store the GICv3CPUState.

For use cases like CPU reset, we need to reset the
GICv3CPUState of the CPU. In such a scenario, this pointer
comes in handy.
Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@cavium.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1487850673-26455-5-git-send-email-vijay.kilari@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-28 17:10:00 +00:00
Peter Maydell
7d1730b7d9 trivial patches for 2017-02-28
-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQEcBAABCAAGBQJYtRwrAAoJEHAbT2saaT5ZQSQIAKWIXrxhIGO6hGEDc50YL6x6
 tQMOnPQOulLtS76rGDAZrJwc47wqpXUtBCuevgwwqbxraLHF4LRnMf0I+xSR+lTt
 PF9vmgDgB4BVDpSTqphjaCBccXPYPqXzUtYaDcT6xePy8aB+/40nqsnby5hf+BXT
 zNpZZrn23papmftS3LnZ5j/lKNIsIlS/v5WIy8xNK0pBTKx4W1ZzDWrYq8crqW+v
 NqQSoVbNOEHOt1+C+nEX6gxUnY6rJXAVB0ICT0fSY9NRhFjPeu1Fx6EtCRNaluXm
 zOZ7t4kTjpB7IcHy3lqDTaWV6VVwBFUym5pfwjRLcge4ln+a5O454+/i8mxerfo=
 =jvkw
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/mjt/tags/trivial-patches-fetch' into staging

trivial patches for 2017-02-28

# gpg: Signature made Tue 28 Feb 2017 06:43:55 GMT
# gpg:                using RSA key 0x701B4F6B1A693E59
# gpg: Good signature from "Michael Tokarev <mjt@tls.msk.ru>"
# gpg:                 aka "Michael Tokarev <mjt@corpit.ru>"
# gpg:                 aka "Michael Tokarev <mjt@debian.org>"
# Primary key fingerprint: 6EE1 95D1 886E 8FFB 810D  4324 457C E0A0 8044 65C5
#      Subkey fingerprint: 7B73 BAD6 8BE7 A2C2 8931  4B22 701B 4F6B 1A69 3E59

* remotes/mjt/tags/trivial-patches-fetch:
  syscall: fixed mincore(2) not failing with ENOMEM
  hw/acpi/tco.c: fix tco timer stop
  lm32: milkymist-tmu2: fix a third integer overflow
  qemu-options.hx: add missing id=chr0 chardev argument in vhost-user example
  Update copyright year
  tests/prom-env: Enable the test for the sun4u machine, too
  cadence_gem: Remove unused parameter debug message
  register: fix incorrect read mask
  ide: remove undefined behavior in ide-test
  CODING_STYLE: Mention preferred comment form
  hw/core/register: Mark the device with cannot_instantiate_with_device_add_yet
  hw/core/or-irq: Mark the device with cannot_instantiate_with_device_add_yet
  softfloat: Use correct type in float64_to_uint64_round_to_zero()
  target/s390x: Fix typo

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-28 16:22:41 +00:00
Peter Maydell
1bbe5dc66b target-arm queue:
* raspi2: implement RNG module
  * raspi2: implement new SD card controller (but don't wire it up)
  * sdhci: bugfixes for block transfers
  * virt: fix cpu object reference leak
  * Add missing fp_access_check() to aarch64 crypto instructions
  * cputlb: Don't assume do_unassigned_access() never returns
  * virt: Add a user option to disallow ITS instantiation
  * i.MX timers: fix reset handling
  * ARMv7M NVIC: rewrite to fix broken priority handling and masking
  * exynos: Fix proper mapping of CPUs by providing real cluster ID
  * exynos: Fix Linux kernel division by zero for PLLs
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIcBAABCAAGBQJYtW/TAAoJEDwlJe0UNgzezv4P/3j+WOVgVlNL8AQ3RFEzzzz4
 IszdrQIFcZ5ICT3MDgH/JMjkpj/C13eGo9eiIFlOvVjtsLlneW10frEB6SGP4ype
 KpFDHji0cm9MT7gdbgbWbextGU8w7xWV43JmSmEuOxkF/r64u/Ap3CXudB58A+Rv
 NvbJMHkkR5Q0MIDA4EkOCLn/Ihh78sd99p8+EV3Gu89KiiB4xRf9D3k/O+Sdh58L
 yvPNat0tjJolzZkAUf6RieFN1F7oBXazR13+E8fDy5OTr25K+S7mehBwSJtQ7dGo
 VjhR7eMJdyyzi+l+OezQFCUmZI9pENcDdhspSl2mOkPRrQi4gwjEszPcmcNhCNGQ
 mguQjk7f5KHtLDDzL1HFr+4sKZdoptXZC18JupjN9oCHJvMq4MDHJaUH0bwrHals
 GhE7cM3aNg8ItJu694ruMLY13Z0+B+TmSLFktRYrjJe3qJEfOQE4EKWXXUZaEe5j
 L13HPP4nInAUU7kvpuepiYHiR4zBTTgEqRBVdQ/qCkLSuO/EH2TbT9u6pifAtI1S
 OkBidnbatWflUwLMMa6jt7ZUx+yDsH7y7C1WxmytnPzKudMMOZ5MxI54yLgEEFTs
 SoelwzfSZb2PlOw3h3UwyRDz3CehkDMUMqzIoqF7Wn/UVb6GHvldq/eVpKOOxtG7
 nVTTYBFuSil0LV/LST4X
 =3qLp
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20170228' into staging

target-arm queue:
 * raspi2: implement RNG module
 * raspi2: implement new SD card controller (but don't wire it up)
 * sdhci: bugfixes for block transfers
 * virt: fix cpu object reference leak
 * Add missing fp_access_check() to aarch64 crypto instructions
 * cputlb: Don't assume do_unassigned_access() never returns
 * virt: Add a user option to disallow ITS instantiation
 * i.MX timers: fix reset handling
 * ARMv7M NVIC: rewrite to fix broken priority handling and masking
 * exynos: Fix proper mapping of CPUs by providing real cluster ID
 * exynos: Fix Linux kernel division by zero for PLLs

# gpg: Signature made Tue 28 Feb 2017 12:40:51 GMT
# gpg:                using RSA key 0x3C2525ED14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"
# gpg:                 aka "Peter Maydell <pmaydell@gmail.com>"
# gpg:                 aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>"
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83  15CF 3C25 25ED 1436 0CDE

* remotes/pmaydell/tags/pull-target-arm-20170228: (27 commits)
  hw/arm/exynos: Fix proper mapping of CPUs by providing real cluster ID
  hw/arm/exynos: Fix Linux kernel division by zero for PLLs
  bcm2835_sdhost: add bcm2835 sdhost controller
  armv7m: Allow SHCSR writes to change pending and active bits
  armv7m: Raise correct kind of UsageFault for attempts to execute ARM code
  armv7m: Check exception return consistency
  armv7m: Extract "exception taken" code into functions
  armv7m: VECTCLRACTIVE and VECTRESET are UNPREDICTABLE
  armv7m: Simpler and faster exception start
  armv7m: Remove unused armv7m_nvic_acknowledge_irq() return value
  armv7m: Escalate exceptions to HardFault if necessary
  arm: gic: Remove references to NVIC
  armv7m: Fix condition check for taking exceptions
  armv7m: Rewrite NVIC to not use any GIC code
  armv7m: Implement reading and writing of PRIGROUP
  armv7m: Rename nvic_state to NVICState
  ARM i.MX timers: fix reset handling
  hw/arm/virt: Add a user option to disallow ITS instantiation
  cputlb: Don't assume do_unassigned_access() never returns
  Add missing fp_access_check() to aarch64 crypto instructions
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-28 14:50:17 +00:00
Peter Maydell
a57aaa4e74 Enable MTTCG for Alpha guest
-----BEGIN PGP SIGNATURE-----
 
 iQEcBAABAgAGBQJYtMelAAoJEK0ScMxN0CebUFYH/29LDqebPb3WvvuRweQzd79f
 dLY0V397u4rs13/ICPJ8AZphZrMUd//INEmfy5c5guH23Yf9gqdKyf7ItlmLRsnM
 3xIokeRdsuYX4f1Ja09UAaT8fRK2J8KCBWv3nv4/wdPY81ssbACeWLlW+FB/HoRL
 PBvAHjMdu66mVGmg0Sp+lEAAzc1Dw6aL909U90CscLRTaUJrERJ+5Lre4RnnTpqK
 WOoq5XI3y5eun8MfCg70vSU7hg3XRnvYBd1oHb3EZwpb4B4jwrpcKq1L7kRnh08k
 b08jF9Z4aPcCWBInVT/G6RHpIA8wH7U4Y6DHJ5JSRjm+g717VrTXWlDSQr2qdQc=
 =Mpr0
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/rth/tags/pull-axp-20170228' into staging

Enable MTTCG for Alpha guest

# gpg: Signature made Tue 28 Feb 2017 00:43:17 GMT
# gpg:                using RSA key 0xAD1270CC4DD0279B
# gpg: Good signature from "Richard Henderson <rth7680@gmail.com>"
# gpg:                 aka "Richard Henderson <rth@redhat.com>"
# gpg:                 aka "Richard Henderson <rth@twiddle.net>"
# Primary key fingerprint: 9CB1 8DDA F8E8 49AD 2AFC  16A4 AD12 70CC 4DD0 279B

* remotes/rth/tags/pull-axp-20170228:
  target/alpha: Enable MTTCG by default

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-28 13:01:50 +00:00
Peter Maydell
e13886e3a7 armv7m: Raise correct kind of UsageFault for attempts to execute ARM code
M profile doesn't implement ARM, and the architecturally required
behaviour for attempts to execute with the Thumb bit clear is to
generate a UsageFault with the CFSR INVSTATE bit set.  We were
incorrectly implementing this as generating an UNDEFINSTR UsageFault;
fix this.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2017-02-28 12:08:19 +00:00
Peter Maydell
aa488fe3bb armv7m: Check exception return consistency
Implement the exception return consistency checks
described in the v7M pseudocode ExceptionReturn().

Inspired by a patch from Michael Davidsaver's series, but
this is a reimplementation from scratch based on the
ARM ARM pseudocode.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2017-02-28 12:08:19 +00:00
Peter Maydell
39ae2474e3 armv7m: Extract "exception taken" code into functions
Extract the code from the tail end of arm_v7m_do_interrupt() which
enters the exception handler into a pair of utility functions
v7m_exception_taken() and v7m_push_stack(), which correspond roughly
to the pseudocode PushStack() and ExceptionTaken().

This also requires us to move the arm_v7m_load_vector() utility
routine up so we can call it.

Handling illegal exception returns has some cases where we want to
take a UsageFault either on an existing stack frame or with a new
stack frame but with a specific LR value, so we want to be able to
call these without having to go via arm_v7m_cpu_do_interrupt().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2017-02-28 12:08:18 +00:00
Michael Davidsaver
a25dc805e2 armv7m: Simpler and faster exception start
All the places in armv7m_cpu_do_interrupt() which pend an
exception in the NVIC are doing so for synchronous
exceptions. We know that we will always take some
exception in this case, so we can just acknowledge it
immediately, rather than returning and then immediately
being called again because the NVIC has raised its outbound
IRQ line.

Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
[PMM: tweaked commit message; added DEBUG to the set of
exceptions we handle immediately, since it is synchronous
when it results from the BKPT instruction]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2017-02-28 12:08:18 +00:00
Peter Maydell
a5d8235545 armv7m: Remove unused armv7m_nvic_acknowledge_irq() return value
Having armv7m_nvic_acknowledge_irq() return the new value of
env->v7m.exception and its one caller assign the return value
back to env->v7m.exception is pointless. Just make the return
type void instead.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2017-02-28 12:08:18 +00:00
Michael Davidsaver
a73c98e159 armv7m: Escalate exceptions to HardFault if necessary
The v7M exception architecture requires that if a synchronous
exception cannot be taken immediately (because it is disabled
or at too low a priority) then it should be escalated to
HardFault (and the HardFault exception is then taken).
Implement this escalation logic.
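
A sketch of the escalation decision (the real logic lives in the NVIC code and operates on its exception state; the HardFault number is as in v7M, but treat the shape as illustrative):

    #include <stdbool.h>

    #define ARMV7M_EXCP_HARD 3   /* HardFault, fixed priority -1 */

    static int maybe_escalate(int exc, bool enabled, int prio, int running_prio)
    {
        /* a synchronous exception that is disabled, or whose priority is not
         * better than the current execution priority, becomes HardFault */
        if (!enabled || prio >= running_prio) {
            return ARMV7M_EXCP_HARD;
        }
        return exc;
    }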

Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
[PMM: extracted from another patch]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2017-02-28 12:08:17 +00:00
Peter Maydell
7ecdaa4a96 armv7m: Fix condition check for taking exceptions
The M profile condition for when we can take a pending exception or
interrupt is not the same as that for A/R profile.  The code
originally copied from the A/R profile version of the
cpu_exec_interrupt function only worked by chance for the
very simple case of exceptions being masked by PRIMASK.
Replace it with a call to a function in the NVIC code that
correctly compares the priority of the pending exception
against the current execution priority of the CPU.

[Michael Davidsaver's patchset had a patch to do something
similar but the implementation ended up being a rewrite.]

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2017-02-28 12:08:17 +00:00
Nick Reilly
a4f5c5b723 Add missing fp_access_check() to aarch64 crypto instructions
The aarch64 crypto instructions for AES and SHA are missing the
check for if the FPU is enabled.

Signed-off-by: Nick Reilly <nreilly@blackberry.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-28 12:08:15 +00:00
Stefan Weil
8dc52350f9 target/s390x: Fix typo
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2017-02-28 09:03:38 +03:00
Richard Henderson
5ee4f3c2c7 target/alpha: Enable MTTCG by default
Alpha has a weak memory ordering and issues all of the required barriers.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-02-28 11:41:46 +11:00
Pranith Kumar
1c1df0198b linux-user: Add signal handling support for x86_64
Note that x86_64 has only _rt signal handlers. This implementation
attempts to share code with the x86_32 implementation.

CC: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Allan Wirth <awirth@akamai.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20170226165345.8757-1-bobby.prani@gmail.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
2017-02-27 23:10:02 +01:00
Eduardo Habkost
b8097deb35 i386: Improve query-cpu-model-expansion full mode
This keeps the same results on type=static expansion, but makes
type=full expansion return every single QOM property on the CPU
object that has a different value from the "base" CPU model,
plus all the CPU feature flag properties.

Cc: Jiri Denemark <jdenemar@redhat.com>
Message-Id: <20170222190029.17243-4-ehabkost@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2017-02-27 13:23:35 -03:00
Eduardo Habkost
f99fd7ca2a i386: Implement query-cpu-model-expansion QMP command
Implement query-cpu-model-expansion for target-i386.

This should meet all the requirements while being simple. In the
case of static expansion, it will use the new "base" CPU model,
and in the case of full expansion, it will keep the original CPU
model name+props, and append extra properties.

A future follow-up should improve the implementation of
type=full, so that it returns more detailed data, including every
writable QOM property in the CPU object.

Cc: libvir-list@redhat.com
Cc: Jiri Denemark <jdenemar@redhat.com>
Message-Id: <20170222190029.17243-3-ehabkost@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2017-02-27 13:23:31 -03:00
Eduardo Habkost
5adbed3088 i386: Define static "base" CPU model
The query-cpu-model-expansion QMP command needs at least one static
model, to allow the "static" expansion mode to be implemented.
Instead of defining static versions of every CPU model, define a
"base" CPU model that has absolutely no feature flag enabled.

Despite having no CPUID data set at all, "-cpu base" is even a
functional CPU:

* It can boot a Slackware Linux 1.01 image with a Linux 0.99.12
  kernel[1].
* It is even possible to boot[2] a modern Fedora x86_64 guest by
  manually enabling the following CPU features:
  -cpu base,+lm,+msr,+pae,+fpu,+cx8,+cmov,+sse,+sse2,+fxsr

[1] http://www.qemu-advent-calendar.org/2014/#day-1
[2] This is what can be seen in the guest:
    [root@localhost ~]# cat /proc/cpuinfo
    processor       : 0
    vendor_id       : unknown
    cpu family      : 0
    model           : 0
    model name      : 00/00
    stepping        : 0
    physical id     : 0
    siblings        : 1
    core id         : 0
    cpu cores       : 1
    apicid          : 0
    initial apicid  : 0
    fpu             : yes
    fpu_exception   : yes
    cpuid level     : 1
    wp              : yes
    flags           : fpu msr pae cx8 cmov fxsr sse sse2 lm nopl
    bugs            :
    bogomips        : 5832.70
    clflush size    : 64
    cache_alignment : 64
    address sizes   : 36 bits physical, 48 bits virtual
    power management:

    [root@localhost ~]# x86info -v -a
    x86info v1.30.  Dave Jones 2001-2011
    Feedback to <davej@redhat.com>.

    No TSC, MHz calculation cannot be performed.
    Unknown vendor (0)
    MP Table:

    Family: 0 Model: 0 Stepping: 0
    CPU Model (x86info's best guess):

    eax in: 0x00000000, eax = 00000001 ebx = 00000000 ecx = 00000000 edx = 00000000
    eax in: 0x00000001, eax = 00000000 ebx = 00000800 ecx = 00000000 edx = 07008161

    eax in: 0x80000000, eax = 80000001 ebx = 00000000 ecx = 00000000 edx = 00000000
    eax in: 0x80000001, eax = 00000000 ebx = 00000000 ecx = 00000000 edx = 20000000

    Feature flags:
     fpu            Onboard FPU
     msr            Model-Specific Registers
     pae            Physical Address Extensions
     cx8            CMPXCHG8 instruction
     cmov           CMOV instruction
     fxsr           FXSAVE and FXRSTOR instructions
     sse            SSE support
     sse2           SSE2 support

    Long NOPs supported: yes

    Address sizes : 0 bits physical, 0 bits virtual
    0MHz processor (estimate).

     running at an estimated 0MHz
    [root@localhost ~]#

Message-Id: <20170222190029.17243-2-ehabkost@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2017-02-27 13:23:27 -03:00
Eduardo Habkost
0bacd8b304 i386: Don't set CPUClass::cpu_def on "max" model
Host CPUID info is used by the "max" CPU model only in KVM mode.
Move the initialization of CPUID data for "max" from class_init
to instance_init, and don't set CPUClass::cpu_def for "max".

Message-Id: <20170222183919.11928-4-ehabkost@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2017-02-27 13:23:25 -03:00
Eduardo Habkost
6900d1cc8a i386: Make "max" model not use any host CPUID info on TCG
Instead of reporting host CPUID data on "max", use the qemu64 CPU
model as a reference to initialize CPUID
vendor/family/model/stepping/model-id.

Message-Id: <20170222183919.11928-3-ehabkost@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2017-02-27 13:23:23 -03:00
Eduardo Habkost
c62f2630f8 i386: Create "max" CPU model
Rename the existing "host" CPU model to "max", and set it to
kvm_enabled=false. The new "max" CPU model will be able to enable
all features supported by TCG out of the box, because its logic
is based on x86_cpu_get_supported_feature_word(), which already
works with TCG.

A new KVM-specific "host" class was added that simply inherits
everything from "max" except the 'ordering' and 'description'
fields.

Message-Id: <20170222183919.11928-2-ehabkost@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2017-02-27 13:23:21 -03:00
Eduardo Habkost
b8d834a00f i386: Reorganize and document CPUID initialization steps
CPU runnability checks and CPU model expansion have slightly
different requirements. Document the steps involved in loading a
CPU model and realizing a CPU, so their requirements and purpose
are clearly defined.

This patch doesn't change any implementation. It just adds
comments, renames the x86_cpu_load_features() function for clarity
(so it won't be confused with x86_cpu_load_def()), and moves
x86_cpu_filter_features() closer to it.

Message-Id: <20170116211124.29245-2-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2017-02-27 13:23:09 -03:00
Eduardo Habkost
44bd8e5306 i386: Rename X86CPU::host_features to X86CPU::max_features
Rename the field and add a small comment to make its purpose
clearer.

Message-Id: <20170119210449.11991-4-ehabkost@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2017-02-27 13:23:06 -03:00
Eduardo Habkost
f48c883703 i386: Add ordering field to CPUClass
Instead of using kvm_enabled to order the "-cpu help" list, use a
new "ordering" field for that.

Message-Id: <20170119210449.11991-3-ehabkost@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2017-02-27 13:23:03 -03:00
Eduardo Habkost
771a13e90d i386: Unset cannot_destroy_with_object_finalize_yet on "host" model
The class is now safe because the assert(kvm_enabled()) line was
removed by commit e435601058.

Message-Id: <20170119210449.11991-2-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2017-02-27 13:22:58 -03:00
Peter Maydell
28f997a82c This is the MTTCG pull-request as posted yesterday.
-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQEcBAABAgAGBQJYsBZfAAoJEPvQ2wlanipElJ0H+QGoStSPeHrvKu7Q07v4F9zM
 Pvf05gRsaxvXl7UbwmXC4oKhvZf9rVJ6ITk0x/y0WvmK0mHCmNBWkC0nn5UFL5IH
 cdxetLz21Q+Ghpc36tZvqn2HYwRQFoEznge2LdtBDG0TyVA4jwquHU3HCG2D51zi
 BaImI6lYW1e4ejjZHw8cEInSxsj/HJZE4pPas2Tkci+uAnrJroErwBVRRcE/y/Tn
 aupl9TJFs2JdyJFNDibIm0kjB+i+jvCiLgYjbKZ/dR/+GZt73TtiBk/q9ZOFjdmT
 7YFPI3F46QbGHoZahtzh0Xt7WMj94SlQgQ9OJ3zmNMfpXrze6Yc78xo/nbQ33U0=
 =wR0/
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/stsquad/tags/pull-mttcg-240217-1' into staging

This is the MTTCG pull-request as posted yesterday.

# gpg: Signature made Fri 24 Feb 2017 11:17:51 GMT
# gpg:                using RSA key 0xFBD0DB095A9E2A44
# gpg: Good signature from "Alex Bennée (Master Work Key) <alex.bennee@linaro.org>"
# Primary key fingerprint: 6685 AE99 E751 67BC AFC8  DF35 FBD0 DB09 5A9E 2A44

* remotes/stsquad/tags/pull-mttcg-240217-1: (24 commits)
  tcg: enable MTTCG by default for ARM on x86 hosts
  hw/misc/imx6_src: defer clearing of SRC_SCR reset bits
  target-arm: ensure all cross vCPUs TLB flushes complete
  target-arm: don't generate WFE/YIELD calls for MTTCG
  target-arm/powerctl: defer cpu reset work to CPU context
  cputlb: introduce tlb_flush_*_all_cpus[_synced]
  cputlb: atomically update tlb fields used by tlb_reset_dirty
  cputlb: add tlb_flush_by_mmuidx async routines
  cputlb and arm/sparc targets: convert mmuidx flushes from varg to bitmap
  cputlb: introduce tlb_flush_* async work.
  cputlb: tweak qemu_ram_addr_from_host_nofail reporting
  cputlb: add assert_cpu_is_self checks
  tcg: handle EXCP_ATOMIC exception for system emulation
  tcg: enable thread-per-vCPU
  tcg: enable tb_lock() for SoftMMU
  tcg: remove global exit_request
  tcg: drop global lock during TCG code execution
  tcg: rename tcg_current_cpu to tcg_current_rr_cpu
  tcg: add kick timer for single-threaded vCPU emulation
  tcg: add options for enabling MTTCG
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-25 18:43:52 +00:00
Peter Maydell
2421f381dc A selection of s390x patches:
- cleanups, fixes and improvements
 - program check loop detection (useful with the corresponding kernel
   patch)
 - wire up virtio-crypto for ccw
 - and finally support many virtqueues for virtio-ccw
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIcBAABAgAGBQJYr/qXAAoJEN7Pa5PG8C+vzSYP+wR/mA4wmXh0Jj8zzxeJaeQa
 UNNwR7Ege4KjdL0DKXw/Uy2S/H2qGZvD4cb3JLIwp15BSilmcxRGS+v18ooBuRtx
 X8+W2peH/Ldk4SAGbfXyRR4EXom4ZmmHtgdoWYPUhgq2BimH1vBcY06uHOkJ4zTP
 vBfpmvKL53SjjHF6b9NmlprSDrn8cbQgqqxTWc0YL0aEcFTcxpBfr98dCfrNfk8b
 k6f324hY+3YC7rdvLAsBx3tNjDmEoEh4aidGyECKOWiy2Bt2hQ/ZhxVUk7cFV30M
 F0mttRJSxuBY9xYfmuxTKkm2ttIH0BiOhFmE5+YEj7ot+iqBslyYHR2prkZC66v+
 wQ9Ynx8ys0ec/IkHx2uIt8iOdAiq/K5gJkyjEw6ekg70OOGrTtyv5y6G9FOc4B4W
 ms7eUnhIgr5rEv/oQgCSgCUlAUm6MWW/BtffqmKZ7M2/7l8Y3T1U4f9383sKZtIT
 7xr/AtV30yH695r+bllEljIjgMU5EWUDpA2kBCC6tzJQ0KYSoICSGloxKNEK3Z6X
 EsYby7YjLArTlvsLJ4y2k/BPzcM4IYJX9NDjCmMRpR2I46Nb35uwR73EZx6JS6fw
 dKmdx0qSZbaMbmwIJZzVz4kzG9z6gePkCvaEmPa99ZgnaZ0igm4y5W6Q8fLEn1Jz
 zy277Wciim5mnZWuJAbl
 =uM3N
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/cohuck/tags/s390x-20170224' into staging

A selection of s390x patches:
- cleanups, fixes and improvements
- program check loop detection (useful with the corresponding kernel
  patch)
- wire up virtio-crypto for ccw
- and finally support many virtqueues for virtio-ccw

# gpg: Signature made Fri 24 Feb 2017 09:19:19 GMT
# gpg:                using RSA key 0xDECF6B93C6F02FAF
# gpg: Good signature from "Cornelia Huck <huckc@linux.vnet.ibm.com>"
# gpg:                 aka "Cornelia Huck <cornelia.huck@de.ibm.com>"
# Primary key fingerprint: C3D0 D66D C362 4FF6 A8C0  18CE DECF 6B93 C6F0 2FAF

* remotes/cohuck/tags/s390x-20170224:
  s390x/css: handle format-0 TIC CCW correctly
  s390x/arch_dump: pass cpuid into notes sections
  s390x/arch_dump: use proper note name and note size
  virtio-ccw: support VIRTIO_QUEUE_MAX virtqueues
  s390x: bump ADAPTER_ROUTES_MAX_GSI
  virtio-ccw: check flic->adapter_routes_max_batch
  s390x: add property adapter_routes_max_batch
  virtio-ccw: Check the number of vqs in CCW_CMD_SET_IND
  virtio-ccw: add virtio-crypto-ccw device
  virtio-ccw: handle virtio 1 only devices
  s390x/flic: fail migration on source already
  s390x/kvm: detect some program check loops
  s390x/s390-virtio: get rid of DPRINTF

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-25 17:48:49 +00:00
Peter Maydell
d7941f4eed option cutils: Fix and clean up number conversions
-----BEGIN PGP SIGNATURE-----
 
 iQIcBAABAgAGBQJYrzrdAAoJEDhwtADrkYZTjJIP/0pjdtvdCdIq855DSgHO6jdm
 xM3W1DngCHt78LPXKutqRX8le2tFuY7uXQG2PqTXtirni5WKB9MB4wdOzeucrXvW
 NIpb8AC1GM0ToOwQvrc8T840QfdGFFE8X2DIAPPYcBivQWAlRi6tqbGY8KwvWFYC
 vSjUIJbIu8mOUt49Q5LfhaH2UPJlcNlED/oUmDLorz9Rz6E75EHP5pKPrr7Y0JkX
 4wjxSE18yM4z0wXwIfRiW5zKDs7JuvHwLSVX75ZwpS5GpWgGsyc4EeyAQb5Xi97s
 /S3a4SFj/kOvDeuw8uy7BkuPknYF2hHD6XUkoIWr2hiyjatjeAM9USO18N2HwoIg
 rO3Icw1HADC8Z/RrXjKecaLAA+gKNFGAMgmrYH0GImg6QfC24VQdHNXm3v+i1i27
 1p6AnSdtzG26DPk8pJODn2UbxngoeHUy0PPjra7ZEMDK7Igkw8x0oFBL887jPxkd
 oyBA5S4dOUCYXo86hYjjrhRXrQ7j5oIx45myX8jX6RoQLEq1XrUiVN59t1WXW5Vv
 EiZc7YB/QSPMZe/q0DiWvif4DH+YLAqD9k7OeQ2HjINBf5sKqIn3lGO0zloL2W+8
 mQ9+HLryP4j+PRbtpQavnAYg6Mo7Qzg9y5ZkABAj0fdyPJFZ8ZZ7gH0+atF1CjlP
 Vvv+VdyWpzKfFam752Hk
 =o8eA
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/armbru/tags/pull-util-2017-02-23' into staging

option cutils: Fix and clean up number conversions

# gpg: Signature made Thu 23 Feb 2017 19:41:17 GMT
# gpg:                using RSA key 0x3870B400EB918653
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>"
# gpg:                 aka "Markus Armbruster <armbru@pond.sub.org>"
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867  4E5F 3870 B400 EB91 8653

* remotes/armbru/tags/pull-util-2017-02-23: (24 commits)
  option: Fix checking of sizes for overflow and trailing crap
  util/cutils: Change qemu_strtosz*() from int64_t to uint64_t
  util/cutils: Return qemu_strtosz*() error and value separately
  util/cutils: Let qemu_strtosz*() optionally reject trailing crap
  qemu-img: Wrap cvtnum() around qemu_strtosz()
  test-cutils: Drop suffix from test_qemu_strtosz_simple()
  test-cutils: Use qemu_strtosz() more often
  util/cutils: Drop QEMU_STRTOSZ_DEFSUFFIX_* macros
  util/cutils: New qemu_strtosz()
  util/cutils: Rename qemu_strtosz() to qemu_strtosz_MiB()
  util/cutils: New qemu_strtosz_metric()
  test-cutils: Cover qemu_strtosz() around range limits
  test-cutils: Cover qemu_strtosz() with trailing crap
  test-cutils: Cover qemu_strtosz() invalid input
  test-cutils: Add missing qemu_strtosz()... endptr checks
  option: Fix to reject invalid and overflowing numbers
  util/cutils: Clean up control flow around qemu_strtol() a bit
  util/cutils: Clean up variable names around qemu_strtol()
  util/cutils: Rename qemu_strtoll(), qemu_strtoull()
  util/cutils: Rewrite documentation of qemu_strtol() & friends
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-24 18:34:27 +00:00
Alex Bennée
ca759f9e38 tcg: enable MTTCG by default for ARM on x86 hosts
This enables the multi-threaded system emulation by default for ARMv7
and ARMv8 guests using the x86_64 TCG backend. This is because on the
guest side:

  - The ARM translate.c/translate-64.c have been converted to
    - use MTTCG safe atomic primitives
    - emit the appropriate barrier ops
  - The ARM machine has been updated to
    - hold the BQL when modifying shared cross-vCPU state
    - defer powerctl changes to async safe work

All the host backends support the barrier and atomic primitives but
need to provide same-or-better support for normal load/store
operations.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Acked-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Pranith Kumar <bobby.prani@gmail.com>
Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
2017-02-24 10:32:46 +00:00
Alex Bennée
a67cf27727 target-arm: ensure all cross vCPUs TLB flushes complete
Previously, flushes on other vCPUs would only get serviced when they
exited their TranslationBlocks. While this isn't overly problematic, it
violates the semantics of a TLB flush from the point of view of the
source vCPU.

To solve this we call the cputlb *_all_cpus_synced() functions to do
the flushes, which ensures all flushes are completed by the time the
vCPU next schedules its own work. As the TLB instructions are modelled
as CP writes, the TB ends at this point, meaning cpu->exit_request will
be checked before the next instruction is executed.

Deferring the work until the architectural sync point is a possible
future optimisation.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-24 10:32:46 +00:00
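A minimal sketch of the pattern described above, assuming the
tlb_flush_page_by_mmuidx_all_cpus_synced() helper from this series; the
register handler name, address extraction and MMU indexes are
illustrative, not the literal target-arm diff:

    static void tlbi_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                  uint64_t value)
    {
        CPUState *cs = CPU(arm_env_get_cpu(env));
        uint64_t pageaddr = value & TARGET_PAGE_MASK;   /* simplified */

        /* Queue the flush on all vCPUs; it is guaranteed to have completed
         * before this vCPU schedules its own work again.  The CP write ends
         * the TB, so cpu->exit_request is checked before the next insn. */
        tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr,
                                                 (1 << ARMMMUIdx_S1SE1) |
                                                 (1 << ARMMMUIdx_S1SE0));
    }
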
Alex Bennée
c22edfebff target-arm: don't generate WFE/YIELD calls for MTTCG
The WFE and YIELD instructions are really only hints and in TCG's case
they were useful to move the scheduling on from one vCPU to the next. In
the parallel context (MTTCG) this just causes an unnecessary cpu_exit
and contention of the BQL.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-24 10:32:46 +00:00
Alex Bennée
062ba099e0 target-arm/powerctl: defer cpu reset work to CPU context
When switching a new vCPU on, we want to complete a bunch of the setup
work before we start scheduling the vCPU thread. To do this cleanly we
defer vCPU setup to async work which will run in the vCPU's execution
context as the thread is woken up. The scheduling of the work will kick
the vCPU awake.

This avoids potential races in MTTCG system emulation.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-24 10:32:46 +00:00
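A hedged sketch of the deferral pattern the commit describes, built on
the generic async_run_on_cpu() API; the struct, field and function names
are illustrative and not the actual target-arm/powerctl code:

    struct CpuOnInfo {
        uint64_t entry;        /* entry point requested for the new vCPU */
        uint64_t context_id;
    };

    static void arm_set_cpu_on_async_work(CPUState *target, run_on_cpu_data data)
    {
        struct CpuOnInfo *info = data.host_ptr;

        /* Runs in the target vCPU's own thread, with the BQL held. */
        cpu_reset(target);
        /* ... program PC, exception level, context id from *info ... */
        g_free(info);
    }

    void arm_set_cpu_on(CPUState *target, uint64_t entry, uint64_t context_id)
    {
        struct CpuOnInfo *info = g_new(struct CpuOnInfo, 1);

        info->entry = entry;
        info->context_id = context_id;
        /* Queuing the work also kicks the vCPU awake, as the commit notes. */
        async_run_on_cpu(target, arm_set_cpu_on_async_work,
                         RUN_ON_CPU_HOST_PTR(info));
    }
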
Alex Bennée
0336cbf853 cputlb and arm/sparc targets: convert mmuidx flushes from varg to bitmap
While the varargs approach was flexible, the original MTTCG work ended
up having to munge the bits into a bitmap so the data could be used in
deferred work helpers. Instead of hiding that in cputlb, we push the
change to the API and make it take a bitmap of MMU indexes instead.

For ARM, some of the resulting flushes end up being quite long, so to
aid readability I've tended to move the index shifting to a new line so
all the bits being OR-ed together line up nicely, for example:

    tlb_flush_page_by_mmuidx(other_cs, pageaddr,
                             (1 << ARMMMUIdx_S1SE1) |
                             (1 << ARMMMUIdx_S1SE0));

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
[AT: SPARC parts only]
Reviewed-by: Artyom Tarasenko <atar4qemu@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
[PM: ARM parts only]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-24 10:32:46 +00:00
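For comparison, a hedged before/after sketch of the same call; the old
varargs form with a -1 terminator is recalled from the pre-MTTCG API and
paraphrased here, not quoted from the patch:

    /* Before: varargs list of MMU indexes, terminated by -1 */
    tlb_flush_page_by_mmuidx(other_cs, pageaddr,
                             ARMMMUIdx_S1SE1, ARMMMUIdx_S1SE0, -1);

    /* After: a single bitmap argument, as in the example above */
    tlb_flush_page_by_mmuidx(other_cs, pageaddr,
                             (1 << ARMMMUIdx_S1SE1) |
                             (1 << ARMMMUIdx_S1SE0));
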
Jan Kiszka
8d04fb55de tcg: drop global lock during TCG code execution
This finally allows TCG to benefit from the iothread introduction: Drop
the global mutex while running pure TCG CPU code. Reacquire the lock
when entering MMIO or PIO emulation, or when leaving the TCG loop.

We have to revert a few optimizations for the current TCG threading
model, namely kicking the TCG thread in qemu_mutex_lock_iothread and not
kicking it in qemu_cpu_kick. We also need to disable RAM block
reordering until we have a more efficient locking mechanism at hand.

Still, a Linux x86 UP guest and my Musicpal ARM model boot fine here.
These numbers demonstrate where we gain something:

20338 jan       20   0  331m  75m 6904 R   99  0.9   0:50.95 qemu-system-arm
20337 jan       20   0  331m  75m 6904 S   20  0.9   0:26.50 qemu-system-arm

The guest CPU was fully loaded, but the iothread could still run mostly
independently on a second core. Without the patch we don't get beyond

32206 jan       20   0  330m  73m 7036 R   82  0.9   1:06.00 qemu-system-arm
32204 jan       20   0  330m  73m 7036 S   21  0.9   0:17.03 qemu-system-arm

We don't benefit significantly, though, when the guest is not fully
loading a host CPU.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-Id: <1439220437-23957-10-git-send-email-fred.konrad@greensocs.com>
[FK: Rebase, fix qemu_devices_reset deadlock, rm address_space_* mutex]
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
[EGC: fixed iothread lock for cpu-exec IRQ handling]
Signed-off-by: Emilio G. Cota <cota@braap.org>
[AJB: -smp single-threaded fix, clean commit msg, BQL fixes]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
[PM: target-arm changes]
Acked-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-24 10:32:45 +00:00
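A rough sketch of the locking pattern the commit describes, not the
literal cpus.c/cputlb.c diff: pure TCG execution runs outside the BQL,
and the slow path that reaches device (MMIO/PIO) emulation takes it back.

    static int tcg_cpu_exec(CPUState *cpu)
    {
        int ret;

        qemu_mutex_unlock_iothread();   /* run guest code without the BQL */
        ret = cpu_exec(cpu);
        qemu_mutex_lock_iothread();     /* reacquire before touching devices */
        return ret;
    }

    /* In the memory slow path (conceptually): */
    static void io_write_example(CPUState *cpu, hwaddr addr,
                                 uint64_t val, unsigned size)
    {
        bool locked = false;

        if (!qemu_mutex_iothread_locked()) {
            locked = true;
            qemu_mutex_lock_iothread();
        }
        /* ... dispatch the MMIO write to the device model ... */
        if (locked) {
            qemu_mutex_unlock_iothread();
        }
    }
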
Peter Maydell
5522924718 ppc patch queue for 2017-02-22
This pull request has:
    * Yet more POWER9 instruction implementations
    * Some extensions to the softfloat code which are necessary for
      some of those instructions
    * Some preliminary patches in preparation for POWER9 softmmu
      implementation
    * Igor Mammedov's cleanups to unify hotplug cpu handling across
      architectures
    * Assorted bugfixes
 
 The softfloat and cpu hotplug changes aren't entirely ppc specific (in
 fact the hotplug stuff contains some pc specific patches).  However
 they're included here because ppc is one of the main beneficiaries,
 and the series depends on some ppc specific patches.
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2
 
 iQIcBAABCAAGBQJYrS/bAAoJEGw4ysog2bOSJ3MP/AjoTGTP5MHPwWBZKAxpEtie
 vEXudVbOelr3QV06vHMH4YHVAncuzt9Hmz/RDgs5Uynp4vLdmEo5IdiFP9PMjrFg
 oMAndku9icU8PG+XNF5pNrKy10n6k8dVRBR/19UxnRWMuxywOZO208WkICF/6kDK
 IpFT96MubqbReLcVhdl2N8d2rP7/lRQmz6aPxhRLFBuAe8iheAQLq/QeZLIZaWEJ
 i4mPWVu/CDYP9nMAgv56MW0yY5p2o5MCh+f80+7jvKXZBoeo83KOTaZeZbGb/byr
 rCfyLTR24tj6WUGRvzyB+FJ8rbWKcox4UCx17239gAjXtLxhlYaQDo28S5gwinpQ
 b/CaEgb8x2kl97tZT/M1mamr7PdFxachCA20oizguwFJ9oeukAPUvkVBpEtVYK8K
 a+VrRHxVJwSi/ZD3N6WRZMXR4D+Oc8DcXoEzMu4CFtIzQ/WJroZCa4JCcdv4N1nw
 9u1m+C2QbQ9sGBtTSGCy0KZyT3sZHoFT6aD4zpkV7s3BJKk+AXSLRpL4z8FP2sDB
 Wh/Qk5q06P1pPZzvuU9QJmrpIE9EFcOQW4IQhyViut+BXzBlp7cWxeGcPM5PuJ7V
 6FcMSchZeVOiLi9Y51csluDrecTKIQ3yFEgLW7j50Lg/WqmdwlwkcW39MzlWgjgQ
 OIoVgvGmGovPTGIIYyY9
 =bsJJ
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-2.9-20170222' into staging

ppc patch queue for 2017-02-22

This pull request has:
   * Yet more POWER9 instruction implementations
   * Some extensions to the softfloat code which are necessary for
     some of those instructions
   * Some preliminary patches in preparation for POWER9 softmmu
     implementation
   * Igor Mammedov's cleanups to unify hotplug cpu handling across
     architectures
   * Assorted bugfixes

The softfloat and cpu hotplug changes aren't entirely ppc specific (in
fact the hotplug stuff contains some pc specific patches).  However
they're included here because ppc is one of the main beneficiaries,
and the series depends on some ppc specific patches.

# gpg: Signature made Wed 22 Feb 2017 06:29:47 GMT
# gpg:                using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg:                 aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg:                 aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg:                 aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E  87DC 6C38 CACA 20D9 B392

* remotes/dgibson/tags/ppc-for-2.9-20170222: (43 commits)
  hw/ppc/ppc405_uc.c: Avoid integer overflows
  hw/ppc/spapr: Check for valid page size when hot plugging memory
  target-ppc: fix Book-E TLB matching
  hw/net/spapr_llan: 6 byte mac address device tree entry
  machine: replace query_hotpluggable_cpus() callback with has_hotpluggable_cpus flag
  machine: unify [pc_|spapr_]query_hotpluggable_cpus() callbacks
  spapr: reuse machine->possible_cpus instead of cores[]
  change CPUArchId.cpu type to Object*
  pc: pass apic_id to pc_find_cpu_slot() directly so lookup could be done without CPU object
  pc: calculate topology only once when possible_cpus is initialised
  pc: move pcms->possible_cpus init out of pc_cpus_init()
  machine: move possible_cpus to MachineState
  hw/pci-host/prep: Do not use hw_error() in realize function
  target/ppc/POWER9: Direct all instr and data storage interrupts to the hypv
  target/ppc/POWER9: Adapt LPCR handling for POWER9
  target/ppc/POWER9: Add ISAv3.00 MMU definition
  target/ppc: Fix LPCR DPFD mask define
  target-ppc: Add xscvqpudz and xscvqpuwz instructions
  target-ppc: Implement round to odd variants of quad FP instructions
  softfloat: Add float128_to_uint32_round_to_zero()
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-24 10:13:57 +00:00
Christian Borntraeger
f738f296ea s390x/arch_dump: pass cpuid into notes sections
we need to pass the cpuid into the pid field of the notes
section, otherwise the notes for different CPUs all have 0:

e.g. objdump -h shows:
old:
  5 .reg-s390-prefix/0 00000004  0000000000000000  0000000000000000
  6 .reg-s390-prefix 00000004  0000000000000000  0000000000000000
 21 .reg-s390-prefix/0 00000004  0000000000000000  0000000000000000
new:
  5 .reg-s390-prefix/1 00000004  0000000000000000  0000000000000000
  6 .reg-s390-prefix 00000004  0000000000000000  0000000000000000
 21 .reg-s390-prefix/2 00000004  0000000000000000  0000000000000000

Reported-by: Philipp Rudo <prudo@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
2017-02-24 10:15:18 +01:00
Christian Borntraeger
5f706fdc16 s390x/arch_dump: use proper note name and note size
In binutils/libbfd (bfd/elf.c) it is enforced that all s390-specific
ELF notes such as NT_S390_PREFIX or NT_S390_CTRS have "LINUX"
specified as the note name and that the namesz is 6. Otherwise the
notes are ignored.

QEMU currently uses "CORE" for these notes. Up to now this has
not been a real problem because the dump analysis tool "crash"
does handle that. But it will break all programs that use libbfd
for processing ELF notes.

So fix this and use "LINUX" for all s390 specific notes to comply
with libbfd. Also set the correct namesz.

Reported-by: Philipp Rudo <prudo@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
2017-02-24 10:15:18 +01:00
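Illustratively, an s390 note header after the fix looks roughly like the
sketch below: "LINUX" with namesz 6 (five characters plus the NUL), so
that libbfd accepts the note. It uses the generic Elf64_Nhdr layout; the
QEMU-internal helpers and struct names differ.

    #include <elf.h>
    #include <string.h>

    static size_t write_s390_note_header(void *buf, uint32_t descsz,
                                         uint32_t type)
    {
        Elf64_Nhdr *nhdr = buf;

        nhdr->n_namesz = 6;        /* strlen("LINUX") + 1; was "CORE"/5 */
        nhdr->n_descsz = descsz;
        nhdr->n_type   = type;     /* e.g. NT_S390_PREFIX, NT_S390_CTRS */
        memcpy(nhdr + 1, "LINUX", 6);
        /* name is padded to a 4-byte boundary, i.e. 8 bytes here */
        return sizeof(*nhdr) + 8;
    }
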
Christian Borntraeger
409422cd83 s390x/kvm: detect some program check loops
Sometimes (e.g. early boot) a guest is broken in such ways that it loops
100% delivering operation exceptions (illegal operation) but the pgm new
PSW is not set properly. This will result in code being read from
address zero, which usually contains another illegal op. Let's detect
this case and put the guest in crashed state. Instead of only detecting
this for address zero, apply a heuristic that will work for any program
check new PSW, so that it will also reach the crashed state if you
provide some random ELF file to the -kernel option.
We do not want guest problem state to be able to trigger a guest panic,
e.g. by faulting on an address that is the same as the program check
new PSW, so we check for the problem state bit being off.

With this we
a: get rid of CPU consumption of such broken guests
b: keep the program old PSW. This allows us to find out the original illegal
   operation, making debugging such early boot issues much easier than
   with single stepping

This relies on the kernel using a similar heuristic and passing such
operation exceptions to user space.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
2017-02-24 10:15:18 +01:00
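A hedged sketch of the loop-detection heuristic described above; the
field and helper names approximate target/s390x and are not the literal
patch:

    static bool detect_pgm_check_loop(CPUS390XState *env,
                                      uint64_t pgm_new_psw_mask,
                                      uint64_t pgm_new_psw_addr)
    {
        /* Guest problem state must never be able to drive the guest into
         * the "crashed" state, so require the problem-state bit to be off. */
        if (pgm_new_psw_mask & PSW_MASK_PSTATE) {
            return false;
        }
        /* If delivering the operation exception would restart execution at
         * the very address that just faulted (address 0 in the classic
         * early-boot case), the guest can only loop: report it as crashed. */
        if (pgm_new_psw_addr == env->psw.addr) {
            qemu_system_guest_panicked(NULL);   /* no extra crash info */
            return true;
        }
        return false;
    }
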
Markus Armbruster
f46bfdbfc8 util/cutils: Change qemu_strtosz*() from int64_t to uint64_t
This will permit its use in parse_option_size().

Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com> (maintainer:X86)
Cc: Kevin Wolf <kwolf@redhat.com> (supporter:Block layer core)
Cc: Max Reitz <mreitz@redhat.com> (supporter:Block layer core)
Cc: qemu-block@nongnu.org (open list:Block layer core)
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <1487708048-2131-24-git-send-email-armbru@redhat.com>
2017-02-23 20:35:36 +01:00
Markus Armbruster
f17fd4fdf0 util/cutils: Return qemu_strtosz*() error and value separately
This makes qemu_strtosz(), qemu_strtosz_mebi() and
qemu_strtosz_metric() similar to qemu_strtoi64(), except negative
values are rejected.

Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com> (maintainer:X86)
Cc: Kevin Wolf <kwolf@redhat.com> (supporter:Block layer core)
Cc: Max Reitz <mreitz@redhat.com> (supporter:Block layer core)
Cc: qemu-block@nongnu.org (open list:Block layer core)
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <1487708048-2131-23-git-send-email-armbru@redhat.com>
2017-02-23 20:35:36 +01:00
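A usage sketch of the reworked interface, with the prototype paraphrased
from this and the neighbouring commits (value via out-parameter, negative
errno-style return, the series' final uint64_t type), not copied from
cutils.h:

    static uint64_t parse_size_or_die(const char *str)
    {
        uint64_t value;
        int err = qemu_strtosz(str, NULL, &value);  /* NULL @endptr: no junk */

        if (err < 0) {
            /* e.g. syntax error, overflow, or a (rejected) negative size */
            error_report("invalid size '%s'", str);
            exit(1);
        }
        return value;
    }
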
Markus Armbruster
4fcdf65ae2 util/cutils: Let qemu_strtosz*() optionally reject trailing crap
Change the qemu_strtosz() & friends to return -EINVAL when @endptr is
null and the conversion doesn't consume the string completely.
Matches how qemu_strtol() & friends work.

Only test_qemu_strtosz_simple() passes a null @endptr.  No functional
change there, because its conversion consumes the string.

Simplify callers that use @endptr only to fail when it doesn't point
to '\0' to pass a null @endptr instead.

Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com> (maintainer:X86)
Cc: Kevin Wolf <kwolf@redhat.com> (supporter:Block layer core)
Cc: Max Reitz <mreitz@redhat.com> (supporter:Block layer core)
Cc: qemu-block@nongnu.org (open list:Block layer core)
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <1487708048-2131-22-git-send-email-armbru@redhat.com>
2017-02-23 20:35:36 +01:00
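A before/after sketch of the @endptr convention this change introduces,
using the series' final prototype for illustration:

    /* Caller-checked form: inspect @endptr for leftovers. */
    static bool parse_size_checking_end(const char *s, uint64_t *value)
    {
        const char *end;
        return qemu_strtosz(s, &end, value) >= 0 && *end == '\0';
    }

    /* Function-checked form: with a null @endptr, qemu_strtosz() itself
     * returns -EINVAL when the string is not consumed completely. */
    static bool parse_size_simplified(const char *s, uint64_t *value)
    {
        return qemu_strtosz(s, NULL, value) >= 0;
    }
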
Markus Armbruster
d2734d2629 util/cutils: New qemu_strtosz_metric()
To parse numbers with metric suffixes, we use

    qemu_strtosz_suffix_unit(nptr, &eptr, QEMU_STRTOSZ_DEFSUFFIX_B, 1000)

Capture this in a new function for legibility:

    qemu_strtosz_metric(nptr, &eptr)

Replace test_qemu_strtosz_suffix_unit() by test_qemu_strtosz_metric().

Rename qemu_strtosz_suffix_unit() to do_strtosz() and give it internal
linkage.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <1487708048-2131-15-git-send-email-armbru@redhat.com>
2017-02-23 20:35:36 +01:00
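A minimal sketch of the wrapper being introduced; the do_strtosz()
parameter order and the return type are inferred from the old call
quoted above, not copied from the patch:

    int64_t qemu_strtosz_metric(const char *nptr, char **end)
    {
        /* default suffix 'B', factor 1000 between units (k, M, G, ...) */
        return do_strtosz(nptr, end, QEMU_STRTOSZ_DEFSUFFIX_B, 1000);
    }
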
Max Filippov
cb3825b9af target/xtensa: add two missing headers to core import script
Include qemu/osdep.h and qemu-common.h at the beginning of imported
xtensa core source file.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2017-02-23 10:50:56 -08:00
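The boilerplate the import script now prepends to each generated core
file looks roughly like this (a sketch of the generated output, not of
the script change itself):

    #include "qemu/osdep.h"
    #include "qemu-common.h"
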
Max Filippov
b68755c142 target/xtensa: sim: instantiate local memories
Xtensa core may have a number of RAM and ROM areas configured. Record
their size and location from the core configuration overlay and
instantiate them as RAM regions in the SIM machine.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2017-02-23 10:30:41 -08:00
Peter Maydell
10f25e4844 MIPS patches 2017-02-22
Changes:
 * Add MIPS Boston board support
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1.4.5 (GNU/Linux)
 
 iQIVAwUAWKzWYCI464bV95fCAQLAvA//bCOMN1dcTA9CZHPtHVR0zm7Mm83ree6c
 QglgHh3S0mGUbC3MqzO44hsd851QWigAt0zjI7xn4/rtSPS82Bn3+EXDXt+l3+Ll
 A58rx3Yjkz9XlLrhTSXlt0Pfltc6G7f1Migmq6scQsJflf5YLXQVyXVuhAwhiFSm
 4U0UYXSMtRhU6PtgcjNu+qGn+1qn3DlaiwZcKMUh1X2f5PeCbzIJ47ju4hmFoQUi
 qARYU1Ia5dnHUi/8Df1dL+XJo3tafVdmIjfYP0SOuyRVxCs6GIAylV5h92HHgF8w
 rTmTlnD4XU2Ef+x4oBvejNLwL0mvW/pYo+VMWuV96kXUxRU5KeDjaQk1tULpxZcu
 xcKAqN6xcWllO2v1YomHSkzudby1FPECDPNsLkbvaGBG6mIOPgoAUyHZQT7MxWXN
 dhXa3cV770FYck6X4QU2LD9kSJ+8L0ZXGQj0EheQwU6ofJ66DW0//pzpVgAdXjJ4
 BssXKdEnHlkpWJw/PWCxt2fUPKzmgRQ9Q6kR5PuzWHNXXBMzoc+ztOR5NfrcrUUT
 EpAEHLjoEjzT2wIwUmZccszZiLVT1fe5dGwCI+OgIUTkeq0tQMRL5nVuNufOiFgc
 9KpfujUA4LlvgdEEA9HhEfhTQLjM4O0xI82mXN+R68w3mvsJ3ygKl6rd+7XNmmwx
 nRvHWith+EI=
 =sAzw
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/yongbok/tags/mips-20170222' into staging

MIPS patches 2017-02-22

Changes:
* Add MIPS Boston board support

# gpg: Signature made Wed 22 Feb 2017 00:08:00 GMT
# gpg:                using RSA key 0x2238EB86D5F797C2
# gpg: Good signature from "Yongbok Kim <yongbok.kim@imgtec.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg:          It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 8600 4CF5 3415 A5D9 4CFA  2B5C 2238 EB86 D5F7 97C2

* remotes/yongbok/tags/mips-20170222:
  hw/mips: MIPS Boston board support
  hw: xilinx-pcie: Add support for Xilinx AXI PCIe Controller
  loader: Support Flattened Image Trees (FIT images)
  dtc: Update requirement to v1.4.2
  target-mips: Provide function to test if a CPU supports an ISA
  hw/mips_gic: Update pin state on mask changes
  hw/mips_gictimer: provide API for retrieving frequency
  hw/mips_cmgcr: allow GCR base to be moved

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-23 09:59:40 +00:00
Thomas Huth
df58713396 hw/ppc/spapr: Check for valid page size when hot plugging memory
On POWER, the valid page sizes that the guest can use are bound
to the CPU and not to the memory region. QEMU already has some
fancy logic to find out the right maximum memory size to tell
it to the guest during boot (see getrampagesize() in the file
target/ppc/kvm.c for more information).
However, once we're booted and the guest is using huge pages
already, it is currently still possible to hot-plug memory regions
that do not support huge pages - which of course does not work
on POWER, since the guest thinks that it is possible to use huge
pages everywhere. The KVM_RUN ioctl will then abort with -EFAULT,
QEMU spills out a not very helpful error message together with
a register dump and the user is annoyed that the VM unexpectedly
died.
To avoid this situation, we should check the page size of hot-plugged
DIMMs to see whether it is possible to use it in the current VM.
If it does not fit, we can print out a better error message and
refuse to add it, so that the VM does not die unexpectedly and the
user has a second chance to plug a DIMM with a matching memory
backend instead.

Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1419466
Signed-off-by: Thomas Huth <thuth@redhat.com>
[dwg: Fix a build error on 32-bit builds with KVM]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-02-22 14:28:53 +11:00
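A hedged sketch of the check described above; the helper name and the
plumbing are illustrative, only getrampagesize() is taken from the
commit message:

    static void spapr_memory_pre_plug_check(uint64_t backend_pagesize,
                                            Error **errp)
    {
        /* Largest page size advertised to the guest at boot; the commit
         * points at getrampagesize() in target/ppc/kvm.c for this value. */
        uint64_t boot_pagesize = getrampagesize();

        if (backend_pagesize < boot_pagesize) {
            error_setg(errp,
                       "Memory backend has an unsupported page size "
                       "(need at least 0x%" PRIx64 ", got 0x%" PRIx64 ")",
                       boot_pagesize, backend_pagesize);
        }
    }
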
Alex Zuepke
0a4c774086 target-ppc: fix Book-E TLB matching
The Book-E TLB matching process should bail out early when a TLB
entry matches, but the access permissions are wrong. The CPU
will then raise a DSI error instead of a Data TLB error, as
described for TLB matching in Freescale and IBM documents.

Signed-off-by: Alex Zuepke <azu@sysgo.de>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-02-22 14:28:53 +11:00
Suraj Jitindar Singh
5065908361 target/ppc/POWER9: Direct all instr and data storage interrupts to the hypv
The vpm0 bit was removed from the LPCR in POWER9; this bit controlled
whether ISI and DSI interrupts were directed to the hypervisor or the
partition. These interrupts now go to the hypervisor regardless, thus
it is no longer necessary to check the vpm0 bit in the LPCR.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-02-22 11:28:28 +11:00
Suraj Jitindar Singh
18aa49ecf4 target/ppc/POWER9: Adapt LPCR handling for POWER9
The logical partitioning control register controls a thread's operation
based on the partition it is currently executing in. Add new definitions and
update the mask used when writing to the LPCR based on the POWER9 spec.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-02-22 11:28:28 +11:00
Suraj Jitindar Singh
86cf1e9fe8 target/ppc/POWER9: Add ISAv3.00 MMU definition
POWER9 processors implement the mmu as defined in version 3.00 of the ISA.

Add a definition for this mmu model and set the POWER9 cpu model to use
this mmu model.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-02-22 11:28:28 +11:00
Suraj Jitindar Singh
7659ca1a3e target/ppc: Fix LPCR DPFD mask define
The DPFD field in the LPCR is 3 bits wide. This has always been defined
as 0x3 << shift, which indicates a 2-bit field; this is incorrect.
Correct this.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-02-22 11:28:28 +11:00
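Illustratively, the change amounts to widening the mask; the shift value
shown is a placeholder, not taken from cpu.h:

    #define LPCR_DPFD_SHIFT  52
    /* before: only covered 2 of the 3 bits */
    #define LPCR_DPFD_OLD    (0x3ULL << LPCR_DPFD_SHIFT)
    /* after: the full 3-bit field */
    #define LPCR_DPFD        (0x7ULL << LPCR_DPFD_SHIFT)
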
Bharata B Rao
e0aee726bf target-ppc: Add xscvqpudz and xscvqpuwz instructions
xscvqpudz: VSX Scalar truncate & Convert Quad-Precision format to
           Unsigned Doubleword format
xscvqpuwz: VSX Scalar truncate & Convert Quad-Precision format to
           Unsigned Word format

Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-02-22 11:28:28 +11:00
Bharata B Rao
a8d411abac target-ppc: Implement round to odd variants of quad FP instructions
xsaddqpo:  VSX Scalar Add Quad-Precision using round to Odd
xsmulqo:   VSX Scalar Multiply Quad-Precision using round to Odd
xsdivqpo:  VSX Scalar Divide Quad-Precision using round to Odd
xscvqpdpo: VSX Scalar round & Convert Quad-Precision format to
           Double-Precision format using round to Odd
xssqrtqpo: VSX Scalar Square Root Quad-Precision using round to Odd
xssubqpo:  VSX Scalar Subtract Quad-Precision using round to Odd

In addition, fix the invalid bitmask in the instruction encoding
of xssqrtqp[o].

Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
CC: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-02-22 11:28:28 +11:00
Nikunj A Dadhania
c09cec683b target-ppc: add wait instruction
Use the available wait instruction implementation.

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-02-22 11:28:28 +11:00
Nikunj A Dadhania
62d897ca8b target-ppc: add slbsync implementation
slbsync: SLB Synchronize

The instruction provides an ordering function for the effects of all
slbieg instructions executed by the thread executing the slbsync
instruction.

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-02-22 11:28:28 +11:00
Nikunj A Dadhania
a63f1dfc62 target-ppc: add slbieg instruction
slbieg: SLB Invalidate Entry Global

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-02-22 11:28:28 +11:00
Nikunj A Dadhania
80b8c1ee05 target-ppc: generate exception for copy/paste
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-02-22 11:28:28 +11:00