Remove support for versions of the CPU state prior to 11
which is the version used in qemu 0.12 - you'd be pretty
lucky if you got a migration stream to work from anything
that old anyway. This doesn't affect the machine type
definition in any way.
My main reason for doing this is the hack for sysenter_esp/eip
that uses .get/.put in state versions less than 7 (that's
prior to somewhere before 0.10).
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20170405190024.27581-4-dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Convert the fpreg save/restore to use VMSTATE_ macros rather than
.get/.put.
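For readers who have not seen both styles, the general shape of such a
conversion is sketched below; the struct and field names are placeholders,
not the actual x86 FP register fields:

    /* before: a custom VMStateInfo with hand-written .get/.put callbacks
     * after:  a declarative field list using the stock VMSTATE_ macros  */
    static const VMStateDescription vmstate_example = {
        .name = "example",
        .version_id = 1,
        .minimum_version_id = 1,
        .fields = (VMStateField[]) {
            VMSTATE_UINT64(some_field, SomeState),      /* placeholder names */
            VMSTATE_UINT64_ARRAY(regs, SomeState, 8),   /* placeholder names */
            VMSTATE_END_OF_LIST()
        }
    };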
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20170405190024.27581-3-dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Long long ago, we used to support storing the x86 FP registers in
a 64bit format.
Then c31da136a0 in v0.14-rc0 removed
the last support for writing that in the migration format.
Even before that, it was only used if you had softfloat disabled
(i.e. !USE_X86LDOUBLE) so in practice use of it in even earlier
qemu is unlikely for most users.
Kill it off, it's complicated, and possibly broken.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20170405190024.27581-2-dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
It will allow switching from cpu_index to property-based
NUMA mapping in follow-up patches.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Message-Id: <1494415802-227633-5-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
It will allow switching from cpu_index to property-based
NUMA mapping in follow-up patches.
PS:
This patch changes the default value of CPUState::numa_node from 0
to CPU_UNSET_NUMA_NODE_ID. The only place for x86 that
would be affected is the monitor's 'info numa' command, which
uses that field. However, the legacy 0 value is still preserved
by pc_cpu_pre_plug() in this patch if user/numa.c hasn't
set it explicitly, so there is no change in behavior.
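A minimal sketch of that legacy-preserving check in pc_cpu_pre_plug();
only the CPU_UNSET_NUMA_NODE_ID comparison is taken from the description
above, the surrounding code is assumed:

    /* keep the old default of node 0 for 'info numa' when neither the
     * user nor numa.c assigned a node explicitly */
    if (cs->numa_node == CPU_UNSET_NUMA_NODE_ID) {
        cs->numa_node = 0;
    }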
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1494415802-227633-4-git-send-email-imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1493816238-33120-3-git-send-email-imammedo@redhat.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Change the nested if statements into a flat format, to make
it clearer what validation / capping is being performed on
different CPUID index values.
NB this changes behaviour when "index > env->cpuid_xlevel2".
This won't have any guest-visible effect because there is
no CPUID[0xC0000001] feature supported by TCG, and KVM code
will never call cpu_x86_cpuid() with such an index value.
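As a rough illustration of the flat format (a sketch only, not the exact
code; the capping policy shown here is simplified):

    uint32_t limit;

    /* pick the limit for the requested CPUID range ... */
    if (index >= 0xC0000000) {
        limit = env->cpuid_xlevel2;
    } else if (index >= 0x80000000) {
        limit = env->cpuid_xlevel;
    } else {
        limit = env->cpuid_level;
    }
    /* ... and cap out-of-range leaves in one obvious place */
    if (index > limit) {
        index = limit;
    }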
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-Id: <20170509132736.10071-2-berrange@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
When running with KVM, we update the "family" CPU alias to point
to the right host CPU type, so that it is, for example, possible to
use "-cpu POWER8" on a POWER8NVL host. However, the function for
printing the list of available CPU models is called earlier than
the KVM setup code, so the output of "-cpu help" is wrong in that
case. Since it would be somewhat ugly anyway to have different
help texts depending on whether "-enable-kvm" has been specified
or not, we should better always print the same text, so fix this
issue by printing "alias for preferred XXX CPU" instead.
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
POWER9 DD1 silicon has some bugs which mean it a) isn't really compliant
with ISA v3.00 and b) requires a number of special workarounds in the
kernel.
At the moment, qemu isn't aware of DD1. For TCG we don't really want it to
be (why bother emulating buggy silicon). But with KVM, the guest does need
to be aware of DD1 so it can apply the necessary workarounds.
Meanwhile, the feature negotiation between qemu and the guest strongly
favours architected compatibility modes over "raw" CPU modes. In combination
with the above, this means the guest sees architected POWER9 mode, and
doesn't apply the DD1 workarounds. Well, unless it has yet another
workaround to partially ignore what qemu tells it.
This patch addresses this by disabling support for compatibility modes when
using KVM on a POWER9 DD1 host.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
ISA V3.00 introduced a new radix mmu model. Implement the page fault
handler for this so we can run a tcg guest in radix mode and perform
address translation correctly.
In real mode (mmu turned off) addresses are masked to remove the top
4 bits and then are subject to partition scoped translation, since we only
support pseries at this stage it is only necessary to perform the masking
and then we're done.
In virtual mode (mmu turned on) address translation is performed as
follows:
1. Use the quadrant to determine the fully qualified address.
The fully qualified address is defined as the combination of the effective
address, the effective logical partition id (LPID) and the effective
process id (PID). Based on the quadrant (EA63:62) we set the pid and lpid
like so:
quadrant 0: lpid = LPIDR, pid = PIDR
quadrant 1: HV only (not allowed in pseries)
quadrant 2: HV only (not allowed in pseries)
quadrant 3: lpid = LPIDR, pid = 0
If we can't get the fully qualified address we raise a segment interrupt.
2. Find the guest radix tree
We ask the virtual hypervisor for the partition table which was registered
with H_REGISTER_PROC_TBL which points us to the process table in guest
memory. We then index this table by pid to get the process table entry
which points us to the appropriate radix tree to translate the address.
If the process table isn't big enough to contain an entry for the current
pid then we raise a storage interrupt.
3. Walk the radix tree
Next we walk the radix tree where each level is a table of page directory
entries indexed by some number of bits from the effective address, where
the number of bits is determined by the table size. We continue to walk
the tree (while entries are valid and the table is of minimum size) until
we reach a table of page table entries, indicated by having the leaf bit
set. The appropriate pte is then checked for sufficient access permissions,
the reference and change bits are updated and the real address is
calculated from the real page number bits of the pte and the low bits of
the effective address.
If we can't find an entry or can't access the entry because of permissions
then we raise a storage interrupt.
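To make step 1 above concrete, here is a hedged sketch of the quadrant
selection; the SPR macro names and the interrupt-raising helper are
assumptions, not the exact code:

    /* quadrant = EA[63:62]; quadrants 1 and 2 are HV-only and therefore
     * invalid under pseries */
    switch (eaddr >> 62) {
    case 0:
        lpid = env->spr[SPR_LPIDR];
        pid = env->spr[SPR_BOOKS_PID];
        break;
    case 3:
        lpid = env->spr[SPR_LPIDR];
        pid = 0;
        break;
    default:
        raise_segment_interrupt(cpu, eaddr);   /* hypothetical helper */
        return 1;
    }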
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
[dwg: Add missing parentheses to macro]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The tlbie[l] instructions are used to invalidate TLB entries used to cache
address translations.
In ISAv3.00 (POWER9) more fields were added to the tlbie[l] instructions
which were previously invalid. We don't care about any of these new fields
since we just invalidate the whole world anyway but we need to not
cause an illegal instruction exception when the instructions are called.
We also don't want to allow an older processor to have these fields set
since that would be invalid.
Add a new GEN_HANDLER for the ISAv3 instructions with the correct invalid
mask. These will only be generated to a POWER9 processor for now based on
the instruction flag. Also remove the PPC_MEM_TLBIE instruction flag from
the POWER9 processor definition to ensure the old tlbie isn't generated.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The Guest Translation Shootdown Enable (GTSE) bit in the Logical Partition
Control Register (LPCR) can be set to enable a guest to use the tlbie
instruction directly to invalidate translations.
When the GTSE bit is set then the tlbie instruction is supervisor
privileged, otherwise it is hypervisor privileged.
Add a guest translation shootdown enable (gtse) field to the disassembly
context and use this to check the correct privilege level at code
generation time.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
When an atomic operation is not supported, exit_atomic is called
and we stop the world and execute the atomic operation. This results
in the following call chain:
tcg_gen_atomic_cmpxchg_tl()
-> gen_helper_exit_atomic()
-> HELPER(exit_atomic)
-> cpu_loop_exit_atomic() -> EXCP_ATOMIC
-> qemu_tcg_cpu_thread_fn() => case EXCP_ATOMIC
-> cpu_exec_step_atomic()
-> cpu_step_atomic()
-> cc->cpu_exec_enter() = ppc_cpu_exec_enter()
Sets env->reserve_addr = -1;
But by the time it returns, the reservation has been erased and the code
fails; this continues forever and the lock is never taken.
Instead, set this in powerpc_excp().
Now that ppc_cpu_exec_enter() doesn't have anything meaningful to do,
let us get rid of the function.
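The new placement amounts to something like the following (a sketch;
the surrounding exception code is omitted):

    /* in powerpc_excp(), on exception entry: */
    env->reserve_addr = -1;   /* drop any outstanding lwarx/ldarx reservation */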
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This enables the multi-threaded system emulation by default for PPC64
guests using the x86_64 TCG back-end.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Emulating LL/SC with cmpxchg is not correct, since it can suffer from
the ABA problem. However, portable parallel code is written assuming
only cmpxchg which means that in practice this is a viable alternative.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We now have macros in place to make it less verbose to add a scalar
to QDict and QList, so use them.
Patch created mechanically via:
spatch --sp-file scripts/coccinelle/qobject.cocci \
--macro-file scripts/cocci-macro-file.h --dir . --in-place
then touched up manually to fix a couple of '?:' back to original
spacing, as well as avoiding a long line in monitor.c.
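For readers who haven't seen the new helpers, the conversion looks roughly
like this (a representative example, not a hunk from this patch):

    qdict_put(dict, "enabled", qbool_from_bool(true));   /* before */
    qdict_put_bool(dict, "enabled", true);               /* after  */

    qlist_append(list, qstring_from_str("name"));        /* before */
    qlist_append_str(list, "name");                      /* after  */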
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20170427215821.19397-7-eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Merge remote-tracking branch 'bonzini/tags/for-upstream' into staging
A large set of small patches. I have not included yet vhost-user-scsi,
but it'll come in the next pull request.
* use GDB XML register description for x86
* use _Static_assert in QEMU_BUILD_BUG_ON
* add "R:" to MAINTAINERS and get_maintainers
* checkpatch improvements
* dump threading fixes
* first part of vhost-user-scsi support
* QemuMutex tracing
* vmw_pvscsi and megasas fixes
* sgabios module update
* use Rev3 (ACPI 2.0) FADT
* deprecate -hdachs
* improve -accel documentation
* hax fix
* qemu-char GSource bugfix
# gpg: Signature made Fri 05 May 2017 06:10:40 AM EDT
# gpg: using RSA key 0xBFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* bonzini/tags/for-upstream: (21 commits)
vhost-scsi: create a vhost-scsi-common abstraction
libvhost-user: replace vasprintf() to fix build
get_maintainer: add subsystem to reviewer output
get_maintainer: --r (list reviewer) is on by default
get_maintainer: it's '--pattern-depth', not '-pattern-depth'
get_maintainer: Teach get_maintainer.pl about the new "R:" tag
MAINTAINERS: Add "R:" tag for self-appointed reviewers
Fix the -accel parameter and the documentation for 'hax'
dump: Acquire BQL around vm_start() in dump thread
hax: Fix memory mapping de-duplication logic
checkpatch: Disallow glib asserts in main code
trace: add qemu mutex lock and unlock trace events
vmw_pvscsi: check message ring page count at initialisation
sgabios: update for "fix wrong video attrs for int 10h,ah==13h"
scsi: avoid an off-by-one error in megasas_mmio_write
vl: deprecate the "-hdachs" option
use _Static_assert in QEMU_BUILD_BUG_ON
target/i386: Add GDB XML register description support
char: Fix removing wrong GSource that be found by fd_in_tag
hw/i386: Build-time assertion on pc/q35 reset register being identical.
...
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Merge remote-tracking branch 'shorne/tags/pull-or-20170504' into staging
Openrisc Features and Fixes for qemu 2.10
# gpg: Signature made Thu 04 May 2017 01:41:45 AM BST
# gpg: using RSA key 0xC3B31C2D5E6627E4
# gpg: Good signature from "Stafford Horne <shorne@gmail.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: D9C4 7354 AEF8 6C10 3A25 EFF1 C3B3 1C2D 5E66 27E4
* shorne/tags/pull-or-20170504:
target/openrisc: Support non-busy idle state using PMR SPR
target/openrisc: Remove duplicate features property
target/openrisc: Implement full vmstate serialization
migration: Add VMSTATE_STRUCT_2DARRAY()
target/openrisc: implement shadow registers
migration: Add VMSTATE_UINTTL_2DARRAY()
target/openrisc: add numcores and coreid support
target/openrisc: Fixes for memory debugging
target/openrisc: Implement EPH bit
target/openrisc: Implement EVBAR register
MAINTAINERS: Add myself as openrisc maintainer
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
hax_update_mapping() avoids unnecessary and potentially expensive
calls to HAX_VM_IOCTL_SET_RAM by computing the net result (i.e.
effective mapping changes) of each MemoryRegion transaction, with
the help of a linked list of HAXMapping objects.
However, when processing a new mapping that overlaps with an
existing mapping in the list, it fails to handle the case where the
start address of the new mapping is above that of the existing
mapping in the guest physical address space. This happens when QEMU
is launched with "-machine q35 -enable-hax", which involves the
following MemoryRegion transaction for digging the VGA hole:
region_del: 0x00000000->0x08000000 VA 05fa0000 ('pc.ram')
region_add: 0x00000000->0x000a0000 VA 05fa0000 ('pc.ram')
region_add: 0x000a0000->0x000c0000 VA 00000000 ('vga-lowmem')
region_add: 0x000c0000->0x08000000 VA 06060000 ('pc.ram')
where the third MemoryRegion is MMIO and is ignored. The current
de-duplication logic handles the last MemoryRegion incorrectly and
produces the following result:
hax_mapping_dump_list updates:
+ 0x000c0000->0x08000000 VA 0x06060000
- 0x07fe0000->0x08000000 VA 0x0df80000
which is why VGA emulation does not work for Q35.
With this patch, one can see VGA output as Q35 boots up. Note that
Q35 support also requires a change to HAXM kernel module, which is
not available in the current HAXM release (6.1.2).
+ Add a warning if the input MemoryRegion is a ROM device, which is
not supported by HAXM kernel module at this time.
Signed-off-by: Yu Ning <yu.ning@linux.intel.com>
Message-Id: <20170428072723.7036-1-yu.ning@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch implements XML target description support for X86 and X86-64
architectures in the GDB stub, as the way with ARM and PowerPC:
- gdb-xml/32bit-core.xml & gdb-xml/64bit-core.xml: Adding the XML target
description files, these files are picked from GDB source code.
- configure: Define gdb_xml_files for X86 targets.
- target/i386/cpu.c: Define gdb_core_xml_file and gdb_arch_name to add
XML awareness for this architecture, and modify gdb_num_core_regs to
match the number of registers defined in each XML file (sketched below).
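The cpu.c side of this amounts to hooks of roughly the following shape
(the file name, helper name and register count are placeholders, not the
values from the patch):

    /* in the x86 CPU class init (sketch): */
    cc->gdb_arch_name = x86_gdb_arch_name;     /* hypothetical helper name */
    cc->gdb_core_xml_file = "64bit-core.xml";  /* placeholder file name */
    cc->gdb_num_core_regs = 57;                /* placeholder count */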
Signed-off-by: Abdallah Bouassida <abdallah.bouassida@lauterbach.com>
Message-Id: <2b3c8119-1602-28c7-eab4-296593877103@lauterbach.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The OpenRISC architecture has the Power Management Register (PMR)
special purpose register to manage cpu power states. The interesting
modes are:
* Doze Mode (DME) - Stop cpu except timer & pic - wake on interrupt
* Sleep Mode (SME) - Stop cpu and all units - wake on interrupt
* Suspend Mode (SUME) - Stop cpu and all units - wake on reset
The linux kernel will set DME when idle.
This patch implements the PMR SPR and halts the qemu cpu when there is a
change to DME or SME. This means that openrisc qemu no longer pegs
a host cpu at 100%.
In order for this to work we need to kick the CPU when timers are
expired. Update the cpu timer to kick the cpu upon each timer event.
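A hedged sketch of the halt path when DME or SME gets written; the PMR bit
macros are assumptions, the CPUState handling is the generic QEMU idiom:

    /* in the PMR write helper (sketch): */
    if (new_pmr & (PMR_DME | PMR_SME)) {   /* PMR_* names assumed */
        cs->halted = 1;
        cs->exception_index = EXCP_HLT;
        cpu_loop_exit(cs);                 /* sleep until an interrupt kicks us */
    }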
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Stafford Horne <shorne@gmail.com>
The features property has stored the exact same thing as the cpucfgr
spr. Remove the feature enum and property as it is not needed.
In order to preserve the behavior of keeping features across reset, this
patch moves cpucfgr into the non-reset region of the state struct. Since
cpucfgr is read-only, this means we only need to set cpucfgr once
during class init.
Signed-off-by: Stafford Horne <shorne@gmail.com>
Previously serialization did not persist the tlb, timer, pic and other
key state items. This meant snapshotting and restoring a running os
would crash. After adding these I am able to take snapshots of a
running linux os and restore at a later time.
I am currently not trying to maintain compatibility with older versions
as I do not believe this really worked before or anyone used it.
Signed-off-by: Stafford Horne <shorne@gmail.com>
Shadow registers are part of the openrisc spec along with sr[cid], as
part of the fast context switching feature. When exceptions occur,
instead of having to save registers to the stack if enabled the CID will
increment and a new set of registers will be available.
This patch only implements shadow registers which can be used as extra
scratch registers via the mfspr and mtspr if required. This is
implemented in a way where it would be easy to add on the fast context
switching, currently cid is hardcoded to 0.
This is needed for openrisc linux smp kernels to boot correctly.
Signed-off-by: Stafford Horne <shorne@gmail.com>
These are used to identify the processor in an SMP system. They have
already been defined in Verilog cores but are not yet part of the spec;
they will be soon.
The proposal for this is available:
https://openrisc.io/proposals/core-identifier-and-number-of-cores
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Stafford Horne <shorne@gmail.com>
When debugging in gdb you might want to inspect instructions in mapped
pages or in exception vectors like 0x800 etc. This was previously not
possible in qemu since the *get_phys_page_debug() routine only looked
into the data tlb.
Change to fall back to look into instruction tlb and plain physical
pages.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Stafford Horne <shorne@gmail.com>
This takes a small step towards fixing one of the many style problems that
exist in the older ppc code: it removes spaces between a function (or macro)
name and the following '('.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This patch registers mfspr 259 for Book3S and e500 family cores
following this research:
mfspr 259 provides read-only mapped user access to SPRG3 (SPR 275) according to:
- PowerISA 2.02, Book III (documents implementation starting with POWER4+ @ p20)
- IBM PowerPC 970MP RISC Microprocessor User's Manual v2.1, page 48
- Amit Singh: "Mac OS X Internals: A Systems Approach" on 970 and 970FX cores:
He demonstrates mfspr 259 reading TLS data from Mac OS X on G5 on page 588
- NXP documents it in the Core Reference Manuals of: e500, e500mc and e5500
- getcpu() of the 32 & 64-bit Book3S Linux vDSOs use it to read the core number
mfspr 259 does not appear to be implemented in these cores according to:
- 74xx series: MPC7410/MPC7400 and MPC7450 RISC Microprocessor Reference Manuals
- 4xx series: PPC440 Processor User's Manual, Revision 1.09 by AMCC
- 750 series: IBM PowerPC 750CL RISC Microprocessor User's Manual
- e200 series: e200z4 Power Architecture Core Reference Manual
Implementation: gen_spr_usprg3() is called from init_proc_book3s_common()
(covers the 970 and POWER cores) and init_proc_e500() (covers the e500 family)
to register spr_read_ureg() in the same way in which it already provides
the mapped SPR access for SPR_USPRG4-7 in gen_spr_usprgh() for cores
which have the same read-only mapped SPRG register access for SPRG4-7.
Verified using Linux by pinning a thread to a core and checking sched_getcpu()
with qemu-system-ppc64 -M pseries -cpu POWER8 using MTTCG on an x86_64 host.
Signed-off-by: Bernhard Kaindl <bernhard.kaindl@thalesgroup.com>
Reviewed-by: Stefan Resch <stefan.resch@thalesgroup.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The PIDR (process id register) is used to store the id of the currently
running process, which is used to select the process table entry used to
perform address translation. This means that when we write to this register
all the translations in the TLB become outdated as they are for a
previously running process. Thus when this register is written to we need
to invalidate the TLB entries to ensure stale entries aren't used to
perform translation for the new process, which would result in at best
segfaults or alternatively just random memory being accessed.
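Conceptually the write path now does something like the following (a
sketch, not the exact QEMU hook; the SPR macro name is an assumption):

    env->spr[SPR_BOOKS_PID] = val;   /* SPR name assumed */
    tlb_flush(CPU(cpu));             /* cached translations belong to the old PID */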
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
[dwg: Fixed compile error for 32-bit targets]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
gdb refuses to parse QEMU memory dumps because struct PPCElfPrstatus
is the wrong size. Fix it.
Signed-off-by: Anton Blanchard <anton@samba.org>
Fixes: e62fbc54d4 ("target-ppc: dump-guest-memory support")
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Today, the ICPState array of the sPAPR machine is indexed with
'cpu_index' of the CPUState. This numbering of CPUs is internal to
QEMU and the guest only knows about what is exposed in the device
tree, that is the 'cpu_dt_id'. This is why sPAPR uses the helper
xics_get_cpu_index_by_dt_id() to do the mapping in a couple of places.
To provide a more generic XICS layer, we need to abstract the IRQ
'server' number and remove any assumption made on its nature. It
should not be used as a 'cpu_index' for lookups like xics_cpu_setup()
and xics_cpu_destroy() do.
To reach that goal, we choose to introduce a generic 'intc' backlink
under PowerPCCPU, and let the machine core init routine do the
ICPState lookup. The resulting object is passed on to xics_cpu_setup()
which does the store under PowerPCCPU. The IRQ 'server' number in XICS
is now generic. sPAPR uses 'cpu_dt_id' and PowerNV will use 'PIR'
number.
This also has the benefit of simplifying the sPAPR hcall routines
which do not need to do any ICPState lookups anymore.
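The backlink store itself is a one-liner of roughly this shape (a sketch
based on the description above):

    /* in xics_cpu_setup(), after the machine code has looked up the ICP: */
    cpu->intc = OBJECT(icp);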
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The ibm,processor-radix-AP-encodings device tree property of the cpu node
is used to specify the radix mode supported page sizes of the processor
to the guest os. Contained in the top 3 bits of the msb is the actual
page size (AP) encoding associated with the corresponding radix mode
supported page size. Add this property for a TCG guest, note the TCG code
is capable of translating any format so just add the 4 default page sizes.
The ibm,processor-radix-AP-encodings device tree property is defined as:
One to n cells in ascending order of radix mode supported page sizes
encoded as BE ints (32bit on ppc) in the form:
0bxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
- 0bxxx -> AP encoding
- 0byyyyyyyyyyyyyyyyyyyyyyyyyyyyy -> supported page size encoded as a shift
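As an illustration, assuming the standard POWER9 AP assignments
(AP 0 = 4K, 5 = 64K, 1 = 2M, 2 = 1G), the four default entries would look
like this:

    /* AP in the top 3 bits, page shift in the low bits: */
    static const uint32_t radix_AP_encodings[] = {
        0x0000000c,   /* AP=0, 4K  page (shift 12) */
        0xa0000010,   /* AP=5, 64K page (shift 16) */
        0x20000015,   /* AP=1, 2M  page (shift 21) */
        0x4000001e,   /* AP=2, 1G  page (shift 30) */
    };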
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This enables in-kernel handling of H_PUT_TCE_INDIRECT and
H_STUFF_TCE hypercalls. The host kernel support is there since v4.6,
in particular d3695aa4f452
("KVM: PPC: Add support for multiple-TCE hcalls").
H_PUT_TCE is already accelerated and does not need any special enablement.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The H_REGISTER_PROCESS_TABLE H_CALL is used by a guest to indicate to the
hypervisor where in memory its process table is and how translation should
be performed using this process table.
Provide the implementation of this H_CALL for a guest.
We first check for invalid flags, then parse the flags to determine the
operation, and then check the other parameters for valid values based on
the operation (register new table/deregister table/maintain registration).
The process table is then stored in the appropriate location and registered
with the hypervisor (if running under KVM), and the LPCR_[UPRT/GTSE] bits
are updated as required.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
[dwg: Correct missing prototype and uninitialized variable]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Query and cache the value of two new KVM capabilities that indicate
KVM's support for new radix and hash modes of the MMU.
Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Use the new ioctl, KVM_PPC_GET_RMMU_INFO, to fetch radix MMU
information from KVM and present the page encodings in the device tree
under ibm,processor-radix-AP-encodings. This provides page size
information to the guest which is necessary for it to use radix mode.
Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
[dwg: Compile fix for 32-bit targets, style nit fix]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
KVM_CAP_SPAPR_TCE capability allows creating TCE tables in KVM which
allows having in-kernel acceleration for H_PUT_TCE_xxx hypercalls.
However it only supports 32bit DMA windows at zero bus offset.
There is a new KVM_CAP_SPAPR_TCE_64 capability which supports 64bit
window size, variable page size and bus offset.
This makes use of the new capability. The kernel headers are already
updated as the kernel support went in to v4.6.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
On Power8 hosts it is currently theoretically possible for QEMU/KVM-HV guests
to receive an ibm,pa-features property indicating that HTM support is available
when it is not. The situation would occur if the platform firmware of
a Power8 host cleared the HTM bit of the ibm,pa-features property.
QEMU would query KVM for the availability of HTM, which will return no
support, but workaround code in kvm_arch_init_vcpu() would then
re-enable it because KVM_HV is in use and the processor is P8.
This patch adjusts the workaround in kvm_arch_init_vcpu() so that it does not
enable HTM (in the above case) unless the host kernel indicates to the QEMU
process, via the auxiliary vector, that userspace can use HTM (via the HWCAP2
bit PPC_FEATURE2_HTM).
The reason to use the value from the auxiliary vector is that it is
set based only on what the host kernel found in the ibm,pa-features
HTM bit at boot time.
Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
s390_virtio_hypercall can trigger IO events and interrupts, most notably
when using virtio-ccw devices.
Reviewed-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Fixes: 278f5e98c6 ("s390x/misc_helper.c: wrap IO instructions in BQL")
Signed-off-by: Alexander Graf <agraf@suse.de>
According to "CPU Signaling and Response", "Signal-Processor Orders",
the order field is bit position 56-63. Without this, the Linux
guest kernel is sometimes unable to stop emulation and enters
an infinite loop of "XXX unknown sigp: 0xffffffff00000005".
Signed-off-by: Philipp Kern <phil@philkern.de>
Reviewed-by: Thomas Huth <thuth@tuxfamily.org>
[agraf: add comment according to email]
Signed-off-by: Alexander Graf <agraf@suse.de>
Implement the Exception Prefix High (EPH) control bit of the Supervision
Register (SR).
The significant bits (31-12) of the vector offset address for each
exception depend on the setting of the Supervision Register (SR)'s EPH
bit and the Exception Vector Base Address Register (EVBAR).
If SR[EPH] is set, the vector offset is logically ORed with the offset
0xF0000000.
This means if EPH is:
* 0 - Exceptions vectors start at EVBAR
* 1 - Exception vectors start at EVBAR | 0xF0000000
Signed-off-by: Tim 'mithro' Ansell <mithro@mithis.com>
Signed-off-by: Stafford Horne <shorne@gmail.com>
Exception Vector Base Address Register (EVBAR) - This optional register
can be used to apply an offset to the exception vector addresses.
The significant bits (31-12) of the vector offset address for each
exception depend on the setting of the Supervision Register (SR)'s EPH
bit and the Exception Vector Base Address Register (EVBAR).
Its presence is indicated by the EVBARP bit in the CPU Configuration
Register (CPUCFGR).
Signed-off-by: Tim 'mithro' Ansell <mithro@mithis.com>
Signed-off-by: Stafford Horne <shorne@gmail.com>
Merge remote-tracking branch 'remotes/cohuck/tags/s390x-20170421' into staging
The first batch of s390x changes for 2.10:
- the new compat machine
- several cleanups and optimizations
- introspection for css ids
# gpg: Signature made Fri 21 Apr 2017 08:36:25 BST
# gpg: using RSA key 0xDECF6B93C6F02FAF
# gpg: Good signature from "Cornelia Huck <huckc@linux.vnet.ibm.com>"
# gpg: aka "Cornelia Huck <cornelia.huck@de.ibm.com>"
# Primary key fingerprint: C3D0 D66D C362 4FF6 A8C0 18CE DECF 6B93 C6F0 2FAF
* remotes/cohuck/tags/s390x-20170421:
s390x: Drop useless casts
s390x: register I/O adapters per ISC during init
s390x/flic: cache flic in s390_get_flic
s390x: initialize flic before I/O subsystems
s390x: use enum for adapter type and standardize its naming
s390x/css: consolidate the devno property for ccw devices
s390x/css: provide introspection for virtual subchannel and device busid
s390x/css: introduce read-only property type for device ids
s390x/pci: make printf always compile in debug output
s390x/kvm: make printf always compile in debug output
s390x: introduce 2.10 compat machine
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
An upcoming Coccinelle cleanup script wanted to reformat the casts
present in this file - but on closer look, we don't need the casts
at all because C automatically converts void* to any other pointer.
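The cleanup boils down to dropping casts like this (an illustrative
example, not a hunk from the patch):

    DeviceState *dev = (DeviceState *)opaque;   /* before */
    DeviceState *dev = opaque;                  /* after: void* converts implicitly */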
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20170405194741.18956-4-eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Wrapped printf calls inside debug macros (DPRINTF) in an `if` statement.
This will ensure that printf function will always compile even if debug
output is turned off and, in turn, will prevent bitrot of the format
strings.
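The usual shape of this pattern is shown below; the guard and message
prefix are illustrative, not necessarily the names used in these files:

    #define DEBUG_EXAMPLE 0    /* illustrative guard name */
    #define DPRINTF(fmt, ...)                                     \
        do {                                                      \
            if (DEBUG_EXAMPLE) {                                  \
                fprintf(stderr, "example: " fmt, ## __VA_ARGS__); \
            }                                                     \
        } while (0)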
Signed-off-by: Danil Antonov <g.danil.anto@gmail.com>
Message-Id: <CA+KKJYAhsuTodm3s2rK65hR=-Xi5+Z7Q+M2nJYZQf2wa44HfOg@mail.gmail.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
This removes the assert(kvm_enabled()) from kvmppc_host_cpu_initfn().
This assert can never be triggered as the function is only registered
when KVM is available (see also 4c315c2
"qdev: Protect device-list-properties against broken devices").
So we can remove the cannot_destroy_with_object_finalize_yet from
kvmppc_host_cpu_class_init() without fear and beyond reproach.
(as has already been done for i386 with 771a13e "i386: Unset
cannot_destroy_with_object_finalize_yet on "host" model" and
e435601 "target-i386: Remove assert(kvm_enabled()) from
host_x86_cpu_initfn()")
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Message-Id: <20170414083717.13641-3-lvivier@redhat.com>
Acked-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Now that we've rewritten M-profile exception return so that the magic
PC values are not visible to other parts of QEMU, we can delete the
special casing of them elsewhere.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1491844419-12485-10-git-send-email-peter.maydell@linaro.org
On M profile, return from an exception happens when code in Handler mode
executes one of the following function call return instructions:
* POP or LDM which loads the PC
* LDR to PC
* BX register
and the new PC value is 0xFFxxxxxx.
QEMU tries to implement this by not treating the instruction
specially but then catching the attempt to execute from the magic
address value. This is not ideal, because:
* there are guest visible differences from the architecturally
specified behaviour (for instance jumping to 0xFFxxxxxx via a
different instruction should not cause an exception return but it
will in the QEMU implementation)
* we have to account for it in various places (like refusing to take
an interrupt if the PC is at a magic value, and making sure that
the MPU doesn't deny execution at the magic value addresses)
Drop these hacks, and instead implement exception return the way the
architecture specifies -- by having the relevant instructions check
for the magic value and raise the 'do an exception return' QEMU
internal exception immediately.
The effect on the generated code is minor:
bx lr, old code (and new code for Thread mode):
TCG:
mov_i32 tmp5,r14
movi_i32 tmp6,$0xfffffffffffffffe
and_i32 pc,tmp5,tmp6
movi_i32 tmp6,$0x1
and_i32 tmp5,tmp5,tmp6
st_i32 tmp5,env,$0x218
exit_tb $0x0
set_label $L0
exit_tb $0x7f2aabd61993
x86_64 generated code:
0x7f2aabe87019: mov %ebx,%ebp
0x7f2aabe8701b: and $0xfffffffffffffffe,%ebp
0x7f2aabe8701e: mov %ebp,0x3c(%r14)
0x7f2aabe87022: and $0x1,%ebx
0x7f2aabe87025: mov %ebx,0x218(%r14)
0x7f2aabe8702c: xor %eax,%eax
0x7f2aabe8702e: jmpq 0x7f2aabe7c016
bx lr, new code when in Handler mode:
TCG:
mov_i32 tmp5,r14
movi_i32 tmp6,$0xfffffffffffffffe
and_i32 pc,tmp5,tmp6
movi_i32 tmp6,$0x1
and_i32 tmp5,tmp5,tmp6
st_i32 tmp5,env,$0x218
movi_i32 tmp5,$0xffffffffff000000
brcond_i32 pc,tmp5,geu,$L1
exit_tb $0x0
set_label $L1
movi_i32 tmp5,$0x8
call exception_internal,$0x0,$0,env,tmp5
x86_64 generated code:
0x7fe8fa1264e3: mov %ebp,%ebx
0x7fe8fa1264e5: and $0xfffffffffffffffe,%ebx
0x7fe8fa1264e8: mov %ebx,0x3c(%r14)
0x7fe8fa1264ec: and $0x1,%ebp
0x7fe8fa1264ef: mov %ebp,0x218(%r14)
0x7fe8fa1264f6: cmp $0xff000000,%ebx
0x7fe8fa1264fc: jae 0x7fe8fa126509
0x7fe8fa126502: xor %eax,%eax
0x7fe8fa126504: jmpq 0x7fe8fa122016
0x7fe8fa126509: mov %r14,%rdi
0x7fe8fa12650c: mov $0x8,%esi
0x7fe8fa126511: mov $0x56095dbeccf5,%r10
0x7fe8fa12651b: callq *%r10
which is a difference of one cmp/branch-not-taken. This will
be lost in the noise of having to exit generated code and
look up the next TB anyway.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1491844419-12485-9-git-send-email-peter.maydell@linaro.org
For M profile exception-return handling we'd like to generate different
code for some instructions depending on whether we are in Handler
mode or Thread mode. This isn't the same as "are we privileged
or user", so we need an extra bit in the TB flags to distinguish.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1491844419-12485-8-git-send-email-peter.maydell@linaro.org
We now test for "are we singlestepping" in several places and
it's not a trivial check because we need to care about both
architectural singlestep and QEMU gdbstub singlestep. We're
also about to add another place that needs to make this check,
so pull the condition out into a function.
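The pulled-out check presumably looks something like this (a sketch based
on the description above; the function name is assumed):

    static inline bool is_singlestepping(DisasContext *s)
    {
        /* covers both architectural single-step and gdbstub single-step */
        return s->singlestep_enabled || s->ss_active;
    }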
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1491844419-12485-7-git-send-email-peter.maydell@linaro.org
Move the code to generate the "condition failed" instruction
codepath out of the if (singlestepping) {} else {}. This
will allow adding support for handling a new is_jmp type
which can't be neatly split into "singlestepping case"
versus "not singlestepping case".
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1491844419-12485-6-git-send-email-peter.maydell@linaro.org
Move the utility routines gen_set_condexec() and gen_set_pc_im()
up in the file, as we will want to use them from a function
placed earlier in the file than their current location.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1491844419-12485-5-git-send-email-peter.maydell@linaro.org
We currently have two places that do:
    if (dc->ss_active) {
        gen_step_complete_exception(dc);
    } else {
        gen_exception_internal(EXCP_DEBUG);
    }
Factor this out into its own function, as we're about to add
a third place that needs the same logic.
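The factored-out helper would then be roughly (name assumed):

    static void gen_singlestep_exception(DisasContext *dc)
    {
        if (dc->ss_active) {
            gen_step_complete_exception(dc);
        } else {
            gen_exception_internal(EXCP_DEBUG);
        }
    }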
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1491844419-12485-4-git-send-email-peter.maydell@linaro.org
In Thumb mode, the only instructions which can cause an interworking
branch by writing the PC are BLX, BX, BXJ, LDR, POP and LDM. Unlike
ARM mode, data processing instructions which target the PC do not
cause interworking branches.
When we added support for doing interworking branches on writes to
PC from data processing instructions in commit 21aeb3430c, we
accidentally changed a Thumb instruction to have interworking
branch behaviour for writes to PC. (MOV, MOVS register-shifted
register, encoding T2; this is the standard encoding for
LSL/LSR/ASR/ROR (register).)
For this encoding, behaviour with Rd == R15 is specified as
UNPREDICTABLE, so allowing an interworking branch is within
spec, but it's confusing and differs from our handling of this
class of UNPREDICTABLE for other Thumb ALU operations. Make
it perform a simple (non-interworking) branch like the others.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1491844419-12485-3-git-send-email-peter.maydell@linaro.org
For M-profile CPUs, the BXJ instruction does not exist at all, and
the encoding should always UNDEF. We were accidentally implementing
it to behave like A-profile BXJ; correct the error.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1491844419-12485-2-git-send-email-peter.maydell@linaro.org
In tlb_fill() we construct a syndrome register value from a
fault status register value which is filled in by arm_tlb_fill().
arm_tlb_fill() returns FSR values which might be in the format
used with short-format page descriptors, or the format used
with long-format (LPAE) descriptors. The syndrome register
always uses LPAE-format FSR status codes.
It isn't actually possible to end up delivering a syndrome
register value to the guest for a fault which is reported
with a short-format FSR (that kind of stage 1 fault will only
happen for an AArch32 translation regime which doesn't have
a syndrome register, and can never be redirected to an AArch64
or Hyp exception level). Add an assertion which checks this,
and adjust the code so that we construct a syndrome with
an invalid status code, rather than allowing set bits in
the FSR input to randomly corrupt other fields in the syndrome.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1491486152-24304-1-git-send-email-peter.maydell@linaro.org
The excnames[] array is defined in internals.h because we used
to use it from two different source files for handling logging
of AArch32 and AArch64 exception entry. Refactoring means that
it's now used only in arm_log_exception() in helper.c, so move
the array into that function.
Suggested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1491821097-5647-1-git-send-email-peter.maydell@linaro.org
Recent changes have added new EXCP_ values to ARM but forgot
to update the excnames[] array which is used to provide
human-readable strings when printing information about the
exception for debug logging. Add the missing entries, and
add a comment to the list of #defines to help avoid the mistake
being repeated in future.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1491486340-25988-1-git-send-email-peter.maydell@linaro.org
Anything that calls into HW emulation must be protected by the BQL.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Acked-by: Eduardo Habkost <ehabkost@redhat.com>
Running QEMU with "qemu-system-x86_64 -M none -nographic -m 256" and executing
"dump-guest-memory /dev/null 0 8192" results in segfault.
Fix by checking if we have CPU.
Signed-off-by: Iwona Kotlarska <iwona260909@gmail.com>
Message-Id: <20170330050924.22134-1-iwona260909@gmail.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Marcel Apfelbaum <marcel@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Fixed up title
The existing code for "host" and "max" CPU models overrides every
single feature in the CPU object at realize time, even the ones
that were explicitly enabled or disabled by the user using
"feat=on" or "feat=off", while features set using +feat/-feat are
kept.
This means "-cpu host,+invtsc" works as expected, while
"-cpu host,invtsc=on" doesn't.
This was a known bug, already documented in a comment inside
x86_cpu_expand_features(). What makes this bug worse now is that
libvirt 3.0.0 and newer now use "feat=on|off" instead of
+feat/-feat when it detects a QEMU version that supports it (see
libvirt commit d47db7b16dd5422c7e487c8c8ee5b181a2f9cd66).
Change the feature property getter/setter to set a
env->user_features field, to keep track of features that were
explicitly changed using QOM properties. Then make the
max_features code not override user features when handling "-cpu
host" and "-cpu max".
This will also allow us to remove the plus_features/minus_features
hack in the future, but I plan to do that after 2.9.0 is
released.
Reported-by: Jiri Denemark <jdenemar@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20170327144815.8043-3-ehabkost@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Instead of passing a pointer to the feature property getter and
setter functions, pass a FeatureWord enum so they can perform
other actions related to the feature flag.
This will be used to add a new "user_features" field to keep
track of features that were explicitly set by the user.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20170327144815.8043-2-ehabkost@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
This fixes the bug: 'user-to-root privesc inside VM via bad translation
caching' reported by Jann Horn here:
https://bugs.chromium.org/p/project-zero/issues/detail?id=1122
Reviewed-by: Richard Henderson <rth@twiddle.net>
CC: Peter Maydell <peter.maydell@linaro.org>
CC: Paolo Bonzini <pbonzini@redhat.com>
Reported-by: Jann Horn <jannh@google.com>
Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
Message-Id: <20170323175851.14342-1-bobby.prani@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Returning NULL from get_max_cpu_model results in a SIGSEGV runtime error.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170130131517.8092-1-sw@weilnetz.de>
Cc: qemu-stable@nongnu.org
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Our implementation of writes to the APSR for M-profile via the MSR
instruction was badly broken.
First and worst, we had the sense wrong on the test of bit 2 of the
SYSm field -- this is supposed to request an APSR write if bit 2 is 0
but we were doing it if bit 2 was 1. This bug was introduced in
commit 58117c9bb4, so hasn't been in a QEMU release.
Secondly, the choice of exactly which parts of APSR should be written
is defined by bits in the 'mask' field. We were not passing these
through from instruction decode, making it impossible to check them
in the helper.
Pass the mask bits through from the instruction decode to the helper
function and process them appropriately; fix the wrong sense of the
SYSm bit 2 check.
Invalid mask values and invalid combinations of mask and register
number are UNPREDICTABLE; we choose to treat them as if the mask
values were valid.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1487616072-9226-5-git-send-email-peter.maydell@linaro.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
The MRS instruction requires that bits [19..16] are all 1s, and for
A/R profile also that bits [7..0] are all 0s. At this point in the
decode tree we have checked all of the rest of the instruction but
were allowing these to be any value. If these bits are not set then
the result is architecturally UNPREDICTABLE, but choosing to UNDEF is
more helpful to the user and avoids unexpected odd behaviour if the
encodings are used for some purpose in future architecture versions.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1487616072-9226-4-git-send-email-peter.maydell@linaro.org
M profile doesn't have the MSR(banked) and MRS(banked) instructions
and uses the encodings for different kinds of M-profile MRS/MSR.
Guard the relevant bits of the decode logic to make sure we don't
accidentally fall into them by accident on M-profile.
(The bit being checked for this (bit 5) is part of the SYSm field on
M-profile, but since no currently allocated system registers have
encodings with bit 5 of SYSm set, this hasn't been a problem in
practice.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1487616072-9226-3-git-send-email-peter.maydell@linaro.org
M profile doesn't have the HVC or SMC encodings, so make them always
UNDEF rather than generating calls to helper functions that assume
A/R profile.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1487616072-9226-2-git-send-email-peter.maydell@linaro.org
It is unnecessary to test R6 from delay/forbidden slot check
in gen_msa_branch().
https://bugs.launchpad.net/qemu/+bug/1663287
Reported-by: Brian Campbell <bacam@z273.org.uk>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
This fixes many warnings like:
target/mips/translate.c:6253:13: warning: Value stored to 'rn' is never read
rn = "invalid sel";
^ ~~~~~~~~~~~~~
Reported-by: Clang Static Analyzer
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
The static code analyzer complains:
target/mips/helper.c:453:5: warning: Function call argument is an uninitialized value
qemu_log_mask(CPU_LOG_MMU,
^~~~~~~~~~~~~~~~~~~~~~~~~~
'physical' and 'prot' are uninitialized if 'ret' is not TLBRET_MATCH.
Reported-by: Clang Static Analyzer
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
The only functional difference between the GENERATED_HEADERS
and GENERATED_SOURCES variables is that 'Makefile' has a
dependency on GENERATED_HEADERS, causing generated header files
to be created immediately at the start of the build process.
There is no reason why this early creation should be restricted
to the .h files, and not include .c files too. Merge both of
the variables into a single GENERATED_FILES variable to make
it clear it is for any type of generated file.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Message-id: 20170228122901.24520-2-berrange@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This dependency is the wrong way, and we will need util/qemu-timer.h from
sysemu/cpus.h in the next patch.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When a KVM_{GET,SET}_MSRS ioctl() fails, it is difficult to find
out which MSR caused the problem. Print an error message for
debugging, before we trigger the (ret == cpu->kvm_msr_buf->nmsrs)
assert.
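The added diagnostic plausibly looks like this (a sketch; the exact
message wording is not taken from the patch):

    if (ret < cpu->kvm_msr_buf->nmsrs) {
        struct kvm_msr_entry *e = &cpu->kvm_msr_buf->entries[ret];
        error_report("error: failed to set MSR 0x%" PRIx32 " to 0x%" PRIx64,
                     (uint32_t)e->index, (uint64_t)e->data);
    }
    assert(ret == cpu->kvm_msr_buf->nmsrs);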
Suggested-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20170309194634.28457-1-ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The power state spec section 5.1.5 AFFINITY_INFO defines the
affinity info return values as
0 ON
1 OFF
2 ON_PENDING
I grepped QEMU for power_state to ensure that no assumptions
of OFF=0 were being made.
Signed-off-by: Andrew Jones <drjones@redhat.com>
Message-id: 20170303123232.4967-1-drjones@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In armv8, this register implements more than a single bit, with
fine-grained enables for read access to event counters, cycles
counters, and write access to the software increment. This change
implements those checks using custom access functions for the relevant
registers.
Signed-off-by: Andrew Baumann <Andrew.Baumann@microsoft.com>
Message-id: 20170228215801.10472-2-Andrew.Baumann@microsoft.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: move a couple of access functions to be only compiled
ifndef CONFIG_USER_ONLY to avoid compiler warnings]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
A bug was introduced in the following commit:
dc0ad84 ("target/ppc: update overflow flags for add/sub")
For a 32-bit ppc target, extracting bit 63 for overflow is not correct.
Made it dependent on TARGET_LONG_BITS. This had broken booting a MacOS
9.2.1 image.
Reported-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
The SPR UAMR has the number 13, and not 12. (Fortunately it seems like
Linux is not using this register yet - only the privileged version with
number 29 ... that's why nobody noticed this problem yet)
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
So far xtensa provides fixed dummy argc/argv for the corresponding
semihosting calls. Now that there are semihosting_get_argc and
semihosting_get_arg, use them to pass actual command line arguments
to guest.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
glibc blacklists TSX on Haswell CPUs with model==60 and
stepping < 4. To make the Haswell CPU model more useful, make
those guests actually use TSX by changing CPU stepping to 4.
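The change itself is tiny; in the Haswell entry of the builtin CPU model
table it amounts to something like this (surrounding fields omitted):

    .family = 6,
    .model = 60,
    .stepping = 4,    /* >= 4 so guest glibc does not blacklist TSX */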
References:
* glibc commit 2702856bf45c82cf8e69f2064f5aa15c0ceb6359
https://sourceware.org/git/?p=glibc.git;a=commit;h=2702856bf45c82cf8e69f2064f5aa15c0ceb6359
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20170309181212.18864-4-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Some Intel CPUs are known to have a broken TSX implementation. A
microcode update from Intel disabled TSX on those CPUs, but
GET_SUPPORTED_CPUID might be reporting it as supported if the
hosts were not updated yet.
Manually fixup the GET_SUPPORTED_CPUID data to ensure we will
never enable TSX when running on those hosts.
Reference:
* glibc commit 2702856bf45c82cf8e69f2064f5aa15c0ceb6359:
https://sourceware.org/git/?p=glibc.git;a=commit;h=2702856bf45c82cf8e69f2064f5aa15c0ceb6359
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20170309181212.18864-3-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
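A sketch of the idea; the model/stepping list here is abbreviated and illustrative (the real check mirrors the glibc blacklist referenced above):

    static bool host_tsx_blacklisted(void)
    {
        char vendor[CPUID_VENDOR_SZ + 1];
        int family, model, stepping;

        host_vendor_fms(vendor, &family, &model, &stepping);
        /* Haswell parts known to have TSX disabled by a microcode update */
        return !strcmp(vendor, CPUID_VENDOR_INTEL) && family == 6 &&
               ((model == 63 && stepping < 4) || model == 60);
    }

    /* ...then, where the GET_SUPPORTED_CPUID data is post-processed: */
    if (function == 7 && index == 0 && reg == R_EBX && host_tsx_blacklisted()) {
        ret &= ~(CPUID_7_0_EBX_HLE | CPUID_7_0_EBX_RTM);
    }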
Helper function for code that needs to check the host CPU
vendor/family/model/stepping values.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20170309181212.18864-2-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
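Roughly, such a helper can be built on the existing host_cpuid() wrapper; a minimal sketch, where the family/model decoding follows the usual CPUID leaf 1 layout including the extended fields:

    void host_vendor_fms(char *vendor, int *family, int *model, int *stepping)
    {
        uint32_t eax, ebx, ecx, edx;

        host_cpuid(0x0, 0, &eax, &ebx, &ecx, &edx);
        x86_cpu_vendor_words2str(vendor, ebx, edx, ecx);

        host_cpuid(0x1, 0, &eax, &ebx, &ecx, &edx);
        if (family) {
            *family = ((eax >> 8) & 0x0F) + ((eax >> 20) & 0xFF);
        }
        if (model) {
            *model = ((eax >> 4) & 0x0F) | ((eax >> 12) & 0xF0);
        }
        if (stepping) {
            *stepping = eax & 0x0F;
        }
    }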
...just like the rest of the displayed ESR register. Otherwise people
might scratch their heads if a number that is not obviously hex is
displayed for the EC field.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: KONRAD Frederic <fred.konrad@greensocs.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Paths through the softmmu code during code generation now need to be audited
to check for double locking of tb_lock. In particular, VMEXIT can take tb_lock
through cpu_vmexit -> cpu_x86_update_cr4 -> tlb_flush.
To avoid this, split VMEXIT delivery in two parts, similar to what is done with
exceptions. cpu_vmexit only records the VMEXIT exit code and information, and
cc->do_interrupt can then deliver it when it is safe to take the lock.
Reported-by: Alexander Boettcher <alexander.boettcher@genode-labs.com>
Suggested-by: Richard Henderson <rth@twiddle.net>
Tested-by: Alexander Boettcher <alexander.boettcher@genode-labs.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Hold the BQL when accessing the timer, which can cause interrupts.
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Make sure we have the BQL held when processing interrupts.
Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Helpers that can trigger IO events (including interrupts) need to be
protected by the BQL. I've updated all the helpers that call into
the ioinst_handle_* functions.
Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
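The pattern looks roughly like the following; the helper name and its body are stand-ins, not actual s390x code:

    /* Illustrative pattern: take the BQL around the part of the helper that
     * may raise an interrupt, so the IO side only ever runs under the lock. */
    void HELPER(example_io)(CPUS390XState *env, uint64_t r1)
    {
        qemu_mutex_lock_iothread();
        do_io_instruction(env, r1);   /* hypothetical ioinst_handle_* caller */
        qemu_mutex_unlock_iothread();
    }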
IRQ modification is part of device emulation and should be done while
the BQL is held to prevent races when MTTCG is enabled. This adds
assertions in the hw emulation layer and wraps the calls from helpers
in the BQL.
Reported-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
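The assertion side of this looks roughly like the following; the device type, field names and placement are illustrative:

    /* Device-emulation code that raises or lowers an IRQ can now check that
     * the caller holds the BQL before touching interrupt state. */
    static void example_set_irq(void *opaque, int n, int level)
    {
        ExampleDevice *dev = opaque;             /* hypothetical device type */

        g_assert(qemu_mutex_iothread_locked());
        qemu_set_irq(dev->irq[n], level);        /* hypothetical irq array */
    }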
This suppresses the incorrect warning when forcing MTTCG for x86
guests on x86 hosts. A future patch will still warn when
TARGET_SUPPORT_MTTCG hasn't been defined for the guest (which is still
pending for x86).
Reported-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: Eduardo Habkost <ehabkost@redhat.com>
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-2.9-20170306' into staging
ppc patch queue for 2017-03-06
Looks like my previous batch wasn't quite the last before hard freeze.
This has a handful of bugfixes to go in. They're all genuine
bugfixes, though not regressions in some cases.
# gpg: Signature made Mon 06 Mar 2017 04:07:48 GMT
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.9-20170306:
target/ppc: use helper for excp handling
target/ppc: fmadd: add macro for updating flags
target/ppc: fmadd check for excp independently
spapr: ensure that all threads within core are on the same NUMA node
ppc/xics: register reset handlers for the ICP and ICS objects
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use the helper routine float[32,64]_maddsub_update_excp() in VSX_MADD
macro.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Adds the FPU_MADDSUB_UPDATE macro; this will be used for other routines
handling float32/64.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The current order of checking does not conform to the spec
(ISA 3.0: MultiplyAddDP, page 469). Change the order and make them
independent of each other.
For example: a = infinity, b = zero, c = SNaN; this should set both
VXIMZ and VXSNAN.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
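A rough sketch of the independent checks for the float64 case, using the existing softfloat predicates; the wrapper function is illustrative and this is only the shape of the logic, not the macro from the patch:

    static void example_fmadd_check_excp(CPUPPCState *env, float64 arg1,
                                         float64 arg2, float64 arg3)
    {
        /* Check each invalid-operation cause on its own, so inf * 0 with an
         * SNaN operand raises both VXIMZ and VXSNAN. */
        if (float64_is_signaling_nan(arg1, &env->fp_status) ||
            float64_is_signaling_nan(arg2, &env->fp_status) ||
            float64_is_signaling_nan(arg3, &env->fp_status)) {
            float_invalid_op_excp(env, POWERPC_EXCP_FP_VXSNAN, 1);
        }
        if ((float64_is_infinity(arg1) && float64_is_zero(arg2)) ||
            (float64_is_zero(arg1) && float64_is_infinity(arg2))) {
            float_invalid_op_excp(env, POWERPC_EXCP_FP_VXIMZ, 1);
        }
    }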
The split between tests/test-qobject-input-visitor.c and
tests/test-qobject-input-strict.c now makes less sense than ever. The
next commit will take care of that.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <1488544368-30622-20-git-send-email-armbru@redhat.com>
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-2.9-20170303' into staging
ppc patch queue for 2017-03-03
This will probably be my last pull request before the hard freeze. It
has some new work, but that has all been posted in draft before the
soft freeze, so I think it's reasonable to include in qemu-2.9.
This batch has:
* A substantial amount of POWER9 work
* Implements the legacy (hash) MMU for POWER9
* Some more preliminaries for implementing the POWER9 radix
MMU
* POWER9 has_work
* Basic POWER9 compatibility mode handling
* Removal of some premature tests
* Some cleanups and fixes to the existing MMU code to make the
POWER9 work simpler
* A bugfix for TCG multiply adds on power
* Allow pseries guests to access PCIe extended config space
This also includes a code-motion not strictly in ppc code - moving
getrampagesize() from ppc code to exec.c. This will make some future
VFIO improvements easier, Paolo said it was ok to merge via my tree.
# gpg: Signature made Fri 03 Mar 2017 03:20:36 GMT
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.9-20170303:
target/ppc: rewrite f[n]m[add,sub] using float64_muladd
spapr: Small cleanup of PPC MMU enums
spapr_pci: Advertise access to PCIe extended config space
target/ppc: Rework hash mmu page fault code and add defines for clarity
target/ppc: Move no-execute and guarded page checking into new function
target/ppc: Add execute permission checking to access authority check
target/ppc: Add Instruction Authority Mask Register Check
hw/ppc/spapr: Add POWER9 to pseries cpu models
target/ppc/POWER9: Add cpu_has_work function for POWER9
target/ppc/POWER9: Add POWER9 pa-features definition
target/ppc/POWER9: Add POWER9 mmu fault handler
target/ppc: Don't gen an SDR1 on POWER9 and rework register creation
target/ppc: Add patb_entry to sPAPRMachineState
target/ppc/POWER9: Add POWERPC_MMU_V3 bit
powernv: Don't test POWER9 CPU yet
exec, kvm, target-ppc: Move getrampagesize() to common code
target/ppc: Add POWER9/ISAv3.00 to compat_table
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These cause compilation failures on CentOS 6 or other operating
systems with older GCCs.
Cc: Richard Henderson <rth@twiddle.net>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These cause compilation failures on CentOS 6 or other operating
systems with older GCCs.
Cc: Richard Henderson <rth@twiddle.net>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1488558530-21016-1-git-send-email-pbonzini@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Migration from a 2.3.0 qemu results in a reboot on the receiving QEMU
due to a disagreement about SM (System management) interrupts.
2.3.0 didn't have much SMI support, but it did set CPU_INTERRUPT_SMI
and this gets into the migration stream, but on 2.3.0 it
never got delivered.
Around 2.4.0, SMI interrupt support was added but was broken - so
that when a 2.3.0 stream was received it cleared the CPU_INTERRUPT_SMI
but never actually caused an interrupt.
The SMI delivery was recently fixed by 68c6efe07a, but the
effect now is that an incoming 2.3.0 stream takes the interrupt it
had flagged, but its BIOS can't actually handle it (I think
partly due to the original interrupt not being taken during boot?).
The consequence is a triple(?) fault and a reboot.
Tested from:
2.3.1 -M 2.3.0
2.7.0 -M 2.3.0
2.8.0 -M 2.3.0
2.8.0 -M 2.8.0
This corresponds to RH bugzilla entry 1420679.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20170223133441.16010-1-dgilbert@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Anton Nefedov <anton.nefedov@virtuozzo.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Eric Blake <eblake@redhat.com>
Message-Id: <1487614915-18710-3-git-send-email-den@openvz.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Call kvm_on_sigbus_vcpu asynchronously from the VCPU thread.
Information for the SIGBUS can be stored in thread-local variables
and processed later in kvm_cpu_exec.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Build it on kvm_arch_on_sigbus_vcpu instead. They do the same
for "action optional" SIGBUSes, and the main thread should never get
"action required" SIGBUSes because it blocks the signal.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Move the KVM "eat signals" code under CONFIG_LINUX, in preparation
for moving it to kvm-all.c; reraise non-MCE SIGBUS immediately,
without passing it to KVM.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use the softfloat api for fused multiply-add.
Introduce a routine to set the FPSCR flags VXSNAN, VXIMZ and VXISI.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The PPC MMU types are sometimes treated as if they were a bit field
and sometimes as if they were an enum, which causes maintenance
problems: flipping bits in the MMU type (which is done on both the 1TB
segment and 64K segment bits) currently produces new MMU type
values that are not handled in every "switch" on it, sometimes causing
an abort().
This patch provides some macros that can be used to filter out the
"bit field-like" bits so that the remainder of the value can be
switched on, like an enum. This allows removal of all of the
"degraded" types from the list and should ease maintenance.
Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The hash mmu page fault handling code is responsible for generating ISIs
and DSIs when access permissions cause an access to fail. Part of this
involves setting the srr1 or dsisr registers to indicate what causes the
access to fail. Add defines for the bit fields of these registers and
rework the code to use these new defines in order to improve readability
and code clarity.
While we're here, update what is logged when an access fails to include
information as to what caused the access to fail, for debug purposes.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
[dwg: Moved constants to cpu.h since they're not MMUv3 specific]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
A pte entry has bit fields which can be used to make a page no-execute or
guarded; if either of these bits is set, then an instruction access to this
page will fail. Currently these bits are checked with the pp_prot function
however the ISA specifies that the access authority controlled by the
key-pp value pair should only be checked on an instruction access after
the no-execute and guard bits have already been verified to permit the
access.
Move the no-execute and guard bit checking into a new separate function.
Note that we can remove the check for the no-execute bit in the slb entry
since this check was already performed above when we obtained the slb
entry.
In the event that the no-execute or guard bits are set, an ISI should be
generated with the SRR1_NOEXEC_GUARD (0x10000000) bit set in srr1. Add a
define for this for clarity.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
[dwg: Move constants to cpu.h since they're not MMUv3 specific]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
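The predicate part of this can be sketched as follows; the function name is illustrative and the SRR1_NOEXEC_GUARD value is the one quoted above:

    #define SRR1_NOEXEC_GUARD   0x10000000

    /* Instruction fetch is precluded when either N (no-execute) or
     * G (guarded) is set in the second doubleword of the HPTE. */
    static bool ppc_hash64_pte_noexec_guard(PowerPCCPU *cpu, ppc_hash_pte64_t pte)
    {
        return pte.pte1 & (HPTE64_R_N | HPTE64_R_G);
    }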
Basic storage protection defines various access authority permissions
based on a slb storage key and pte pp value pair. This access authority
defines read, write and execute permissions however currently we only
use this to control read and write permissions and ignore the execute
control.
Fix the code to allow execute permissions based on the key-pp value pair.
Execute is allowed under the same conditions which enable reads.
(i.e. read permission -> execute permission)
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The instruction authority mask register (IAMR) can be used to restrict
permissions for instruction fetch accesses on a per key basis for each
of 32 different key values. Access permissions are derived based on the
specific key value stored in the relevant page table entry.
The IAMR was introduced in, and is present in processors since, POWER8
(ISA v2.07). Thus introduce a function to check access permissions based
on the pte key value and the contents of the IAMR when handling a page
fault to ensure sufficient access permissions for an instruction fetch.
A hash pte contains a key value in bits 2:3|52:54 of the second double word
of the pte, this key value gives an index into the IAMR which contains 32
2-bit access masks. If the least significant bit of the 2-bit access mask
corresponding to the given key value is set (IAMR[key] & 0x1 == 0x1) then
the instruction fetch is not permitted and an ISI is generated accordingly.
While we're here, add defines for the srr1 bits to be set for the ISI for
clarity.
e.g.
pte:
dw0 [XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX]
dw1 [XX01XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX010XXXXXXXXX]
       ^^                                                ^^^
key = 01010 (0x0a)
IAMR: [XXXXXXXXXXXXXXXXXXXX01XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX]
                           ^^
Access mask = 0b01
Test access mask: 0b01 & 0x1 == 0x1
Least significant bit of the access mask is set, thus the instruction fetch
is not permitted. We should generate an instruction storage interrupt (ISI)
with bit 42 of SRR1 set to indicate access precluded by virtual page class
key protection.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
[dwg: Move new constants to cpu.h, since they're not MMUv3 specific]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
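The check described above can be sketched like this; the helper name is illustrative, and the bit positions follow the key/IAMR layout in the example:

    /* key = pte1 bits 2:3 || 52:54 (IBM numbering); the IAMR holds 32 two-bit
     * fields, and the low bit of field 'key' precludes instruction fetch. */
    static bool ppc_hash64_iamr_precludes_fetch(PowerPCCPU *cpu, uint64_t pte1)
    {
        CPUPPCState *env = &cpu->env;
        unsigned key = ((pte1 >> 57) & 0x18) | ((pte1 >> 9) & 0x7);
        unsigned iamr_bits = (env->spr[SPR_IAMR] >> (62 - 2 * key)) & 0x3;

        return iamr_bits & 0x1;
    }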
The cpu_has_work function masks interrupts, based on the LPCR, to determine
whether there is work for the cpu. Add a function to do this
for POWER9 and add it to the POWER9 cpu definition. This is similar to that
for POWER8 except using the LPCR bits as defined for POWER9.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Add a new mmu fault handler for the POWER9 cpu and add it as the handler
for the POWER9 cpu definition.
This handler checks if the guest is radix or hash based on the value in the
partition table entry and calls the correct fault handler accordingly.
The hash fault handling code has also been updated to check if the
partition is using segment tables.
Currently only legacy hash (no segment tables) is supported.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
POWER9 doesn't have a storage description register 1 (SDR1) which is used
to store the base and size of the hash table. Thus we don't need to
generate this register on the POWER9 cpu model. While we're here, the
register generation code for 970, POWER5+, POWER<7/8/9> in general is a
mess where we call a generic function from a model specific function which
then attempts to call model specific functions, so rework this for
readability.
We update ppc_cpu_dump_state so that "info registers" will only display
the value of sdr1 if the register has been generated.
As mentioned above the register generation for the pcc->init_proc
function for 970, POWER5+, POWER7, POWER8 and POWER9 has been reworked
for improved clarity. Instead of calling init_proc_book3s_64 which then
attempts to generate the correct registers through a mess of if statements,
we remove this function and instead call the appropriate register
generation functions directly. This follows the register generation model
used for earlier cpu models (pre-970) whereby cpu specific registers are
generated directly in the init_proc function and makes it easier to
add/remove specific registers for new cpu models.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
ISA v3.00 adds the idea of a partition table which is used to store the
address translation details for all partitions on the system. The partition
table consists of double word entries indexed by partition id where the second
double word contains the location of the process table in guest memory. The
process table is registered by the guest via a h-call.
We need somewhere to store the address of the process table so we add an entry
to the sPAPRMachineState struct called patb_entry to represent the second
doubleword of a single partition table entry corresponding to the current
guest. We need to store this value so we know if the guest is using radix or
hash translation and the location of the corresponding process table in guest
memory. Since we only have a single guest per qemu instance, we only need one
entry.
Since the partition table is technically a hypervisor resource we require that
access to it is abstracted by the virtual hypervisor through the get_patbe()
call. Currently the value of the entry is never set (and thus
defaults to 0 indicating hash), but it will be required to both implement
POWER9 kvm support and tcg radix support.
We also add this field to be migrated as part of the sPAPRMachineState as we
will need it on the receiving side as the guest will never tell us this
information again and we need it to perform translation.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
For easier handling of future processors using the POWER9 or something
close to it, add a new bit in the MMU model. This was originally from a
revised version of 86cf1e9 "target/ppc/POWER9: Add ISAv3.00 MMU definition"
but the older version of the patch was already merged. This makes the
change on top of the original version.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
getrampagesize() returns the largest supported page size and is mainly
used to know if huge pages are enabled.
However, it is implemented in target-ppc/kvm.c and is not available
to TCG or other architectures.
This renames and moves gethugepagesize() to mmap-alloc.c, where an
fd-based analog of it is already implemented. It also renames and moves
getrampagesize() to exec.c as it seems to be the common place for
helpers like this.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
compat_table contains the list of logical pvr compat modes which a cpu can
operate in. It is a list of struct CompatInfo which contains the given pvr
value for a compat mode, the pcr bits which should be set to operate in
that compat mode, the pcr level which must be present in pcr_supported for
a processor to support that compat mode and the max threads possible in
that compat mode.
Add an entry for the POWER9/ISAv3.00 logical pvr which represents a
processor running with support for logical pvr 0x0f000005. A processor
running in this mode should have PCR_COMPAT_3_00 set in the pcr (if
available in pcr_mask) and should have PCR_COMPAT_3_00 in pcr_supported
to indicate that it is capable of running in this compat mode.
Also add PCR_COMPAT_3_00 to the bits which must be set for all previous
compat modes. Since no processor models contain this bit yet in pcr_mask
it will never be set, but this ensures we don't forget to in the future.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-2.9-20170301' into staging
ppc patch queue for 2017-03-01
I was hoping to get this pull request squeezed in before the soft
freeze, but I ran into some difficulties during testing. Everything
here was at least posted before the soft freeze, so I'm hoping we can
still merge it for 2.9.
The biggest things here are:
* Cleanups to handling of hashed page tables, that will make
adding support for the POWER9 MMU easier
* Cleanups to the XICS interrupt controller that will make
implementing the powernv machine easier
* TCG implementation of extended overflow and carry handling for
POWER9
It also includes:
* Increasing the CPU limit for pseries to 1024 vCPUs
* Generating proper OF node names in qemu (making hotplug and
coldplug logic closer together)
# gpg: Signature made Wed 01 Mar 2017 04:43:06 GMT
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.9-20170301: (50 commits)
Add PowerPC 32-bit guest memory dump support
ppc/xics: rename 'ICPState *' variables to 'icp'
ppc/xics: move InterruptStatsProvider to the sPAPR machine
ppc/xics: move ics-simple post_load under the machine
ppc/xics: remove the XICSState classes
ppc/xics: export the XICS init routines
ppc/xics: move the ICP array under the sPAPR machine
ppc/xics: register the reset handler of ICP objects
ppc/xics: simplify spapr_dt_xics() interface
ppc/xics: use the QOM interface to grab an ICP
ppc/xics: move the cpu_setup() handler under the ICPState class
ppc/xics: simplify the cpu_setup() handler
ppc/xics: move kernel_xics_fd out of KVMXICSState
ppc/xics: extend the QOM interface to handle ICPs
ppc/xics: remove the XICS list of ICS
ppc/xics: register the reset handler of ICS objects
ppc/xics: remove xics_find_source()
ppc/xics: use the QOM interface to resend irqs
ppc/xics: use the QOM interface to get irqs
ppc/xics: use the QOM interface under the sPAPR machine
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
"-cpu max" and query-cpu-model-expansion support for x86. This
should be the last x86 pull request before 2.9 soft freeze.
-----BEGIN PGP SIGNATURE-----
iQIcBAABCAAGBQJYtFKvAAoJECgHk2+YTcWmyCwQAJEBM8NdOmN14yA3qTiTs0Kd
D/3VWSxtu9t41g39+yho70c+ZnpqGW28/WbY7E4ovAPRIoUI/pmKACY42k+WrTmK
MIBMesp1YnkxhwrFW9CwgqiUV8nr5ZMlzW/pQU/GaXbcH+7KfObeI93iGhtGjWvi
4nNvrK7b5mz2wPU6s1j+Bz2mp0CMd/sktmiH93tyWU+KgU7NXvMDPInVkfvBMvZN
5D6JLIeKxrndbaaUgvGbR4SUUmRs8TZFYfEbOdkkjIqh7MAKVKCCipFaxWIEfndr
bs1MDmw6uIUaI55JuWaXb//BkS+jai1dmn+ZEzMoisetlheSwR8cEStFJBxcm/7n
WQfxUd6TuNJ9PC1FIvb/OHUCGvzb+vtbEYAMmvCv8BVMnMivN7WNliu3rNVRgWBK
OHClBPdgwoIx2cGt6ic1rvxHxcpjeJ/YXBzL/JbBkblckpxbNRcW1NRTZKHIe3vr
JcPMEoP8g5d9ZHOG0WqBhKtJ3vUrxF3xqBKuR1Ha7QWpyKe9YF+RKrIA9dZkhLy0
Jh0fPZn2PmQrbLuZTC1u7Sgp22Duy7RcfJ7SikR+uhMLtkvToqu3ywLteW6Ta1by
oinb2UYMazwpAKKDcab4GdNJHPOuhnDw58osBVTyBiiN1tJjH+BhnVV4bYJVpaHI
MJIx5QvwqocSO0qoDZxo
=KPbC
-----END PGP SIGNATURE-----
Merge remote-tracking branch 'remotes/ehabkost/tags/x86-pull-request' into staging
x86 queue, 2017-02-27
"-cpu max" and query-cpu-model-expansion support for x86. This
should be the last x86 pull request before 2.9 soft freeze.
# gpg: Signature made Mon 27 Feb 2017 16:24:15 GMT
# gpg: using RSA key 0x2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6
* remotes/ehabkost/tags/x86-pull-request:
i386: Improve query-cpu-model-expansion full mode
i386: Implement query-cpu-model-expansion QMP command
i386: Define static "base" CPU model
i386: Don't set CPUClass::cpu_def on "max" model
i386: Make "max" model not use any host CPUID info on TCG
i386: Create "max" CPU model
qapi-schema: Comment about full expansion of non-migration-safe models
i386: Reorganize and document CPUID initialization steps
i386: Rename X86CPU::host_features to X86CPU::max_features
i386: Add ordering field to CPUClass
i386: Unset cannot_destroy_with_object_finalize_yet on "host" model
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Fixes the booting of ss20 roms.
Cc: qemu-stable@nongnu.org
Reported-by: Michael Russo <mike@papersolve.com>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20170228-1' into staging
target-arm queue:
* raspi2: add gpio controller and sdhost controller, with
the wiring so the guest can switch which controller the
SD card is attached to
(this is sufficient to get raspbian kernels to boot)
* GICv3: support state save/restore from KVM
* update Linux headers to 4.11
* refactor and QOMify the ARMv7M container object
# gpg: Signature made Tue 28 Feb 2017 17:11:49 GMT
# gpg: using RSA key 0x3C2525ED14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"
# gpg: aka "Peter Maydell <pmaydell@gmail.com>"
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>"
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* remotes/pmaydell/tags/pull-target-arm-20170228-1: (21 commits)
bcm2835: add sdhost and gpio controllers
bcm2835_gpio: add bcm2835 gpio controller
hw/sd: add card-reparenting function
qdev: Have qdev_set_parent_bus() handle devices already on a bus
hw/intc/arm_gicv3_kvm: Reset GICv3 cpu interface registers
target-arm: Add GICv3CPUState in CPUARMState struct
hw/intc/arm_gicv3_kvm: Implement get/put functions
hw/intc/arm_gicv3_kvm: Add ICC_SRE_EL1 register to vmstate
update Linux headers to 4.11
update-linux-headers: update for 4.11
stm32f205: Rename 'nvic' local to 'armv7m'
stm32f205: Create armv7m object without using armv7m_init()
armv7m: Split systick out from NVIC
armv7m: Don't put core v7M devices under CONFIG_STELLARIS
armv7m: Make bitband device take the address space to access
armv7m: Make NVIC expose a memory region rather than mapping itself
armv7m: Make ARMv7M object take memory region link
armv7m: Use QOMified armv7m object in armv7m_init()
armv7m: QOMify the armv7m container
armv7m: Move NVICState struct definition into header
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch extends support for the `dump-guest-memory` command to the
32-bit PowerPC architecture. It relies on the assumption that a 64-bit
guest will not dump a 32-bit core file (and vice versa).
[dwg: I suspect this patch won't cover all cases, in particular a
32-bit machine type on a 64-bit qemu build. However, it does strictly
more than what we had before, so might as well apply as a starting
point]
Signed-off-by: Mike Nawrocki <michael.nawrocki@gtri.gatech.edu>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
mcrxrx: Move to CR from XER Extended
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Add helper_div_compute_ov() in the int_helper for updating the overflow
flags.
For Divide Word:
SO, OV, and OV32 bits reflect overflow of the 32-bit result
For Divide DoubleWord:
SO, OV, and OV32 bits reflect overflow of the 64-bit result
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
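A minimal sketch of such a helper, assuming separate env->so/env->ov/env->ov32 fields in CPUPPCState mirroring the existing SO/OV handling; the real helper may be structured differently:

    /* On a divide with OE=1, set or clear OV and OV32 together and
     * accumulate SO when an overflow occurred. */
    static void helper_div_compute_ov(CPUPPCState *env, bool oe, bool overflow)
    {
        if (!oe) {
            return;
        }
        if (overflow) {
            env->so = env->ov = env->ov32 = 1;
        } else {
            env->ov = env->ov32 = 0;
        }
    }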
For Multiply Word:
SO, OV, and OV32 bits reflect overflow of the 32-bit result
For Multiply DoubleWord:
SO, OV, and OV32 bits reflect overflow of the 64-bit result
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* SO and OV reflect overflow of the 64-bit result in 64-bit mode and
overflow of the low-order 32-bit result in 32-bit mode
* OV32 reflects overflow of the low-order 32-bit result independent of the mode
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Adds routine to compute ca32 - gen_op_arith_compute_ca32
For 64-bit mode use the compute ca32 routine. While for 32-bit mode, CA
and CA32 will have same value.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
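For the addition case the computation can be sketched as follows, assuming a cpu_ca32 TCG global alongside the existing cpu_ca; the subtraction case differs slightly:

    /* The carry into bit 32 of the 64-bit sum is bit 32 of
     * (arg0 ^ arg1 ^ result), which is exactly CA32. */
    static void gen_op_arith_compute_ca32(DisasContext *ctx, TCGv res,
                                          TCGv arg0, TCGv arg1)
    {
        TCGv t0 = tcg_temp_new();

        tcg_gen_xor_tl(t0, arg0, arg1);
        tcg_gen_xor_tl(t0, t0, res);
        tcg_gen_extract_tl(cpu_ca32, t0, 32, 1);
        tcg_temp_free(t0);
    }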
POWER ISA 3.0 adds CA32 and OV32 status in 64-bit mode. Add the flags
and corresponding defines.
Moreover, CA32 is updated when CA is updated and OV32 is updated when OV
is updated.
Arithmetic instructions:
* Additions and Subtractions:
addic, addic., subfic, addc, subfc, adde, subfe, addme, subfme,
addze, and subfze always update CA and CA32.
=> CA reflects the carry out of bit 0 in 64-bit mode and out of
bit 32 in 32-bit mode.
=> CA32 reflects the carry out of bit 32 independent of the
mode.
=> SO and OV reflect overflow of the 64-bit result in 64-bit
mode and overflow of the low-order 32-bit result in 32-bit
mode.
=> OV32 reflects overflow of the low-order 32-bit result
independent of the mode.
* Multiply Low and Divide:
For mulld, divd, divde, divdu and divdeu: SO, OV, and OV32 bits
reflect overflow of the 64-bit result
For mullw, divw, divwe, divwu and divweu: SO, OV, and OV32 bits
reflect overflow of the 32-bit result
* Negate with OE=1 (nego)
For 64-bit mode if the register RA contains
0x8000_0000_0000_0000, OV and OV32 are set to 1.
For 32-bit mode if the register RA contains 0x8000_0000, OV and
OV32 are set to 1.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
SDR_64_HTABORG, which indicates the bits of the SDR1 register to use for
the base of a 64-bit machine's hashed page table (HPT), isn't correct. It
includes the top 46 bits of the register, but in fact the top 4 bits must
be zero (according to the ISA v2.07). No actual implementation has
supported close to 2^60 bytes of physical address space, so it's kind of
irrelevant, but we might as well correct this.
In addition, although we checked for bad size values in SDR1, we never
reported an error if entirely invalid bits were set there. Add this check
to ppc_store_sdr1().
Reported-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
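Concretely, the corrected mask keeps HTABORG to bits 4:45 of SDR1, i.e. a 256KiB-aligned real address below 2^60; the value below is illustrative but consistent with the description above:

    /* HTABORG: bits 4:45 of SDR1 (IBM numbering): the top 4 bits and the
     * low 18 bits (minimum 256KiB alignment) are excluded. */
    #define SDR_64_HTABORG   0x0FFFFFFFFFFC0000ULL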
The function ppc_hash64_set_sdr1 basically checked the htabsize and set an
error if it was too big, otherwise it just stored the value in SPR_SDR1.
Given that the only function which calls ppc_hash64_set_sdr1() is
ppc_store_sdr1(), why not handle the checking in ppc_store_sdr1() to avoid
the extra function call. Note that ppc_store_sdr1() already stores the
value in SPR_SDR1 anyway, so we were doing it twice.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
[dwg: Remove unnecessary error temporary]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The pseries machine type implements the behaviour of a PAPR compliant
hypervisor, without actually executing such a hypervisor on the virtual
CPU. To do this we need some hooks in the CPU code to make hypervisor
facilities get redirected to the machine instead of emulated internally.
For hypercalls this is managed through the cpu->vhyp field, which points
to a QOM interface with a method implementing the hypercall.
For the hashed page table (HPT) - also a hypervisor resource - we use an
older hack. CPUPPCState has an 'external_htab' field which when non-NULL
indicates that the HPT is stored in qemu memory, rather than within the
guest's address space.
For consistency - and to make some future extensions easier - this merges
the external HPT mechanism into the vhyp mechanism. Methods are added
to vhyp for the basic operations the core hash MMU code needs: map_hptes()
and unmap_hptes() for reading the HPT, store_hpte() for updating it and
hpt_mask() to retrieve its size.
To match this, the pseries machine now sets these vhyp fields in its
existing vhyp class, rather than reaching into the cpu object to set the
external_htab field.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
CPUPPCState includes fields htab_base and htab_mask which store the base
address (GPA) and size (as a mask) of the guest's hashed page table (HPT).
These are set when the SDR1 register is updated.
Keeping these in sync with the SDR1 is actually a little bit fiddly, and
probably not useful for performance, since keeping them expands the size of
CPUPPCState. It also makes some upcoming changes harder to implement.
This patch removes these fields, in favour of calculating them directly
from the SDR1 contents when necessary.
This does make a change to the behaviour of attempting to write a bad value
(invalid HPT size) to the SDR1 with an mtspr instruction. Previously, the
bad value would be stored in SDR1 and could be retrieved with a later
mfspr, but the HPT size as used by the softmmu would be clamped to the
allowed values. Now, writing a bad value is treated as a no-op. An error
message is printed in both new and old versions.
I'm not sure which behaviour, if either, matches real hardware. I don't
think it matters that much, since it's pretty clear that if an OS writes
a bad value to SDR1, it's not going to boot.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
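The replacement accessors can be sketched as follows (simplified, ignoring the external-HPT case):

    static inline hwaddr ppc_hash64_hpt_base(PowerPCCPU *cpu)
    {
        return cpu->env.spr[SPR_SDR1] & SDR_64_HTABORG;
    }

    static inline hwaddr ppc_hash64_hpt_mask(PowerPCCPU *cpu)
    {
        unsigned htabsize = cpu->env.spr[SPR_SDR1] & SDR_64_HTABSIZE;

        /* The smallest HPT is 256KiB (2^18 bytes) and a PTEG is 2^7 bytes,
         * so the PTEG-index hash mask has (htabsize + 18 - 7) bits. */
        return (1ULL << (htabsize + 18 - 7)) - 1;
    }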
Accesses to the hashed page table (HPT) are complicated by the fact that
the HPT could be in one of three places:
1) Within guest memory - when we're emulating a full guest CPU at the
hardware level (e.g. powernv, mac99, g3beige)
2) Within qemu, but outside guest memory - when we're emulating user and
supervisor instructions within TCG, but instead of emulating
the CPU's hypervisor mode, we just emulate a hypervisor's behaviour
(pseries in TCG or KVM-PR)
3) Within the host kernel - a pseries machine using KVM-HV
acceleration. Mostly accesses to the HPT are handled by KVM,
but there are a few cases where qemu needs to access it via a
special fd for the purpose.
In order to batch accesses to the fd in case (3), we use a somewhat awkward
ppc_hash64_start_access() / ppc_hash64_stop_access() pair, which for case
(3) reads / releases several HPTEs from the kernel as a batch (usually a
whole PTEG). For cases (1) & (2) it just returns an address value. The
actual HPTE load helpers then need to interpret the returned token
differently in the 3 cases.
This patch keeps the same basic structure, but simplifies the details.
First start_access() / stop_access() are renamed to map_hptes() and
unmap_hptes() to make their operation more obvious. Second, map_hptes()
now always returns a qemu pointer, which can always be used in the same way
by the load_hpte() helpers. In case (1) it comes from address_space_map()
in case (2) directly from qemu's HPT buffer and in case (3) from a
temporary buffer read from the KVM fd.
While we're at it, make things a bit more consistent in terms of types and
variable names: avoid variables named 'index' (it shadows index(3) which
can lead to confusing results), use 'hwaddr ptex' for HPTE indices and
uint64_t for each of the HPTE words, use ptex throughout the call stack
instead of pte_offset in some places (we still need that at the bottom
layer, but nowhere else).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
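The resulting vhyp hooks have roughly the following shape (abbreviated; exact prototypes may differ):

    typedef struct PPCVirtualHypervisorClass {
        InterfaceClass parent;
        void (*hypercall)(PPCVirtualHypervisor *vhyp, PowerPCCPU *cpu);
        hwaddr (*hpt_mask)(PPCVirtualHypervisor *vhyp);
        const ppc_hash_pte64_t *(*map_hptes)(PPCVirtualHypervisor *vhyp,
                                             hwaddr ptex, int n);
        void (*unmap_hptes)(PPCVirtualHypervisor *vhyp,
                            const ppc_hash_pte64_t *hptes,
                            hwaddr ptex, int n);
        void (*store_hpte)(PPCVirtualHypervisor *vhyp, hwaddr ptex,
                           uint64_t pte0, uint64_t pte1);
    } PPCVirtualHypervisorClass;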
At present the SDR1 register - the base of the system's hashed page table
(HPT) - is represented as an SPR with supervisor read and write permission.
However, on CPUs which have a hypervisor mode, the SDR1 is a hypervisor
only resource. Change the permission checking on the SPR to reflect this.
Now that this is done, we don't need to check for an external HPT executing
mtsdr1: an external HPT only applies when we're emulating the behaviour of
a hypervisor, rather than modelling the CPU's hypervisor mode internally,
so if we're permitted to execute mtsdr1, we don't have an external HPT.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
cpu_ppc_set_papr() sets up various aspects of CPU state for use with PAPR
paravirtualized guests. However, it doesn't set the virtual hypervisor,
so callers must also call cpu_ppc_set_vhyp() so that PAPR hypercalls are
handled properly. This is a bit silly, so fold setting the virtual
hypervisor into cpu_ppc_set_papr().
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
When a 'pseries' guest is running with KVM-HV, the guest's hashed page
table (HPT) is stored within the host kernel, so it is not directly
accessible to qemu. Most of the time, qemu doesn't need to access it:
we're using the hardware MMU, and KVM itself implements the guest
hypercalls for manipulating the HPT.
However, qemu does need access to the in-KVM HPT to implement
get_phys_page_debug() for the benefit of the gdbstub, and maybe for
other debug operations.
To allow this, 7c43bca "target-ppc: Fix page table lookup with kvm
enabled" added kvmppc_hash64_read_pteg() to target/ppc/kvm.c to read
in a batch of HPTEs from the KVM table. Unfortunately, there are a
couple of problems with this:
First, the name of the function implies it always reads a whole PTEG
from the HPT, but in fact in some cases it's used to grab individual
HPTEs (which ends up pulling 8 HPTEs, not aligned to a PTEG from the
kernel).
Second, and more importantly, the code to read the HPTEs from KVM is
simply wrong, in general. The data from the fd that KVM provides is
designed mostly for compact migration rather than this sort of one-off
access, and so needs some decoding for this purpose. The current code
will work in some cases, but if there are invalid HPTEs then it will
not get sane results.
This patch rewrites the HPTE reading function to have a simpler
interface (just read n HPTEs into a caller provided buffer), and to
correctly decode the stream from the kernel.
For consistency we also clean up the similar function for altering
HPTEs within KVM (introduced in c138593 "target-ppc: Update
ppc_hash64_store_hpte to support updating in-kernel htab").
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Removes duplicate code and will be useful for consolidating flags
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Add a gicv3state void pointer to the CPUARMState struct
to store the GICv3CPUState.
For use cases like CPU reset, we need to reset the
GICv3CPUState of the CPU; in such a scenario, this pointer
comes in handy.
Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@cavium.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1487850673-26455-5-git-send-email-vijay.kilari@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20170228' into staging
target-arm queue:
* raspi2: implement RNG module
* raspi2: implement new SD card controller (but don't wire it up)
* sdhci: bugfixes for block transfers
* virt: fix cpu object reference leak
* Add missing fp_access_check() to aarch64 crypto instructions
* cputlb: Don't assume do_unassigned_access() never returns
* virt: Add a user option to disallow ITS instantiation
* i.MX timers: fix reset handling
* ARMv7M NVIC: rewrite to fix broken priority handling and masking
* exynos: Fix proper mapping of CPUs by providing real cluster ID
* exynos: Fix Linux kernel division by zero for PLLs
# gpg: Signature made Tue 28 Feb 2017 12:40:51 GMT
# gpg: using RSA key 0x3C2525ED14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"
# gpg: aka "Peter Maydell <pmaydell@gmail.com>"
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>"
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* remotes/pmaydell/tags/pull-target-arm-20170228: (27 commits)
hw/arm/exynos: Fix proper mapping of CPUs by providing real cluster ID
hw/arm/exynos: Fix Linux kernel division by zero for PLLs
bcm2835_sdhost: add bcm2835 sdhost controller
armv7m: Allow SHCSR writes to change pending and active bits
armv7m: Raise correct kind of UsageFault for attempts to execute ARM code
armv7m: Check exception return consistency
armv7m: Extract "exception taken" code into functions
armv7m: VECTCLRACTIVE and VECTRESET are UNPREDICTABLE
armv7m: Simpler and faster exception start
armv7m: Remove unused armv7m_nvic_acknowledge_irq() return value
armv7m: Escalate exceptions to HardFault if necessary
arm: gic: Remove references to NVIC
armv7m: Fix condition check for taking exceptions
armv7m: Rewrite NVIC to not use any GIC code
armv7m: Implement reading and writing of PRIGROUP
armv7m: Rename nvic_state to NVICState
ARM i.MX timers: fix reset handling
hw/arm/virt: Add a user option to disallow ITS instantiation
cputlb: Don't assume do_unassigned_access() never returns
Add missing fp_access_check() to aarch64 crypto instructions
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
M profile doesn't implement ARM, and the architecturally required
behaviour for attempts to execute with the Thumb bit clear is to
generate a UsageFault with the CFSR INVSTATE bit set. We were
incorrectly implementing this as generating an UNDEFINSTR UsageFault;
fix this.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Implement the exception return consistency checks
described in the v7M pseudocode ExceptionReturn().
Inspired by a patch from Michael Davidsaver's series, but
this is a reimplementation from scratch based on the
ARM ARM pseudocode.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Extract the code from the tail end of arm_v7m_do_interrupt() which
enters the exception handler into a pair of utility functions
v7m_exception_taken() and v7m_push_stack(), which correspond roughly
to the pseudocode PushStack() and ExceptionTaken().
This also requires us to move the arm_v7m_load_vector() utility
routine up so we can call it.
Handling illegal exception returns has some cases where we want to
take a UsageFault either on an existing stack frame or with a new
stack frame but with a specific LR value, so we want to be able to
call these without having to go via arm_v7m_cpu_do_interrupt().
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
All the places in armv7m_cpu_do_interrupt() which pend an
exception in the NVIC are doing so for synchronous
exceptions. We know that we will always take some
exception in this case, so we can just acknowledge it
immediately, rather than returning and then immediately
being called again because the NVIC has raised its outbound
IRQ line.
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
[PMM: tweaked commit message; added DEBUG to the set of
exceptions we handle immediately, since it is synchronous
when it results from the BKPT instruction]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Having armv7m_nvic_acknowledge_irq() return the new value of
env->v7m.exception and its one caller assign the return value
back to env->v7m.exception is pointless. Just make the return
type void instead.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
The v7M exception architecture requires that if a synchronous
exception cannot be taken immediately (because it is disabled
or at too low a priority) then it should be escalated to
HardFault (and the HardFault exception is then taken).
Implement this escalation logic.
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
[PMM: extracted from another patch]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
The M profile condition for when we can take a pending exception or
interrupt is not the same as that for A/R profile. The code
originally copied from the A/R profile version of the
cpu_exec_interrupt function only worked by chance for the
very simple case of exceptions being masked by PRIMASK.
Replace it with a call to a function in the NVIC code that
correctly compares the priority of the pending exception
against the current execution priority of the CPU.
[Michael Davidsaver's patchset had a patch to do something
similar but the implementation ended up being a rewrite.]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
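The comparison itself is simple; paraphrased below with illustrative function and helper names, not the exact NVIC API:

    /* An exception can preempt only if its (group) priority is numerically
     * lower, i.e. more urgent, than the current execution priority. */
    bool nvic_can_take_pending_exception(NVICState *s)
    {
        return nvic_pending_prio(s) < nvic_exec_prio(s);
    }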
The aarch64 crypto instructions for AES and SHA are missing the
check for if the FPU is enabled.
Signed-off-by: Nick Reilly <nreilly@blackberry.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
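The fix pattern, with the decode body abbreviated:

    static void disas_crypto_aes(DisasContext *s, uint32_t insn)
    {
        /* ... decode rd/rn/opcode fields ... */

        if (!fp_access_check(s)) {
            return;   /* FP/SIMD disabled: exception already generated */
        }

        /* ... emit the AES helper call ... */
    }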
Note that x86_64 has only _rt signal handlers. This implementation
attempts to share code with the x86_32 implementation.
CC: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Allan Wirth <awirth@akamai.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20170226165345.8757-1-bobby.prani@gmail.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
This keeps the same results on type=static expansion, but makes
type=full expansion return every single QOM property on the CPU
object that has a different value from the "base" CPU model,
plus all the CPU feature flag properties.
Cc: Jiri Denemark <jdenemar@redhat.com>
Message-Id: <20170222190029.17243-4-ehabkost@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Implement query-cpu-model-expansion for target-i386.
This should meet all the requirements while being simple. In the
case of static expansion, it will use the new "base" CPU model,
and in the case of full expansion, it will keep the original CPU
model name+props, and append extra properties.
A future follow-up should improve the implementation of
type=full, so that it returns more detailed data, including every
writable QOM property in the CPU object.
Cc: libvir-list@redhat.com
Cc: Jiri Denemark <jdenemar@redhat.com>
Message-Id: <20170222190029.17243-3-ehabkost@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
The query-cpu-model-expansion QMP command needs at least one static
model, to allow the "static" expansion mode to be implemented.
Instead of defining static versions of every CPU model, define a
"base" CPU model that has absolutely no feature flag enabled.
Despite having no CPUID data set at all, "-cpu base" is even a
functional CPU:
* It can boot a Slackware Linux 1.01 image with a Linux 0.99.12
kernel[1].
* It is even possible to boot[2] a modern Fedora x86_64 guest by
manually enabling the following CPU features:
-cpu base,+lm,+msr,+pae,+fpu,+cx8,+cmov,+sse,+sse2,+fxsr
[1] http://www.qemu-advent-calendar.org/2014/#day-1
[2] This is what can be seen in the guest:
[root@localhost ~]# cat /proc/cpuinfo
processor : 0
vendor_id : unknown
cpu family : 0
model : 0
model name : 00/00
stepping : 0
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 1
wp : yes
flags : fpu msr pae cx8 cmov fxsr sse sse2 lm nopl
bugs :
bogomips : 5832.70
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
[root@localhost ~]# x86info -v -a
x86info v1.30. Dave Jones 2001-2011
Feedback to <davej@redhat.com>.
No TSC, MHz calculation cannot be performed.
Unknown vendor (0)
MP Table:
Family: 0 Model: 0 Stepping: 0
CPU Model (x86info's best guess):
eax in: 0x00000000, eax = 00000001 ebx = 00000000 ecx = 00000000 edx = 00000000
eax in: 0x00000001, eax = 00000000 ebx = 00000800 ecx = 00000000 edx = 07008161
eax in: 0x80000000, eax = 80000001 ebx = 00000000 ecx = 00000000 edx = 00000000
eax in: 0x80000001, eax = 00000000 ebx = 00000000 ecx = 00000000 edx = 20000000
Feature flags:
fpu Onboard FPU
msr Model-Specific Registers
pae Physical Address Extensions
cx8 CMPXCHG8 instruction
cmov CMOV instruction
fxsr FXSAVE and FXRSTOR instructions
sse SSE support
sse2 SSE2 support
Long NOPs supported: yes
Address sizes : 0 bits physical, 0 bits virtual
0MHz processor (estimate).
running at an estimated 0MHz
[root@localhost ~]#
Message-Id: <20170222190029.17243-2-ehabkost@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Host CPUID info is used by the "max" CPU model only in KVM mode.
Move the initialization of CPUID data for "max" from class_init
to instance_init, and don't set CPUClass::cpu_def for "max".
Message-Id: <20170222183919.11928-4-ehabkost@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Instead of reporting host CPUID data on "max", use the qemu64 CPU
model as reference to initialize CPUID
vendor/family/model/stepping/model-id.
Message-Id: <20170222183919.11928-3-ehabkost@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Rename the existing "host" CPU model to "max", and set it to
kvm_enabled=false. The new "max" CPU model will be able to enable
all features supported by TCG out of the box, because its logic
is based on x86_cpu_get_supported_feature_word(), which already
works with TCG.
A new KVM-specific "host" class was added that simply inherits
everything from "max" except the 'ordering' and 'description'
fields.
Message-Id: <20170222183919.11928-2-ehabkost@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
CPU runnability checks and CPU model expansion have slightly
different requirements. Document the steps involved in loading a
CPU model and realizing a CPU, so their requirements and purpose
are clearly defined.
This patch doesn't change any implementation. It just adds
comments, renames the x86_cpu_load_features() function for clarity
(so it won't be confused with x86_cpu_load_def()), and moves
x86_cpu_filter_features() closer to it.
Message-Id: <20170116211124.29245-2-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Rename the field and add a small comment to make its purpose
clearer.
Message-Id: <20170119210449.11991-4-ehabkost@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Instead of using kvm_enabled to order the "-cpu help" list, use a
new "ordering" field for that.
Message-Id: <20170119210449.11991-3-ehabkost@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
The class is now safe because the assert(kvm_enabled()) line was
removed by commit e435601058.
Message-Id: <20170119210449.11991-2-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Merge remote-tracking branch 'remotes/cohuck/tags/s390x-20170224' into staging
A selection of s390x patches:
- cleanups, fixes and improvements
- program check loop detection (useful with the corresponding kernel
patch)
- wire up virtio-crypto for ccw
- and finally support many virtqueues for virtio-ccw
# gpg: Signature made Fri 24 Feb 2017 09:19:19 GMT
# gpg: using RSA key 0xDECF6B93C6F02FAF
# gpg: Good signature from "Cornelia Huck <huckc@linux.vnet.ibm.com>"
# gpg: aka "Cornelia Huck <cornelia.huck@de.ibm.com>"
# Primary key fingerprint: C3D0 D66D C362 4FF6 A8C0 18CE DECF 6B93 C6F0 2FAF
* remotes/cohuck/tags/s390x-20170224:
s390x/css: handle format-0 TIC CCW correctly
s390x/arch_dump: pass cpuid into notes sections
s390x/arch_dump: use proper note name and note size
virtio-ccw: support VIRTIO_QUEUE_MAX virtqueues
s390x: bump ADAPTER_ROUTES_MAX_GSI
virtio-ccw: check flic->adapter_routes_max_batch
s390x: add property adapter_routes_max_batch
virtio-ccw: Check the number of vqs in CCW_CMD_SET_IND
virtio-ccw: add virtio-crypto-ccw device
virtio-ccw: handle virtio 1 only devices
s390x/flic: fail migration on source already
s390x/kvm: detect some program check loops
s390x/s390-virtio: get rid of DPRINTF
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/armbru/tags/pull-util-2017-02-23' into staging
option cutils: Fix and clean up number conversions
# gpg: Signature made Thu 23 Feb 2017 19:41:17 GMT
# gpg: using RSA key 0x3870B400EB918653
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>"
# gpg: aka "Markus Armbruster <armbru@pond.sub.org>"
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653
* remotes/armbru/tags/pull-util-2017-02-23: (24 commits)
option: Fix checking of sizes for overflow and trailing crap
util/cutils: Change qemu_strtosz*() from int64_t to uint64_t
util/cutils: Return qemu_strtosz*() error and value separately
util/cutils: Let qemu_strtosz*() optionally reject trailing crap
qemu-img: Wrap cvtnum() around qemu_strtosz()
test-cutils: Drop suffix from test_qemu_strtosz_simple()
test-cutils: Use qemu_strtosz() more often
util/cutils: Drop QEMU_STRTOSZ_DEFSUFFIX_* macros
util/cutils: New qemu_strtosz()
util/cutils: Rename qemu_strtosz() to qemu_strtosz_MiB()
util/cutils: New qemu_strtosz_metric()
test-cutils: Cover qemu_strtosz() around range limits
test-cutils: Cover qemu_strtosz() with trailing crap
test-cutils: Cover qemu_strtosz() invalid input
test-cutils: Add missing qemu_strtosz()... endptr checks
option: Fix to reject invalid and overflowing numbers
util/cutils: Clean up control flow around qemu_strtol() a bit
util/cutils: Clean up variable names around qemu_strtol()
util/cutils: Rename qemu_strtoll(), qemu_strtoull()
util/cutils: Rewrite documentation of qemu_strtol() & friends
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This enables the multi-threaded system emulation by default for ARMv7
and ARMv8 guests using the x86_64 TCG backend. This is because on the
guest side:
- The ARM translate.c/translate-a64.c have been converted to
- use MTTCG safe atomic primitives
- emit the appropriate barrier ops
- The ARM machine has been updated to
- hold the BQL when modifying shared cross-vCPU state
- defer powerctl changes to async safe work
All the host backends support the barrier and atomic primitives but
need to provide same-or-better support for normal load/store
operations.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Acked-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Pranith Kumar <bobby.prani@gmail.com>
Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
Previously flushes on other vCPUs would only get serviced when they
exited their TranslationBlocks. While this isn't overly problematic it
violates the semantics of TLB flush from the point of view of the source
vCPU.
To solve this we call the cputlb *_all_cpus_synced() functions to do
the flushes which ensures all flushes are completed by the time the
vCPU next schedules its own work. As the TLB instructions are modelled
as CP writes the TB ends at this point meaning cpu->exit_request will
be checked before the next instruction is executed.
Deferring the work until the architectural sync point is a possible
future optimisation.
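As a rough sketch of the resulting pattern (the handler name below is
hypothetical, not the exact code in the tree), a TLBI-by-VA register write
ends up going through the synced cputlb API so the remote flushes are
complete before this vCPU runs its next TB:

    static void tlbimva_is_write(CPUState *other_cs, uint64_t value)
    {
        target_ulong pageaddr = value & TARGET_PAGE_MASK;

        /* flush this page on all vCPUs; completion is guaranteed before
         * the source vCPU gets to schedule its own next work */
        tlb_flush_page_all_cpus_synced(other_cs, pageaddr);
    }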
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
The WFE and YIELD instructions are really only hints and in TCG's case
they were useful to move the scheduling on from one vCPU to the next. In
the parallel context (MTTCG) this just causes an unnecessary cpu_exit
and contention of the BQL.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
When switching a new vCPU on we want to complete a bunch of the setup
work before we start scheduling the vCPU thread. To do this cleanly we
defer vCPU setup to async work which will run the vCPUs execution
context as the thread is woken up. The scheduling of the work will kick
the vCPU awake.
This avoids potential races in MTTCG system emulation.
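A minimal sketch of the pattern (the worker name is hypothetical): the
caller queues the setup as async work, which also kicks the vCPU awake,
and the work then runs in the target vCPU's own context:

    static void cpu_powerup_work(CPUState *cs, run_on_cpu_data data)
    {
        /* runs on the target vCPU's thread once it has been woken up:
         * set entry point, registers, power state, ... */
    }

    /* caller side */
    async_run_on_cpu(target_cs, cpu_powerup_work, RUN_ON_CPU_NULL);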
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
While the vargs approach was flexible, the original MTTCG ended up
having to munge the bits into a bitmap so the data could be used in
deferred work helpers. Instead of hiding that in cputlb we push the
change to the API to make it take a bitmap of MMU indexes instead.
For ARM some of the resulting flushes end up being quite long so to aid
readability I've tended to move the index shifting to a new line so
all the bits being or-ed together line up nicely, for example:
tlb_flush_page_by_mmuidx(other_cs, pageaddr,
(1 << ARMMMUIdx_S1SE1) |
(1 << ARMMMUIdx_S1SE0));
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
[AT: SPARC parts only]
Reviewed-by: Artyom Tarasenko <atar4qemu@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
[PM: ARM parts only]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
This finally allows TCG to benefit from the iothread introduction: Drop
the global mutex while running pure TCG CPU code. Reacquire the lock
when entering MMIO or PIO emulation, or when leaving the TCG loop.
We have to revert a few optimizations for the current TCG threading
model, namely kicking the TCG thread in qemu_mutex_lock_iothread and not
kicking it in qemu_cpu_kick. We also need to disable RAM block
reordering until we have a more efficient locking mechanism at hand.
Still, a Linux x86 UP guest and my Musicpal ARM model boot fine here.
These numbers demonstrate where we gain something:
20338 jan 20 0 331m 75m 6904 R 99 0.9 0:50.95 qemu-system-arm
20337 jan 20 0 331m 75m 6904 S 20 0.9 0:26.50 qemu-system-arm
The guest CPU was fully loaded, but the iothread could still run mostly
independently on a second core. Without the patch we don't get beyond
32206 jan 20 0 330m 73m 7036 R 82 0.9 1:06.00 qemu-system-arm
32204 jan 20 0 330m 73m 7036 S 21 0.9 0:17.03 qemu-system-arm
We don't benefit significantly, though, when the guest is not fully
loading a host CPU.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-Id: <1439220437-23957-10-git-send-email-fred.konrad@greensocs.com>
[FK: Rebase, fix qemu_devices_reset deadlock, rm address_space_* mutex]
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
[EGC: fixed iothread lock for cpu-exec IRQ handling]
Signed-off-by: Emilio G. Cota <cota@braap.org>
[AJB: -smp single-threaded fix, clean commit msg, BQL fixes]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
[PM: target-arm changes]
Acked-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-2.9-20170222' into staging
ppc patch queue for 2017-02-22
This pull request has:
* Yet more POWER9 instruction implementations
* Some extensions to the softfloat code which are necessary for
some of those instructions
* Some preliminary patches in preparation for POWER9 softmmu
implementation
* Igor Mammedov's cleanups to unify hotplug cpu handling across
architectures
* Assorted bugfixes
The softfloat and cpu hotplug changes aren't entirely ppc specific (in
fact the hotplug stuff contains some pc specific patches). However
they're included here because ppc is one of the main beneficiaries,
and the series depends on some ppc specific patches.
# gpg: Signature made Wed 22 Feb 2017 06:29:47 GMT
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.9-20170222: (43 commits)
hw/ppc/ppc405_uc.c: Avoid integer overflows
hw/ppc/spapr: Check for valid page size when hot plugging memory
target-ppc: fix Book-E TLB matching
hw/net/spapr_llan: 6 byte mac address device tree entry
machine: replace query_hotpluggable_cpus() callback with has_hotpluggable_cpus flag
machine: unify [pc_|spapr_]query_hotpluggable_cpus() callbacks
spapr: reuse machine->possible_cpus instead of cores[]
change CPUArchId.cpu type to Object*
pc: pass apic_id to pc_find_cpu_slot() directly so lookup could be done without CPU object
pc: calculate topology only once when possible_cpus is initialised
pc: move pcms->possible_cpus init out of pc_cpus_init()
machine: move possible_cpus to MachineState
hw/pci-host/prep: Do not use hw_error() in realize function
target/ppc/POWER9: Direct all instr and data storage interrupts to the hypv
target/ppc/POWER9: Adapt LPCR handling for POWER9
target/ppc/POWER9: Add ISAv3.00 MMU definition
target/ppc: Fix LPCR DPFD mask define
target-ppc: Add xscvqpudz and xscvqpuwz instructions
target-ppc: Implement round to odd variants of quad FP instructions
softfloat: Add float128_to_uint32_round_to_zero()
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
we need to pass the cpuid into the pid field of the notes
section, otherwise the notes for different CPUs all have 0:
e.g. objdump -h shows:
old:
5 .reg-s390-prefix/0 00000004 0000000000000000 0000000000000000
6 .reg-s390-prefix 00000004 0000000000000000 0000000000000000
21 .reg-s390-prefix/0 00000004 0000000000000000 0000000000000000
new:
5 .reg-s390-prefix/1 00000004 0000000000000000 0000000000000000
6 .reg-s390-prefix 00000004 0000000000000000 0000000000000000
21 .reg-s390-prefix/2 00000004 0000000000000000 0000000000000000
Reported-by: Philipp Rudo <prudo@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
In binutils/libbfd (bfd/elf.c) it is enforced that all s390
specific ELF notes like e.g. NT_S390_PREFIX or NT_S390_CTRS
have "LINUX" specified as note name and that the namesz is
6. Otherwise the notes are ignored.
QEMU currently uses "CORE" for these notes. Up to now this has
not been a real problem because the dump analysis tool "crash"
does handle that. But it will break all programs that use libbfd
for processing ELF notes.
So fix this and use "LINUX" for all s390 specific notes to comply
with libbfd. Also set the correct namesz.
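For illustration, a self-contained sketch of a conforming note header
(hypothetical struct, not QEMU's actual note layout): the name is "LINUX"
and n_namesz counts the trailing NUL, i.e. 6.

    #include <elf.h>
    #include <stdint.h>
    #include <string.h>

    struct s390_note {
        Elf64_Nhdr hdr;
        char name[8];            /* "LINUX\0" padded to 4-byte alignment */
        /* n_descsz bytes of payload follow */
    };

    static void fill_note_header(struct s390_note *n, uint32_t type,
                                 uint32_t desc_size)
    {
        n->hdr.n_namesz = 6;     /* strlen("LINUX") + 1 */
        n->hdr.n_descsz = desc_size;
        n->hdr.n_type = type;    /* e.g. NT_S390_PREFIX */
        memcpy(n->name, "LINUX", 6);
    }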
Reported-by: Philipp Rudo <prudo@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Sometimes (e.g. early boot) a guest is broken in such ways that it loops
100% delivering operation exceptions (illegal operation) but the pgm new
PSW is not set properly. This will result in code being read from
address zero, which usually contains another illegal op. Let's detect
this case and put the guest in crashed state. Instead of only detecting
this for address zero apply a heuristic that will work for any program
check new psw so that it will also reach the crashed state if you
provide some random elf file to the -kernel option.
We do not want guest problem state to be able to trigger a guest panic,
e.g. by faulting on an address that is the same as the program check
new PSW, so we check for the problem state bit being off.
With this we
a: get rid of the CPU consumption of such broken guests
b: keep the program old PSW. This allows finding out the original illegal
operation - making debugging such early boot issues much easier than
with single stepping
This relies on the kernel using a similar heuristic and passing such
operation exceptions to user space.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
This will permit its use in parse_option_size().
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com> (maintainer:X86)
Cc: Kevin Wolf <kwolf@redhat.com> (supporter:Block layer core)
Cc: Max Reitz <mreitz@redhat.com> (supporter:Block layer core)
Cc: qemu-block@nongnu.org (open list:Block layer core)
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <1487708048-2131-24-git-send-email-armbru@redhat.com>
This makes qemu_strtosz(), qemu_strtosz_mebi() and
qemu_strtosz_metric() similar to qemu_strtoi64(), except negative
values are rejected.
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com> (maintainer:X86)
Cc: Kevin Wolf <kwolf@redhat.com> (supporter:Block layer core)
Cc: Max Reitz <mreitz@redhat.com> (supporter:Block layer core)
Cc: qemu-block@nongnu.org (open list:Block layer core)
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <1487708048-2131-23-git-send-email-armbru@redhat.com>
Change the qemu_strtosz() & friends to return -EINVAL when @endptr is
null and the conversion doesn't consume the string completely.
Matches how qemu_strtol() & friends work.
Only test_qemu_strtosz_simple() passes a null @endptr. No functional
change there, because its conversion consumes the string.
Simplify callers that use @endptr only to fail when it doesn't point
to '\0' to pass a null @endptr instead.
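A rough usage sketch of the intended semantics (treating the post-series
signature - int error code, value stored through a uint64_t pointer - as
an assumption):

    uint64_t val;
    const char *end;

    qemu_strtosz("1.5M", NULL, &val);   /* 0, val == 1572864 (1.5 MiB)        */
    qemu_strtosz("1.5Mx", NULL, &val);  /* -EINVAL: trailing crap, no @endptr */
    qemu_strtosz("1.5Mx", &end, &val);  /* 0, *end == 'x', caller checks it   */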
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com> (maintainer:X86)
Cc: Kevin Wolf <kwolf@redhat.com> (supporter:Block layer core)
Cc: Max Reitz <mreitz@redhat.com> (supporter:Block layer core)
Cc: qemu-block@nongnu.org (open list:Block layer core)
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <1487708048-2131-22-git-send-email-armbru@redhat.com>
To parse numbers with metric suffixes, we use
qemu_strtosz_suffix_unit(nptr, &eptr, QEMU_STRTOSZ_DEFSUFFIX_B, 1000)
Capture this in a new function for legibility:
qemu_strtosz_metric(nptr, &eptr)
Replace test_qemu_strtosz_suffix_unit() by test_qemu_strtosz_metric().
Rename qemu_strtosz_suffix_unit() to do_strtosz() and give it internal
linkage.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <1487708048-2131-15-git-send-email-armbru@redhat.com>
Xtensa core may have a number of RAM and ROM areas configured. Record
their size and location from the core configuration overlay and
instantiate them as RAM regions in the SIM machine.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Merge remote-tracking branch 'remotes/yongbok/tags/mips-20170222' into staging
MIPS patches 2017-02-22
Changes:
* Add MIPS Boston board support
# gpg: Signature made Wed 22 Feb 2017 00:08:00 GMT
# gpg: using RSA key 0x2238EB86D5F797C2
# gpg: Good signature from "Yongbok Kim <yongbok.kim@imgtec.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 8600 4CF5 3415 A5D9 4CFA 2B5C 2238 EB86 D5F7 97C2
* remotes/yongbok/tags/mips-20170222:
hw/mips: MIPS Boston board support
hw: xilinx-pcie: Add support for Xilinx AXI PCIe Controller
loader: Support Flattened Image Trees (FIT images)
dtc: Update requirement to v1.4.2
target-mips: Provide function to test if a CPU supports an ISA
hw/mips_gic: Update pin state on mask changes
hw/mips_gictimer: provide API for retrieving frequency
hw/mips_cmgcr: allow GCR base to be moved
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
On POWER, the valid page sizes that the guest can use are bound
to the CPU and not to the memory region. QEMU already has some
fancy logic to find out the right maximum page size to tell
it to the guest during boot (see getrampagesize() in the file
target/ppc/kvm.c for more information).
However, once we're booted and the guest is using huge pages
already, it is currently still possible to hot-plug memory regions
that do not support huge pages - which of course does not work
on POWER, since the guest thinks that it is possible to use huge
pages everywhere. The KVM_RUN ioctl will then abort with -EFAULT,
QEMU spills out a not very helpful error message together with
a register dump and the user is annoyed that the VM unexpectedly
died.
To avoid this situation, we should check the page size of hot-plugged
DIMMs to see whether it is possible to use it in the current VM.
If it does not fit, we can print out a better error message and
refuse to add it, so that the VM does not die unexpectedly and the
user has a second chance to plug a DIMM with a matching memory
backend instead.
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1419466
Signed-off-by: Thomas Huth <thuth@redhat.com>
[dwg: Fix a build error on 32-bit builds with KVM]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The Book-E TLB matching process should bail out early when a TLB
entry matches, but the access permissions are wrong. The CPU
will then raise a DSI error instead of a Data TLB error, as
described for TLB matching in Freescale and IBM documents.
Signed-off-by: Alex Zuepke <azu@sysgo.de>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The vpm0 bit was removed from the LPCR in POWER9; this bit controlled
whether ISI and DSI interrupts were directed to the hypervisor or the
partition. These interrupts now go to the hypervisor regardless, thus
it is no longer necessary to check the vpm0 bit in the LPCR.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The logical partitioning control register controls a thread's operation
based on the partition it is currently executing in. Add new definitions and
update the mask used when writing to the LPCR based on the POWER9 spec.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
POWER9 processors implement the mmu as defined in version 3.00 of the ISA.
Add a definition for this mmu model and set the POWER9 cpu model to use
this mmu model.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The DPFD field in the LPCR is 3 bits wide. This has always been defined
as 0x3 << shift which indicates a 2 bit field, which is incorrect.
Correct this.
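Illustrative only (the shift below is a placeholder, not the real LPCR
layout): a 3-bit field needs a 3-bit mask, so the 0x3-based define silently
dropped the top bit.

    #define LPCR_DPFD_SHIFT  52                           /* placeholder value */
    #define LPCR_DPFD_OLD    (0x3ULL << LPCR_DPFD_SHIFT)  /* 2-bit mask: wrong */
    #define LPCR_DPFD        (0x7ULL << LPCR_DPFD_SHIFT)  /* full 3-bit field  */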
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xscvqpudz: VSX Scalar truncate & Convert Quad-Precision format to
Unsigned Doubleword format
xscvqpuwz: VSX Scalar truncate & Convert Quad-Precision format to
Unsigned Word format
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xsaddqpo: VSX Scalar Add Quad-Precision using round to Odd
xsmulqpo: VSX Scalar Multiply Quad-Precision using round to Odd
xsdivqpo: VSX Scalar Divide Quad-Precision using round to Odd
xscvqpdpo: VSX Scalar round & Convert Quad-Precision format to
Double-Precision format using round to Odd
xssqrtqpo: VSX Scalar Square Root Quad-Precision using round to Odd
xssubqpo: VSX Scalar Subtract Quad-Precision using round to Odd
In addition, fix the invalid bitmask in the instruction encoding
of xssqrtqp[o].
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
CC: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Use the available wait instruction implementation.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
slbsync: SLB Synchronize
The instruction provides an ordering function for the effects of all
slbieg instructions executed by the thread executing the slbsync
instruction.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
slbieg: SLB Invalidate Entry Global
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
stwat: Store Word Atomic
stdat: Store Doubleword Atomic
The instruction includes a function code (5 bits) which gives detail
on the operation to be performed. The patch implements five such
functions.
Signed-off-by: Balamuruhan S <bala24@linux.vnet.ibm.com>
Signed-off-by: Harish S <harisrir@linux.vnet.ibm.com>
Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
[ implement stdat, use macro and combine both implementation ]
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
lwat: Load Word Atomic
ldat: Load Doubleword Atomic
The instruction includes a function code (5 bits) which gives detail
on the operation to be performed. The patch implements five such
functions.
Signed-off-by: Balamuruhan S <bala24@linux.vnet.ibm.com>
Signed-off-by: Harish S <harisrir@linux.vnet.ibm.com>
Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
[ combine both lwat/ldat implementation using macro ]
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Provide a new cpu_supports_isa function which allows callers to
determine whether a CPU supports one of the ISA_ flags, by testing
whether the associated struct mips_def_t sets the ISA flags in its
insn_flags field.
An example use of this is to allow boards which generate bootloader code
to determine the properties of the CPU that will be used, for example
whether the CPU is 64 bit or which architecture revision it implements.
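A minimal sketch of the idea (types and signature are assumptions, not the
exact interface added by the patch):

    #include <stdbool.h>
    #include <stdint.h>

    struct mips_def { uint64_t insn_flags; };     /* stand-in for mips_def_t */

    static bool cpu_supports_isa(const struct mips_def *def, uint64_t isa_flag)
    {
        /* e.g. pass an ISA_* flag to ask "does this CPU implement that ISA?" */
        return (def->insn_flags & isa_flag) != 0;
    }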
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: Leon Alrae <leon.alrae@imgtec.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
When running certain HMP commands ("info registers", "info cpustats",
"info tlb", "nmi", "memsave" or dumping virtual memory) with the "none"
machine, QEMU crashes with a segmentation fault. This happens because the
"none" machine does not have any CPUs by default, but these HMP commands
did not check for a valid CPU pointer yet. Add such checks now, so we get
an error message about the missing CPU instead.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1484309555-1935-1-git-send-email-thuth@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Commit 2afbdf8 ("target-i386: exception handling for memory helpers",
2015-09-15) changed tlb_fill's cpu_restore_state+raise_exception_err
to raise_exception_err_ra. After this change, the cpu_restore_state
and raise_exception_err's cpu_loop_exit are merged into
raise_exception_err_ra's cpu_loop_exit_restore.
This actually fixed some bugs, but when SVM is enabled there is a
second path from raise_exception_err_ra to cpu_loop_exit. This is
the VMEXIT path, and now cpu_vmexit is called without a
cpu_restore_state before.
The fix is to pass the retaddr to cpu_vmexit (via
cpu_svm_check_intercept_param). All helpers can now use GETPC() to pass
the correct retaddr, too.
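A rough sketch of the resulting pattern (the helper name and the exact
argument order are assumptions): the helper captures its return address
with GETPC() and hands it down, so a possible VMEXIT can restore CPU state
from the right point.

    void helper_example_intercept(CPUX86State *env, uint32_t type, uint64_t param)
    {
        uintptr_t ra = GETPC();   /* host return address into the TB */

        /* may take the VMEXIT path; state can be restored from ra */
        cpu_svm_check_intercept_param(env, type, param, ra);
    }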
Cc: qemu-stable@nongnu.org
Fixes: 2afbdf8480
Reported-by: Alexander Boettcher <alexander.boettcher@genode-labs.com>
Tested-by: Alexander Boettcher <alexander.boettcher@genode-labs.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
it's not very convenient to use the crash-information property interface,
so provide a CPU class callback to get the guest crash information, and pass
that information in the event
Signed-off-by: Anton Nefedov <anton.nefedov@virtuozzo.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
Message-Id: <1487053524-18674-3-git-send-email-den@openvz.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Windows reports BSOD parameters through Hyper-V crash MSRs. This
information is very useful for initial crash analysis and thus
it would be nice to have a way to fetch it.
Signed-off-by: Anton Nefedov <anton.nefedov@virtuozzo.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
Message-Id: <1487053524-18674-2-git-send-email-den@openvz.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The HW does not special-case r0, but the ABI specifies that r0 should
contain 0. If we expose this fact to the optimizer, we can simplify
a lot of the generated code. We must of course verify that r0==0, but
that is trivial to do with a TB flag.
Signed-off-by: Richard Henderson <rth@twiddle.net>
The NPC SPR is really only supposed to be used for FPGA debugging.
It contains the same contents as PC, unless one plays games. Follow
the or1ksim implementation in flushing delayed branch state when it
is changed.
The PPC SPR need not be updated every instruction, merely when we
exit the TB or attempt to read its contents.
Signed-off-by: Richard Henderson <rth@twiddle.net>
This allows the tcg optimizer to see, and fold, all of the
constants involved in a GOT base register load sequence.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Note that the specification for lf.madd.s is confused. It's
the only mention of supposed FPMADDHI/FPMADDLO special registers.
On the other hand, or1ksim implements a somewhat normal non-fused
multiply and add. Mirror that.
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Significantly simplifies the implementation of the use of MAC.
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Not documented as disabled for user mode.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This avoids having to keep merging and extracting the flag from SR.
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Decoding the opcodes in the right order reduces the code by 100+ lines.
Also, it happens to put the opcodes in the same order as Chapter 17.
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Fix incorrect overflow calculation. Move overflow exception check
to a helper function, to eliminate inline branches. Remove some
incorrect special casing of R0. Implement multiply inline.
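For reference, a self-contained sketch of the usual branch-free signed-add
overflow test such a helper can rely on (illustrative, not the patch's
exact code):

    #include <stdbool.h>
    #include <stdint.h>

    static inline bool add32_overflowed(int32_t a, int32_t b)
    {
        /* compute the wrapped result without undefined behaviour, then
         * check whether both operands disagree in sign with it */
        int32_t res = (int32_t)((uint32_t)a + (uint32_t)b);
        return (((res ^ a) & (res ^ b)) & INT32_MIN) != 0;
    }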
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The architecture manual is consistent in using "I" for signed
fields and "K" for unsigned fields. Mirror that.
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Avoids warnings from unused variables etc.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
I am working on testing instruction emulation patches for the linux
kernel. During testing I found these 2 issues:
- sets DSX (delay slot exception) but never clears it
- EEAR for illegal insns should point to the bad exception (as per
openrisc spec) but it's not
This patch fixes these two issues by clearing the DSX flag when not in a
delay slot and by setting EEAR to exception PC when handling illegal
instruction exceptions.
After this patch the openrisc kernel with latest patches boots great on
qemu and instruction emulation works.
Cc: qemu-trivial@nongnu.org
Cc: openrisc@lists.librecores.org
Signed-off-by: Stafford Horne <shorne@gmail.com>
Message-Id: <20170113220028.29687-1-shorne@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The member VMStateField.start is used for two things: partial data
migration for VBUFFER data (basically providing migration for a
sub-buffer) and locating the next element in QTAILQ.
The implementation of the VBUFFER feature is broken when VMSTATE_ALLOC
is used. This however goes unnoticed because actually partial migration
for VBUFFER is not used at all.
Let's consolidate the usage of VMStateField.start by removing support
for partial migration for VBUFFER.
Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Message-Id: <20170203175217.45562-1-pasic@linux.vnet.ibm.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
This patch contains several fixes to enable vPMU under TCG mode. It
first removes the checking of kvm_enabled() while unsetting
ARM_FEATURE_PMU. With it, the .pmu option can be used to turn on/off vPMU
under TCG mode. Secondly, the PMU node of the DT table is now created under TCG.
The last fix is to disable the masking of the PMUver field of ID_AA64DFR0_EL1.
Signed-off-by: Wei Huang <wei@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1486504171-26807-5-git-send-email-wei@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch adds access support for PMINTENSET_EL1.
Signed-off-by: Wei Huang <wei@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1486504171-26807-4-git-send-email-wei@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In order to support Linux perf, which uses the PMXEVTYPER register,
this patch adds read/write access support for PMXEVTYPER. The access
is CONSTRAINED UNPREDICTABLE when PMSELR is not 0x1f. Additionally
this patch adds support for PMXEVTYPER_EL0.
Signed-off-by: Wei Huang <wei@redhat.com>
Message-id: 1486504171-26807-3-git-send-email-wei@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch adds support for AArch64 register PMSELR_EL0. The existing
PMSELR definition is revised accordingly.
Signed-off-by: Wei Huang <wei@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: Moved #ifndef CONFIG_USER_ONLY to cover new regdefs]
Message-id: 1486504171-26807-2-git-send-email-wei@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add support for generating the ISS (Instruction Specific Syndrome)
for Data Abort exceptions taken from AArch32. These syndromes are
used by hypervisors for example to trap and emulate memory accesses.
This is the equivalent for AArch32 guests of the work done for AArch64
guests in commit aaa1f954d4.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
In the ARM ldr/str decode path, rather than directly testing
"insn & (1 << 21)" and "insn & (1 << 24)", abstract these
bits out into wbit and pbit local flags. (We will want to
do more tests against them to determine whether we need to
provide syndrome information.)
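A minimal sketch of the refactor (illustrative, not the exact decoder code):

    /* before: raw tests like (insn & (1 << 24)) scattered through the decoder */
    bool pbit = (insn & (1 << 24)) != 0;   /* P: pre-indexed addressing  */
    bool wbit = (insn & (1 << 21)) != 0;   /* W: base register writeback */
    /* later checks (e.g. whether to supply syndrome info) test pbit/wbit */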
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
In BE32 mode, sub-word size watchpoints can fail to trigger because the
address of the access is adjusted in the opcode helpers before being
compared with the watchpoint registers. This patch reverses the address
adjustment before performing the comparison with the help of a new CPUClass
hook.
This version of the patch augments and tidies up comments a little.
Signed-off-by: Julian Brown <julian@codesourcery.com>
Message-id: caaf64ffc72f6ae183015337b7afdbd4b8989cb6.1484929304.git.julian@codesourcery.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Thumb-1 code has some issues in BE32 mode (as currently implemented). In
short, since bytes are swapped within words at load time for BE32
executables, this also swaps pairs of adjacent Thumb-1 instructions.
This patch un-swaps those pairs of instructions again, both for execution,
and for disassembly. (The previous version of the patch always read four
bytes in arm_read_memory_func and then extracted the proper two bytes,
in a probably misguided attempt to match the behaviour of actual hardware
as described by e.g. the ARM9TDMI TRM, section 3.3 "Endian effects for
instruction fetches". It's less complicated to just read the correct
two bytes though.)
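A sketch of the general idea, assuming a flat little-endian byte array
instead of QEMU's memory API (names and mechanics are illustrative only):
with bytes swapped within each word, the two Thumb halfwords sit in each
other's slots, so a little-endian halfword load at the partner slot
(address XOR 2) yields the architectural instruction value.

    #include <stdint.h>

    static uint16_t thumb_fetch_be32(const uint8_t *ram, uint32_t addr)
    {
        uint32_t a = addr ^ 2;                          /* partner halfword   */
        return (uint16_t)(ram[a] | (ram[a + 1] << 8));  /* little-endian load */
    }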
Signed-off-by: Julian Brown <julian@codesourcery.com>
Message-id: ca20462a044848000370318a8bd41dd0a4ed273f.1484929304.git.julian@codesourcery.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add a new "cfgend" property which selects whether the CPU resets into
big-endian mode or not. This setting affects whether we reset with
SCTLR_B (ARMv6 and earlier) or SCTLR_EE (ARMv7 and later) set.
Signed-off-by: Julian Brown <julian@codesourcery.com>
Message-id: 11420d1c49636c1790e60578ee996e51f0f0b835.1484929304.git.julian@codesourcery.com
[PMM: use error_report_err() rather than error_report();
move the integratorcp changes to their own patch;
drop an unnecessary extra #include;
rephrase commit message accordingly;
move setting of reset_sctlr above registration of cpregs
so it actually has an effect]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-2.9-20170202' into staging
ppc patch queue 2017-02-02
This obsoletes ppc-for-2.9-20170112, which had a MacOS build bug.
This is a long overdue ppc pull request for qemu-2.9. It's been a
long time coming due to some holidays and inconveniently timed
problems with testing. So, there's a lot in here:
* More POWER9 instruction implementations for TCG
* The simpler parts of my CPU compatibility mode cleanup
* This changes behaviour to prefer compatibility modes over
"raW" mode for new machine type versions
* New "40p" machine type which is essentially a modernized and
cleaned up "prep". The intention is that it will replace "prep"
once it has some more testing and polish.
* Add pseries-2.9 machine type
* Implement H_SIGNAL_SYS_RESET hypercall
* Consolidate the two alternate CPU init paths in pseries by
making it always go through CPU core objects to initialize CPU
* A number of bugfixes and cleanups
* Stop the guest timebase when the guest is stopped under KVM.
This makes the guest system clock also stop when paused, which
matches the x86 behaviour.
* Some preliminary cleanups leading towards implementation of the
POWER9 MMU.
There are also some changes not strictly related to ppc code, but for
its benefit:
* Limit the pci-expander-bridge (PXB) device to x86 guests only
(it's essentially a hack to work around historical x86
limitations)
* Some additions to the 128-bit math in host_utils, necessary for
some of the new instructions.
* Revise a number of qtests and enable them for ppc
# gpg: Signature made Thu 02 Feb 2017 01:40:16 GMT
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.9-20170202: (107 commits)
hw/ppc/pnv: Use error_report instead of hw_error if a ROM file can't be found
ppc/kvm: Handle the "family" CPU via alias instead of registering new types
target/ppc/mmu_hash64: Fix incorrect shift value in amr calculation
target/ppc/mmu_hash64: Fix printing unsigned as signed int
tcg/POWER9: NOOP the cp_abort instruction
target/ppc/debug: Print LPCR register value if register exists
target-ppc: Add xststdc[sp, dp, qp] instructions
target-ppc: Add xvtstdc[sp,dp] instructions
target-ppc: Add MMU model check for booke machines
ppc: switch to constants within BUILD_BUG_ON
target/ppc/cpu-models: Fix/remove bad CPU aliases
target/ppc: Remove unused POWERPC_FAMILY(POWER)
spapr: clock should count only if vm is running
ppc: Remove unused function cpu_ppc601_rtc_init()
target/ppc: Add pcr_supported to POWER9 cpu class definition
powerpc/cpu-models: rename ISAv3.00 logical PVR definition
target-ppc: Add xvcv[hpsp, sphp] instructions
target-ppc: Add xsmulqp instruction
target-ppc: Add xsdivqp instruction
target-ppc: Add xscvsdqp and xscvudqp instructions
...
# Conflicts:
# hw/pci-bridge/Makefile.objs
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When running with KVM on POWER, we are registering a "family" CPU
type for the host CPU that we are running on. For example, on all
POWER8-compatible hosts, we register a "POWER8" CPU type, so that
you can always start QEMU with "-cpu POWER8" there, without the
need to know whether you are running on a POWER8, POWER8E or POWER8NVL
host machine.
However, we also have a "POWER8" CPU alias in the ppc_cpu_aliases list
(that is mainly useful for TCG). This leads to two cosmetic drawbacks:
If the user runs QEMU with "-cpu ?", we always claim that POWER8 is an
"alias for POWER8_v2.0" - which is simply not true when running with
KVM on POWER. And when using the 'query-cpu-definitions' QMP call,
there are currently two entries for "POWER8", one for the alias, and
one for the additional registered type.
To solve the two problems, we should rather update the "family" alias
instead of registering new types. We then only have one "POWER8"
CPU definition around, an alias, which also points to the right
destination.
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1396536
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We are calculating the authority mask register key value wrong.
The PTE entry contains the key value with the two upper bits and the three
lower bits stored separately. We should use these two portions to get a
5-bit value, not OR them together, which will only give us a 3-bit value.
Fix this.
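A minimal sketch with hypothetical variable names: the two parts of the key
must be concatenated into a 5-bit value, not OR-ed on top of each other.

    /* wrong: key_hi2 | key_lo3          -> collapses to a 3-bit value */
    unsigned key = (key_hi2 << 3) | key_lo3;   /* correct: 5-bit key, 0..31 */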
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We were printing an unsigned value as a signed value, fix this.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The cp_abort instruction is used to remove the state of an in progress
copy paste sequence. POWER9 compilers add this in various places, such
as context switches which causes illegal instruction signals since we
don't yet implement this instruction.
Given there is no implementation of the copy paste facility and that we
don't claim to support it, we can just noop this instruction.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
It can be useful when debugging to print the LPCR value.
Thus we add the LPCR to the "info registers" output if the register had
been defined.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xststdcsp: VSX Scalar Test Data Class Single-Precision
xststdcdp: VSX Scalar Test Data Class Double-Precision
xststdcqp: VSX Scalar Test Data Class Quad-Precision
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xvtstdcsp: VSX Vector Test Data Class Single-Precision
xvtstdcdp: VSX Vector Test Data Class Double-Precision
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Macro calls without a trailing ; look weird in C; this only works as a
side effect of how QEMU_BUILD_BUG_ON is implemented. Fix this up.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
The stub version of MISMATCH_CHECK is empty, so it's easy to misuse for
people not building KVM on ARM. Use QEMU_BUILD_BUG_ON, similar to the
non-stub version, to make it easier to catch bugs.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
There is no CPU model called "7447_v1.2" in our list, so the
"7447" alias should point to "7447_v1.1" instead. Let's also
remove the "codename" aliases that point to non-implemented
CPU models - they are really of no use here.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We do not support POWER1 CPUs in QEMU, so it does not make sense
to keep this stub around.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This is a port to ppc of the i386 commit:
00f4d64 kvmclock: clock should count only if vm is running
We remove timebase_post_load function, and use the VM state
change handler to save and restore the guest_timebase (on stop
and continue).
We keep timebase_pre_save to reduce the clock difference on
migration like in:
6053a86 kvmclock: reduce kvmclock difference on migration
Time base offset has originally been introduced by commit
98a8b52 spapr: Add support for time base offset migration
So while the VM is paused, the time is stopped. This allows having
the same result with date (based on the Time Base Register) and
hwclock (based on the "get-time-of-day" RTAS call).
Moreover in TCG mode, the Time Base is always paused, so this
patch also adjusts the behavior between TCG and KVM.
The VM state field "time_of_the_day_ns" is now useless but we keep
it to be able to migrate to older versions of the machine.
As vmstate_ppc_timebase structure (with timebase_pre_save() and
timebase_post_load() functions) was only used by vmstate_spapr,
we register the VM state change handler only in ppc_spapr_init().
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
pcr_supported is used to define the supported PCR values for a given
processor. A POWER9 processor can support 3.00, 2.07, 2.06 and 2.05
compatibility modes, thus we set this accordingly.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This logical PVR value now corresponds to ISA version 3.00 so rename it
accordingly.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xvcvhpsp: VSX Vector Convert Half Precision to Single Precision
xvcvsphp: VSX Vector Convert Single Precision to Half Precision
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xscvsdqp: VSX Scalar Convert Signed Doubleword format to
Quad-Precision format
xscvudqp: VSX Scalar Convert Unsigned Doubleword format to
Quad-Precision format
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xscmpoqp, xscmpuqp & xscmpexpqp were added before the f128 field was
introduced in ppc_vsr_t. Now that we have it, use it instead of
generating the 128-bit float using two 64-bit fields.
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
bcdutrunc. Decimal unsigned truncate. Works like bcdtrunc. with
unsigned BCD numbers.
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
bcdtrunc.: Decimal integer truncate. Given a BCD number in vrb and the
number of bytes to truncate in vra, the return register will have vrb
with such bits truncated.
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xscvqpsdz: VSX Scalar truncate & Convert Quad-Precision format to
Signed Doubleword format
xscvqpswz: VSX Scalar truncate & Convert Quad-Precision format to
Signed Word format
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
bcdsr.: Decimal shift and round. This instruction works like bcds.;
however, when performing a right shift, 1 will be added to the
result if the last digit was >= 5.
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
bcdus.: Decimal unsigned shift. This instruction works like bcds. but
considers only unsigned BCDs (no sign in the least significant 4 bits).
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
bcds.: Decimal shift. Given two registers vra and vrb, this instruction
shifts the vrb value by vra bits into the result register.
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This commit fixes a warning in the code "(i * 2) ? .. : ..", which
is better written as "i ? .. : ..", and improves the BCD_DIG_BYTE
macro by placing parentheses around its argument to avoid possible
expansion issues like BCD_DIG_BYTE(i + j).
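Illustrative only (the macro body below is a stand-in, not the real
definition): without the parentheses the argument expands with the wrong
precedence.

    #define DIG_BYTE_UNSAFE(n)  (15 - n / 2)
    #define DIG_BYTE_SAFE(n)    (15 - (n) / 2)

    /* DIG_BYTE_UNSAFE(i + j) expands to (15 - i + j / 2)    -- wrong    */
    /* DIG_BYTE_SAFE(i + j)   expands to (15 - (i + j) / 2)  -- intended */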
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xscvqpdp: VSX Scalar round & Convert Quad-Precision format to
Double-Precision format
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xscvdpqp: VSX Scalar Convert Double-Precision format to
Quad-Precision format
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Once a compatibility mode is negotiated with the guest,
h_client_architecture_support() uses run_on_cpu() to update each CPU to
the new mode. We're going to want this logic somewhere else shortly,
so make a helper function to do this global update.
We put it in target-ppc/compat.c - it makes as much sense at the CPU level
as it does at the machine level. We also move the cpu_synchronize_state()
into ppc_set_compat(), since it doesn't really make any sense to call that
without synchronizing state.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Use correct FP precision when setting FPRF in FP conversion helpers
instead of always assuming float64 precision.
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xscvdphp: VSX Scalar round & Convert Double-Precision format to
Half-Precision format
xscvhpdp: VSX Scalar Convert Half-Precision format to
Double-Precision format
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Since helper_compute_fprf() works on a float64 argument, rename it
to helper_compute_fprf_float64(). Also use a macro to generate
helper_compute_fprf_float64() so that float128 version of the same
helper can be introduced easily later.
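A rough, self-contained sketch of the macro-generation pattern being
described (the classification logic and result codes are simplified
placeholders, not the actual helper body):

    #include <math.h>
    #include <stdio.h>

    /* Generate one helper per float width; the real helpers set
     * FPSCR.FPRF, here we just return an illustrative class code. */
    #define COMPUTE_CLASS(tp)                              \
        static int compute_class_##tp(tp arg)              \
        {                                                  \
            if (isnan(arg)) {                              \
                return 0x11;   /* NaN */                   \
            } else if (arg == 0) {                         \
                return 0x02;   /* zero */                  \
            }                                              \
            return arg < 0 ? 0x08 : 0x04;                  \
        }

    COMPUTE_CLASS(float)
    COMPUTE_CLASS(double)

    int main(void)
    {
        printf("%#x %#x\n", compute_class_float(-1.0f),
               compute_class_double(0.0));
        return 0;
    }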
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Replace isden() by float64_is_zero_or_denormal() so that code in
helper_compute_fprf() can be reused to work with float128 argument.
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Use a float64 argument instead of uint64_t in helper_compute_fprf().
This allows code in helper_compute_fprf() to be reused later to
work with a float128 argument too.
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xxinsertw: VSX Vector Insert Word
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xxextractuw: VSX Vector Extract Unsigned Word
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Current ppc_set_compat() will attempt to set any compatibility mode
specified, regardless of whether it's available on the CPU. The caller is
expected to make sure it is setting a possible mode, which is awkward
because most of the information to make that decision is at the CPU level.
This begins to clean this up by introducing a ppc_check_compat() function
which will determine if a given compatibility mode is supported on a CPU
(and also whether it lies within specified minimum and maximum compat
levels, which will be useful later). It also contains an assertion that
the CPU has a "virtual hypervisor"[1], that is, that the guest isn't
permitted to execute hypervisor privilege code. Without that, the guest
would own the PCR and so could override any mode set here. Only machine
types which use a virtual hypervisor (i.e. 'pseries') should use
ppc_check_compat().
ppc_set_compat() is modified to validate the compatibility mode it is given
and fail if it's not available on this CPU.
[1] Or user-only mode, which also obviously doesn't allow access to the
hypervisor privileged PCR. We don't use that now, but could in future.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
To continue consolidation of compatibility mode information, this rewrites
the ppc_get_compat_smt_threads() function using the table of compatibility
modes in target-ppc/compat.c.
It's not a direct replacement, the new ppc_compat_max_threads() function
has simpler semantics - it just returns the number of threads the cpu
model has, taking into account any compatibility mode it is in.
This no longer takes into account kvmppc_smt_threads() as the previous
version did. That check wasn't useful because we check in
ppc_cpu_realizefn() that CPUs aren't instantiated with more threads
than kvm allows (or if we didn't things will already be broken and
this won't make it any worse).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
This rewrites the ppc_set_compat() function so that instead of open coding
the various compatibility modes, it reads the relevant data from a table.
This is a first step in consolidating the information on compatibility
modes scattered across the code into a single place.
It also makes one change to the logic. The old code masked the bits
to be set in the PCR (Processor Compatibility Register) by which bits
are valid on the host CPU. This made no sense, since it was done
regardless of whether our guest CPU was the same as the host CPU or
not. Furthermore, the actual PCR bits are only relevant for TCG[1] -
KVM instead uses the compatibility mode we tell it in
kvmppc_set_compat(). When using TCG, host CPU information usually
isn't even present.
While we're at it, we put the new implementation in a new file to make the
enormous translate_init.c a little smaller.
[1] Actually it doesn't even do anything in TCG, but it will if / when we
get to implementing compatibility mode logic at that level.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
stxvll: Store VSX Vector Left-justified with Length
Vector (8-bit elements) in BE/LE:
+---+---+---+---+---+---+---+---+---+---+---+---+---+---+--+--+
|“T”|“h”|“i”|“s”|“ ”|“i”|“s”|“ ”|“a”|“ ”|“T”|“E”|“S”|“T”|00|00|
+---+---+---+---+---+---+---+---+---+---+---+---+---+---+--+--+
Storing 14 bytes would result in following Little/Big-endian Storage:
+---+---+---+---+---+---+---+---+---+---+---+---+---+---+--+--+
|“T”|“h”|“i”|“s”|“ ”|“i”|“s”|“ ”|“a”|“ ”|“T”|“E”|“S”|“T”|FF|FF|
+---+---+---+---+---+---+---+---+---+---+---+---+---+---+--+--+
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
A function to check whether all digits of a given BCD number are valid
is introduced here because more instructions will need to reuse the
same code.
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The structure and corresponding defines and functions need to be used
outside of fpu_helper.c as well.
Add u8, u16, u32 and Int128 to the structure.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The 'cpu_version' field in PowerPCCPU is badly named. It's named after the
'cpu-version' device tree property where it is advertised, but that meaning
may not be obvious in most places it appears.
Worse, it doesn't even really correspond to that device tree property. The
property contains either the processor's PVR, or, if the CPU is running in
a compatibility mode, a special "logical PVR" representing which mode.
Rename the cpu_version field, and a number of related variables to
compat_pvr to make this clearer.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Thomas Huth <thuth@redhat.com>
The pseries machine type is a bit unusual in that it runs a paravirtualized
guest. The guest expects to interact with a hypervisor, and qemu
emulates the functions of that hypervisor directly, rather than executing
hypervisor code within the emulated system.
To implement this in TCG, we need to intercept hypercall instructions and
direct them to the machine's hypercall handlers, rather than attempting to
perform a privilege change within TCG. This is controlled by a global
hook - cpu_ppc_hypercall.
This cleanup makes the handling a little cleaner and more extensible than
a single global variable. Instead, each CPU to have hypercalls intercepted
has a pointer set to a QOM object implementing a new virtual hypervisor
interface. A method in that interface is called by TCG when it sees a
hypercall instruction. It's possible we may want to add other methods in
future.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
bcdsetsgn.: Decimal set sign. This instruction copies the register
value to the result register but adjusts the sign according to
the preferred sign value.
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
bcdcpsgn.: Decimal copy sign. Given two registers vra and vrb, it
copies the vra value with vrb sign to the result register vrt.
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
bcdctsq.: Decimal convert to signed quadword. It is possible to
convert packed decimal values to signed quadwords.
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
bcdcfsq.: Decimal convert from signed quadword. It is not possible
to convert values less than -(10^31-1) or greater than 10^31-1, as they
cannot be represented in packed decimal format.
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
[dwg: Corrected constant which should be 10^16-1 but was 10^17-1]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
stxsd: Store VSX Scalar Dword
stxssp: Store VSX Scalar SP
Moreover, DQ-form and DS-form instructions share the same primary
opcode (0x3D). For DQ-form, bits 29:31 are used; for DS-form, bits 30:31
are used. A common routine is required to decode primary opcode 0x3D
into its DQ-form/DS-form instructions.
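A minimal sketch of that decode step, assuming the usual PowerPC bit
numbering (bit 0 is the MSB of the 32-bit word), so bits 30:31 are the
two least significant bits and bits 29:31 the three least significant
bits; the example encoding is made up:

    #include <stdint.h>
    #include <stdio.h>

    /* PowerPC numbers bits from the MSB, so bits 30:31 of a 32-bit
     * instruction word are its two least significant bits (DS-form)
     * and bits 29:31 are its three least significant bits (DQ-form). */
    static unsigned ds_xop(uint32_t insn) { return insn & 0x3; }
    static unsigned dq_xop(uint32_t insn) { return insn & 0x7; }

    int main(void)
    {
        uint32_t insn = 0xf4000002;  /* made-up encoding, primary opcode 0x3D */
        printf("ds xop=%u dq xop=%u\n", ds_xop(insn), dq_xop(insn));
        return 0;
    }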
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
lxsd: Load VSX Scalar Dword
lxssp: Load VSX Scalar Single
Moreover, DS-form instructions share the same primary opcode; bits
30:31 are used to decode the instruction. Use a common routine to decode
primary opcode 0x39 ds-form instructions and branch out depending on
bits 30:31.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
- xscmpodp & xscmpudp are missing flags reset.
- In xscmpodp, VXCC should be set only if VE is 0 for the signalling NaN
case, and VXCC should be set by explicitly checking for the quiet NaN case.
- Comparison is being done only if the operands are not NaNs. However, as
per the ISA, it should be done even when the operands are NaNs.
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Add _BIT to CRF_[GT,LT,EQ,SO] and introduce CRF_[GT,LT,EQ,SO] for usage
without shifts in the code. This simplifies the code.
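A minimal sketch of the resulting naming scheme (the values follow the
conventional CR field layout and are assumptions here, not copied from
the tree):

    /* Bit positions within a 4-bit CR field. */
    #define CRF_LT_BIT 3
    #define CRF_GT_BIT 2
    #define CRF_EQ_BIT 1
    #define CRF_SO_BIT 0

    /* Pre-shifted masks, so callers no longer write (1 << CRF_x_BIT). */
    #define CRF_LT (1 << CRF_LT_BIT)
    #define CRF_GT (1 << CRF_GT_BIT)
    #define CRF_EQ (1 << CRF_EQ_BIT)
    #define CRF_SO (1 << CRF_SO_BIT)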
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Move instruction decode helpers to target-ppc/internal.h so that some
of these can be used from outside of translate.c. This movement also
helps to get rid of some duplicate helpers from target-ppc/fpu_helper.c.
Suggested-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Phil Dennis-Jordan <phil@philjordan.eu>
Message-Id: <1484921496-11257-4-git-send-email-phil@philjordan.eu>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This fixes timekeeping of x86-64 Darwin/OS X/macOS guests when using KVM.
Darwin/OS X/macOS for x86-64 uses the TSC for timekeeping; it normally calibrates this by querying various clock frequency scaling MSRs. Details depend on the exact CPU model detected. The local APIC timer frequency is extracted from (EFI) firmware.
This is problematic in the presence of virtualisation, as the MSRs in question are typically not handled by the hypervisor. VMWare (Fusion) advertises TSC and APIC frequency via a custom 0x40000010 CPUID leaf, in the eax and ebx registers respectively. This is documented at https://lwn.net/Articles/301888/ among other places.
Darwin/OS X/macOS looks for the generic 0x40000000 hypervisor leaf, and if this indicates via eax that leaf 0x40000010 might be available, that is in turn queried for the two frequencies.
This adds a CPU option "vmware-cpuid-freq" to enable the same behaviour when running QEMU with KVM acceleration, if the KVM TSC frequency can be determined and is stable (invtsc or user-specified). The virtualised APIC bus cycle is hardcoded to 1GHz in KVM, so the ebx of the CPUID leaf is also hardcoded to this value.
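As a rough guest-side illustration of the leaves involved (frequencies
are reported in kHz by this convention; this is a sketch, not the code
added by the patch):

    #include <stdint.h>
    #include <stdio.h>
    #include <cpuid.h>   /* GCC/Clang __cpuid macro */

    int main(void)
    {
        uint32_t eax, ebx, ecx, edx;

        /* 0x40000000: hypervisor vendor leaf; eax holds the highest
         * hypervisor leaf implemented. */
        __cpuid(0x40000000, eax, ebx, ecx, edx);
        if (eax >= 0x40000010) {
            /* 0x40000010: eax = TSC frequency in kHz,
             * ebx = APIC bus frequency in kHz (1 GHz -> 1000000). */
            __cpuid(0x40000010, eax, ebx, ecx, edx);
            printf("TSC %u kHz, APIC bus %u kHz\n", eax, ebx);
        }
        return 0;
    }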
Signed-off-by: Phil Dennis-Jordan <phil@philjordan.eu>
Message-Id: <1484921496-11257-2-git-send-email-phil@philjordan.eu>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch improves interrupt handling in record/replay mode.
Now "interrupt" event is saved only when cc->cpu_exec_interrupt returns true.
This patch also adds missing return to cpu_exec_interrupt function.
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Message-Id: <20170124071708.4572.64023.stgit@PASHA-ISP>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
For M profile (unlike A profile) the reset value of R14 is specified
as 0xffffffff. (The rationale is that this is an illegal exception
return value, so if guest code tries to return to it, it will result
in a helpful exception.)
Registers r0 to r12 and the flags are architecturally UNKNOWN on
reset, so we leave those at zero.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1485285380-10565-11-git-send-email-peter.maydell@linaro.org
For M profile CPUs, FAULTMASK should be 0 on reset, like PRIMASK.
QEMU stores FAULTMASK in the PSTATE F bit, so (as with PRIMASK in the
I bit) we have to clear these to undo the A profile default of 1.
Update the comment accordingly and move it so that it's closer to the
code it's referring to.
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1485285380-10565-10-git-send-email-peter.maydell@linaro.org
[PMM: rewrote commit message, moved comments]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For v7M attempts to access a nonexistent coprocessor are reported
differently from plain undefined instructions (as UsageFaults of type
NOCP rather than type UNDEFINSTR). Split them out into a new
EXCP_NOCP so we can report the FSR value correctly.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1485285380-10565-8-git-send-email-peter.maydell@linaro.org
When we take an exception for an undefined instruction, set the
appropriate CFSR bit.
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1485285380-10565-7-git-send-email-peter.maydell@linaro.org
[PMM: tweaked commit message, comment]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The CCR.STACKALIGN bit controls whether the CPU is supposed to force
8-byte alignment of the stack pointer on entry to the exception handler.
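A minimal sketch of that alignment step, under the assumption that the
4-byte adjustment, when made, is recorded so it can be undone on
exception return:

    #include <stdbool.h>
    #include <stdint.h>

    /* Returns the stack pointer at which the exception frame is pushed
     * and reports whether a 4-byte adjustment was made. */
    static uint32_t align_sp_for_exception(uint32_t sp, bool stackalign,
                                           bool *adjusted)
    {
        *adjusted = stackalign && (sp & 4);
        if (stackalign) {
            sp &= ~7u;    /* force 8-byte alignment */
        }
        return sp;
    }

    int main(void)
    {
        bool adj;
        uint32_t sp = align_sp_for_exception(0x2000fffcu, true, &adj);
        (void)sp;
        return 0;
    }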
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
Message-id: 1485285380-10565-6-git-send-email-peter.maydell@linaro.org
[PMM: commit message and comment tweaks]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add the structure fields, VMState fields, reset code and macros for
the v7M system control registers CCR, CFSR, HFSR, DFSR, MMFAR and
BFAR.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1485285380-10565-4-git-send-email-peter.maydell@linaro.org
We only use the IS_M() macro in two places, and it's a bit of a
namespace grab to put in cpu.h. Drop it in favour of just explicitly
calling arm_feature() in the places where it was used.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1485285380-10565-2-git-send-email-peter.maydell@linaro.org
FAULTMASK must be cleared on return from all
exceptions other than NMI.
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1484937883-1068-7-git-send-email-peter.maydell@linaro.org
The v7m CONTROL register bit 1 is SPSEL, which indicates
the stack being used. We were storing this information
not in v7m.control but in the separate v7m.other_sp
structure field. Unfortunately, the code handling reads
of the CONTROL register didn't take account of this, and
so if SPSEL was updated by an exception entry or exit then
a subsequent guest read of CONTROL would get the wrong value.
Using a separate structure field doesn't really gain us
anything in efficiency, so drop this unnecessary complexity
in favour of simply storing all the bits in v7m.control.
This is a migration compatibility break for M profile
CPUs only.
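A self-contained sketch of keeping SPSEL directly in the CONTROL value
(the deposit32() below mirrors QEMU's bitops helper of the same name;
the bit position is the architectural one, bit 1 of CONTROL):

    #include <stdint.h>

    /* Same contract as QEMU's deposit32(): replace 'length' bits of
     * 'value', starting at bit 'start', with the low bits of 'fieldval'. */
    static uint32_t deposit32(uint32_t value, int start, int length,
                              uint32_t fieldval)
    {
        uint32_t mask = (~0u >> (32 - length)) << start;
        return (value & ~mask) | ((fieldval << start) & mask);
    }

    #define CONTROL_SPSEL_SHIFT 1    /* SPSEL is bit 1 of CONTROL */

    static uint32_t control_set_spsel(uint32_t control, int spsel)
    {
        /* All the CONTROL state lives in one word; no separate SPSEL field. */
        return deposit32(control, CONTROL_SPSEL_SHIFT, 1, spsel ? 1 : 0);
    }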
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1484937883-1068-6-git-send-email-peter.maydell@linaro.org
[PMM: rewrote commit message;
use deposit32(); use FIELD to define constants for
masking and shifting of CONTROL register fields
]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Give an explicit error and abort when a load
from the vector table fails. Architecturally this
should HardFault (which will then immediately
fail to load the HardFault vector and go into Lockup).
Since we don't model Lockup, just report this guest
error via cpu_abort(). This is more helpful than the
previous behaviour of reading a zero, which is the
address of the reset stack pointer and not a sensible
location to jump to.
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1484937883-1068-4-git-send-email-peter.maydell@linaro.org
[PMM: expanded commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For v7m we need to catch attempts to execute from special
addresses at 0xfffffff0 and above. Previously we did this
with the aid of a hacky special purpose lump of memory
in the address space and a check in translate.c for whether
we were translating code at those addresses.
We can implement this more cleanly using a CPU
unassigned access handler which throws the exception
if the unassigned access is for one of the special addresses.
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1484937883-1068-3-git-send-email-peter.maydell@linaro.org
[PMM:
* drop the deletion of the "don't interrupt if PC is magic"
code in arm_v7m_cpu_exec_interrupt() -- this is still
required
* don't generate an exception for unassigned accesses
which aren't to the magic address -- although doing
this is in theory correct in practice it will break
currently working guests which rely on the RAZ/WI
behaviour when they touch devices which we haven't
modelled.
* trigger EXCP_EXCEPTION_EXIT on is_exec, not !is_write
]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The MRS and MSR instruction handling has a number of flaws:
* unprivileged accesses should only be able to read
CONTROL and the xPSR subfields, and only write APSR
(others RAZ/WI)
* privileged access should not be able to write xPSR
subfields other than APSR
* accesses to unimplemented registers should log as
guest errors, not abort QEMU
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1484937883-1068-2-git-send-email-peter.maydell@linaro.org
[PMM: rewrote commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add support for emulating Altera NiosII R1 architecture into qemu.
This patch is based on previous work by Chris Wulff from 2012 and
updated to latest mainline QEMU.
Signed-off-by: Marek Vasut <marex@denx.de>
Cc: Chris Wulff <crwulff@gmail.com>
Cc: Jeff Da Silva <jdasilva@altera.com>
Cc: Ley Foon Tan <lftan@altera.com>
Cc: Sandra Loosemore <sandra@codesourcery.com>
Cc: Yves Vandervennet <yvanderv@altera.com>
Cc: Alexander Graf <agraf@suse.de>
Message-Id: <20170118220146.489-3-marex@denx.de>
[rth: Remove tlb_flush from nios2_cpu_reset.]
Signed-off-by: Richard Henderson <rth@twiddle.net>
Merge remote-tracking branch 'remotes/dgilbert/tags/pull-migration-20170124b' into staging
Migration
1 My maintainer change
2 Jianjun's qtailq
3 Ashijeet's only-migratable
4 Zhanghailiang's re-active images
5 Pankaj's change name of migration thread
6 My PCI migration merge
7 Juan's debug to tracing
8 My tracing on save
# gpg: Signature made Tue 24 Jan 2017 18:39:35 GMT
# gpg: using RSA key 0x0516331EBC5BFDE7
# gpg: Good signature from "Dr. David Alan Gilbert (RH2) <dgilbert@redhat.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 45F5 C71B 4A0C B7FB 977A 9FA9 0516 331E BC5B FDE7
* remotes/dgilbert/tags/pull-migration-20170124b:
migration/tracing: Add tracing on save
migration: transform remaining DPRINTF into trace_
PCI/migration merge vmstate_pci_device and vmstate_pcie_device
migration: Change name of live migration thread
migration: re-active images while migration been canceled after inactive them
migration: Fail migration blocker for --only-migratable
migration: disallow migrate_add_blocker during migration
migration: Allow "device add" options to only add migratable devices
migration: Add a new option to enable only-migratable
block/vvfat: Remove the undesirable comment
migration: add error_report
tests/migration: Add test for QTAILQ migration
migration: migrate QTAILQ
migration: extend VMStateInfo
MAINTAINERS: Add myself as a migration submaintainer
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If a migration is already in progress and somebody attempts
to add a migration blocker, this should rightly fail.
Add an errp parameter and a retcode return value to migrate_add_blocker.
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Ashijeet Acharya <ashijeetacharya@gmail.com>
Message-Id: <1484566314-3987-5-git-send-email-ashijeetacharya@gmail.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Acked-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Merged with recent 'Allow invtsc migration' change
Current migration code cannot handle some data structures such as
QTAILQ in qemu/queue.h. Here we extend the signatures of put/get
in VMStateInfo so that customized handling is supported. put will now
return an int.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Jianjun Duan <duanj@linux.vnet.ibm.com>
Message-Id: <1484852453-12728-2-git-send-email-duanj@linux.vnet.ibm.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Merge remote-tracking branch 'remotes/cohuck/tags/s390x-20170124' into staging
Two s390x fixes: One for the kvm.c build failure, and one for a bug
that might cause random guest crashes with zeroed out pages on host
kernels with working cmma (< 4.6 and likely >= 4.10).
# gpg: Signature made Tue 24 Jan 2017 15:00:50 GMT
# gpg: using RSA key 0xDECF6B93C6F02FAF
# gpg: Good signature from "Cornelia Huck <huckc@linux.vnet.ibm.com>"
# gpg: aka "Cornelia Huck <cornelia.huck@de.ibm.com>"
# Primary key fingerprint: C3D0 D66D C362 4FF6 A8C0 18CE DECF 6B93 C6F0 2FAF
* remotes/cohuck/tags/s390x-20170124:
s390x/kvm: fix cmma reset for KVM
s390x/kvm: include hw_accel.h instead of kvm.h
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/ehabkost/tags/x86-and-machine-pull-request' into staging
x86, machine, numa queue (2017-01-23)
# gpg: Signature made Mon 23 Jan 2017 23:26:59 GMT
# gpg: using RSA key 0x2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6
* remotes/ehabkost/tags/x86-and-machine-pull-request:
kvm: Allow invtsc migration if tsc-khz is set explicitly
kvm: Simplify invtsc check
hw/core/null-machine: Add the possibility to instantiate a CPU and RAM
qemu-options: Rename variables on the -numa "cpus" option
MAINTAINERS: Add an entry for hw/core/null-machine.c
machine: Make possible_cpu_arch_ids() return const pointer
pc: don't return cpu pointer from pc_new_cpu() as it's not needed anymore
pc: cleanup: move smbios_set_cpuid() into pc_build_smbios()
arch_init: Remove unnecessary default_config_files table
vl: Ensure the numa_post_machine_init func in the appropriate location
i386: Return migration-safe field on query-cpu-definitions
i386: Remove AMD feature flag aliases from Opteron models
x86: add AVX512_VPOPCNTDQ features
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We must reset the CMMA states for normal memory (when not on mem path),
but the current code does the opposite. This was unnoticed for some time
as the kernel since 4.6 also had a bug which mostly disabled the paging
optimizations.
Fixes: 07059effd1 ("s390x/kvm: let the CPU model control CMM(A)")
Cc: qemu-stable@nongnu.org # v2.8
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Commit b394662 ("kvm: move cpu synchronization code") switched
to hw_accel.h instead of kvm.h, but missed s390x, resulting in
CC s390x-softmmu/target/s390x/kvm.o
/home/cohuck/git/qemu/target/s390x/kvm.c: In function ‘kvm_sclp_service_call’:
/home/cohuck/git/qemu/target/s390x/kvm.c:1034:5: error: implicit declaration of function ‘cpu_synchronize_state’ [-Werror=implicit-function-declaration]
cpu_synchronize_state(CPU(cpu));
^
/home/cohuck/git/qemu/target/s390x/kvm.c:1034:5: error: nested extern declaration of ‘cpu_synchronize_state’ [-Werror=nested-externs]
/home/cohuck/git/qemu/target/s390x/kvm.c: In function ‘sigp_initial_cpu_reset’:
/home/cohuck/git/qemu/target/s390x/kvm.c:1628:5: error: implicit declaration of function ‘cpu_synchronize_post_reset’ [-Werror=implicit-function-declaration]
cpu_synchronize_post_reset(cs);
^
/home/cohuck/git/qemu/target/s390x/kvm.c:1628:5: error: nested extern declaration of ‘cpu_synchronize_post_reset’ [-Werror=nested-externs]
/home/cohuck/git/qemu/target/s390x/kvm.c: In function ‘sigp_set_prefix’:
/home/cohuck/git/qemu/target/s390x/kvm.c:1665:5: error: implicit declaration of function ‘cpu_synchronize_post_init’ [-Werror=implicit-function-declaration]
cpu_synchronize_post_init(cs);
^
/home/cohuck/git/qemu/target/s390x/kvm.c:1665:5: error: nested extern declaration of ‘cpu_synchronize_post_init’ [-Werror=nested-externs]
cc1: all warnings being treated as errors
/home/cohuck/git/qemu/rules.mak:64: recipe for target 'target/s390x/kvm.o' failed
Fix this.
Fixes: b394662 ("kvm: move cpu synchronization code")
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Acked-by: Vincent Palatin <vpalatin@chromium.org>
We can safely allow a VM to be migrated with invtsc enabled if
tsc-khz is set explicitly, because:
* QEMU already refuses to start if it can't set the TSC frequency
to the configured value.
* Management software is already required to keep device
configuration (including CPU configuration) the same on
migration source and destination.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20170108173234.25721-3-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Instead of searching the table we have just built, we can check
the env->features field directly.
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20170108173234.25721-2-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Return the migration-safe field on query-cpu-definitions. All CPU
models in x86 are migration-safe except "host".
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20170116181212.31565-1-ehabkost@redhat.com>
Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
When CPU vendor is set to AMD, the AMD feature alias bits on
CPUID[0x80000001].EDX are already automatically copied from CPUID[1].EDX
on x86_cpu_realizefn(). When CPU vendor is Intel, those bits are
reserved and should be zero. On either case, those bits shouldn't be set
in the CPU model table.
Commit 726a8ff686 removed those
bits from most CPU models, but the Opteron_* entries still have
them. Remove the alias bits from Opteron_* too.
Add an assert() to x86_register_cpudef_type() to ensure we don't
make the same mistake again.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20170113190057.6327-1-ehabkost@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
For linux, page 0 is mapped as an execute-only gateway. A gateway
page is a special bit in the page table that allows a B,GATE insn
within that page to raise processor permissions. This is how system
calls are implemented for HPPA.
Rather than actually map anything here, or handle permissions at all,
implement the semantics of the actual linux syscall entry points.
Signed-off-by: Richard Henderson <rth@twiddle.net>
The HPPA cpu has a unique form of predicated execution in which
almost any instruction can set the PSW[N] (or "nullify") bit,
which suppresses execution (and even decoding) of the following
instruction. Execution of a nullified insn clears the PSW[N] bit.
This adds a generic framework for branching over nullified insns,
or for sufficiently simple insns, transforming the writeback of
the result to a conditional move. In the process, we want to be
able to represent PSW[N] as a TCG condition, which implies management
of the related tcg temps.
Signed-off-by: Richard Henderson <rth@twiddle.net>
This is just about the minimum required to enable compilation
without actually executing any instructions. This contains the
HPPACPU structure and the required callbacks, the gdbstub, the
basic translation loop, and a translate_one function that always
results in an illegal instruction.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Merge remote-tracking branch 'remotes/cohuck/tags/s390x-20170120-v2' into staging
First set of s390x patches for 2.9:
- rework of the zpci code, giving us proper multibus support
- introduction of the 2.9 machine
- fixes and improvements
# gpg: Signature made Fri 20 Jan 2017 09:11:58 GMT
# gpg: using RSA key 0xDECF6B93C6F02FAF
# gpg: Good signature from "Cornelia Huck <huckc@linux.vnet.ibm.com>"
# gpg: aka "Cornelia Huck <cornelia.huck@de.ibm.com>"
# Primary key fingerprint: C3D0 D66D C362 4FF6 A8C0 18CE DECF 6B93 C6F0 2FAF
* remotes/cohuck/tags/s390x-20170120-v2:
virtio-ccw: fix ring sizing
s390x/pci: merge msix init functions
s390x/pci: handle PCIBridge bus number
s390x/pci: use hashtable to look up zpci via fh
s390x/pci: PCI multibus bridge handling
s390x/pci: optimize calling s390_get_phb()
s390x/pci: change the device array to a list
s390x/pci: dynamically allocate iommu
s390x/pci: make S390PCIIOMMU inherit Object
s390x/kvm: use kvm_gsi_routing_enabled in flic
s390x: add compat machine for 2.9
s390x: remove double compat statement
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Enable the ARM_FEATURE_EL2 bit on Cortex-A53 and
Cortex-A57, since this is all now sufficiently implemented
to work with the GICv3. We provide the usual CPU property
to disable it for backwards compatibility with the older
virt boards.
In this commit, we disable the EL2 feature on the
virt and ZynqMP boards, so there is no overall effect.
Another commit will expose a board-level property to
allow the user to enable EL2.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Message-id: 1483977924-14522-18-git-send-email-peter.maydell@linaro.org
The PSCI spec states that a CPU_ON call should cause the new
CPU to be started in the highest implemented Non-secure
exception level. We were incorrectly starting it at the
exception level of the caller, which happens to be correct
if EL2 is not implemented. Implement the correct logic
as described in the PSCI 1.0 spec section 6.4:
* if EL2 exists and SCR_EL3.HCE is set: start in EL2
* otherwise start in EL1
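The decision reduces to a small predicate; a minimal sketch under the
assumptions stated above:

    #include <stdbool.h>

    /* Target exception level for a CPU started via PSCI CPU_ON,
     * per PSCI 1.0 section 6.4 as summarised above. */
    static int psci_cpu_on_target_el(bool have_el2, bool scr_el3_hce)
    {
        return (have_el2 && scr_el3_hce) ? 2 : 1;
    }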
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Tested-by: Andrew Jones <drjones@redhat.com>
Message-id: 1483977924-14522-17-git-send-email-peter.maydell@linaro.org
Add fields to the ARMCPU structure to allow CPU classes to
specify the configurable aspects of their GIC CPU interface.
In particular, the virtualization support allows different
values for number of list registers, priority bits and
preemption bits.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Alistair Francis <alistair.francis@xilinx.com>
Message-id: 1483977924-14522-6-git-send-email-peter.maydell@linaro.org
The GICv3 support for virtualization includes an outbound
maintenance interrupt signal which is asserted when the
CPU interface wants to signal to the hypervisor that it
needs attention. Expose this as an outbound GPIO line from
the CPU object which can be wired up as a physical interrupt
line by the board code (as we do already for the CPU timers).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Message-id: 1483977924-14522-4-git-send-email-peter.maydell@linaro.org
The DBGVCR_EL2 system register is needed to run a 32-bit
EL1 guest under a Linux EL2 64-bit hypervisor. Its only
purpose is to provide AArch64 with access to the state of
the DBGVCR AArch32 register. Since we only have a dummy
DBGVCR, implement a corresponding dummy DBGVCR32_EL2.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
To run a VM in 32-bit EL1 our AArch32 interrupt handling code
needs to be able to cope with VIRQ and VFIQ exceptions.
These behave like IRQ and FIQ except that we don't need to try
to route them to Monitor mode.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
A function may recursively call device search functions or may call
several different device search functions. Passing the S390pciState to
the search functions as an argument, instead of looking it up inside
them, reduces the number of calls to s390_get_phb().
Signed-off-by: Yi Min Zhao <zyimin@linux.vnet.ibm.com>
Reviewed-by: Pierre Morel <pmorel@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Re-add the MacOSX/Darwin support:
Use Intel HAX, a kernel-based hardware acceleration module
(similar to KVM on Linux).
Based on the original "target/i386: Add Intel HAX to android emulator" patch
from David Chou <david.j.chou@intel.com> from emu-2.2-release branch in
the external/qemu-android repository.
Signed-off-by: Vincent Palatin <vpalatin@chromium.org>
Message-Id: <81b85c3032da902e73e77302af508b4b1a7c0ead.1484045952.git.vpalatin@chromium.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use Intel HAX, a kernel-based hardware acceleration module for
Windows (similar to KVM on Linux).
Based on the "target/i386: Add Intel HAX to android emulator" patch
from David Chou <david.j.chou@intel.com>
Signed-off-by: Vincent Palatin <vpalatin@chromium.org>
Message-Id: <7b9cae28a0c379ab459c7a8545c9a39762bd394f.1484045952.git.vpalatin@chromium.org>
[Drop hax_populate_ram stub. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
That's a forward port of the core HAX interface code from the
emu-2.2-release branch in the external/qemu-android repository as used by
the Android emulator.
The original commit was "target/i386: Add Intel HAX to android emulator"
saying:
"""
Backport of 2b3098ff27bab079caab9b46b58546b5036f5c0c
from studio-1.4-dev into emu-master-dev
Intel HAX (hardware acceleration) will enhance android emulator performance
in Windows and Mac OS X in the systems powered by Intel processors with
"Intel Hardware Accelerated Execution Manager" package installed when
user runs android emulator with Intel target.
Signed-off-by: David Chou <david.j.chou@intel.com>
"""
It has been modified to build and run along with the current code base.
The formatting has been fixed to go through scripts/checkpatch.pl,
and the DPRINTF macros have been updated to get the instantiations checked by
the compiler.
The FPU registers saving/restoring has been updated to match the current
QEMU registers layout.
The implementation has been simplified by doing the following modifications:
- removing the code for supporting the hardware without Unrestricted Guest (UG)
mode (including all the code to fallback on TCG emulation).
- not including the Darwin support (which is not yet debugged/tested).
- simplifying the initialization by removing the leftovers from the Android
specific code, then trimming down the remaining logic.
- removing the unused MemoryListener callbacks.
Signed-off-by: Vincent Palatin <vpalatin@chromium.org>
Message-Id: <e1023837f8d0e4c470f6c4a3bf643971b2bca5be.1484045952.git.vpalatin@chromium.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Move the generic cpu_synchronize_ functions to the common hw_accel.h header,
in order to prepare for the addition of a second hardware accelerator.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Vincent Palatin <vpalatin@chromium.org>
Message-Id: <f5c3cffe8d520011df1c2e5437bb814989b48332.1484045952.git.vpalatin@chromium.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
These are not needed since linux-headers/ provides up-to-date definitions.
The constants are in linux-headers/asm-powerpc/kvm.h.
The sole users, hw/intc/xics_kvm.c and target/ppc/kvm.c, include asm/kvm.h
via sysemu/kvm.h->linux/kvm.h.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In OpenSPARC T1+ TWINX ASIs in store instructions are aliased
with Block Initializing Store ASIs.
"UltraSPARC T1 Supplement Draft D2.1, 14 May 2007" describes them
in the chapter "5.9 Block Initializing Store ASIs"
Integer stores of all sizes are allowed with these ASIs.
Signed-off-by: Artyom Tarasenko <atar4qemu@gmail.com>
According to chapter 13.3 of the
UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005,
only the sun4u format is available for data-access loads.
Store UA2005 entries in the sun4u format to simplify processing.
Signed-off-by: Artyom Tarasenko <atar4qemu@gmail.com>
Implement the behavior described in the chapter 13.9.11 of
UltraSPARC T1™ Supplement to the UltraSPARC Architecture 2005:
"If a TLB Data-In replacement is attempted with all TLB
entries locked and valid, the last TLB entry (entry 63) is
replaced."
Signed-off-by: Artyom Tarasenko <atar4qemu@gmail.com>
Please note that QEMU doesn't implement Real->Physical address
translation. The "Real Address" is always the "Physical Address".
Suggested-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Artyom Tarasenko <atar4qemu@gmail.com>
According to UA2005, 9.3.3 "Address Space Identifiers",
"In hyperprivileged mode, all instruction fetches and loads and stores with implicit
ASIs use a physical address, regardless of the value of TL".
Signed-off-by: Artyom Tarasenko <atar4qemu@gmail.com>
As described in Chapter 5.7.6 of the UltraSPARC Architecture 2005,
outstanding disrupting exceptions that are destined for privileged mode can only
cause a trap when the virtual processor is in nonprivileged or privileged mode and
PSTATE.ie = 1. At all other times, they are held pending.
Signed-off-by: Artyom Tarasenko <atar4qemu@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Use explicit register pointers while accessing D/I-MMU registers.
Call cpu_unassigned_access on access to missing registers.
Signed-off-by: Artyom Tarasenko <atar4qemu@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
While the IMMU/DMMU is disabled:
- ignore MMU faults in hypervisor mode or if the CPU doesn't have a hypervisor
- signal TT_INSN_REAL_TRANSLATION_MISS/TT_DATA_REAL_TRANSLATION_MISS otherwise
Signed-off-by: Artyom Tarasenko <atar4qemu@gmail.com>
RER and WER are privileged instructions for accessing external
registers. External register address space is local to processor core.
There's no alignment requirements, addressable units are 32-bit wide
registers.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Merge remote-tracking branch 'remotes/stsquad/tags/pull-tcg-common-tlb-reset-20170113-r1' into staging
This is the same as the v3 posted except a re-base and a few extra signoffs
# gpg: Signature made Fri 13 Jan 2017 14:26:46 GMT
# gpg: using RSA key 0xFBD0DB095A9E2A44
# gpg: Good signature from "Alex Bennée (Master Work Key) <alex.bennee@linaro.org>"
# Primary key fingerprint: 6685 AE99 E751 67BC AFC8 DF35 FBD0 DB09 5A9E 2A44
* remotes/stsquad/tags/pull-tcg-common-tlb-reset-20170113-r1:
cputlb: drop flush_global flag from tlb_flush
cpu_common_reset: wrap TCG specific code in tcg_enabled()
qom/cpu: move tlb_flush to cpu_common_reset
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the useless is_external argument. Since the iohandler
AioContext is never used for block devices, aio_disable_external
is never called on it. This lets us remove stubs/iohandler.c.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Configuration overlay does not explicitly say whether there are ICACHE
and DCACHE in the core. Current code uses XCHAL_[ID]CACHE_WAYS to detect
if corresponding cache option is enabled, but that's not correct: on
cores without cache these macros are defined as 1, not as 0.
Check XCHAL_[ID]CACHE_SIZE instead.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
There's no point in continuing translating guest instructions once an
unconditional exception is thrown.
There's also no point in updating pc before any instruction is
translated, so don't do it.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Delimit each instruction that may access timers or IRQ state with
qemu_io_start/qemu_io_end, so that qemu-system-xtensa could be run with
-icount option.
Raise EXCP_YIELD after CCOMPARE reprogramming to let tcg_cpu_exec
recalculate how long this CPU is allowed to run.
RSR now may need to terminate TB, but it can't be done in RSR handler
because the same handler is used for XSR together with WSR handler, which
may also need to terminate TB. Change RSR and WSR handlers return type
to bool indicating whether TB termination is needed (RSR) or has been
done (WSR), and add TB termination after RSR/WSR dispatcher call.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Xtensa cores may have a register (CCOUNT) that counts core clock cycles.
It may also have a number of registers (CCOMPAREx); when CCOUNT value
passes the value of CCOMPAREx, timer interrupt x is raised.
Currently the xtensa target counts the number of completed instructions and
assumes that for CCOUNT one instruction takes one cycle to complete.
It calls helper function to update CCOUNT register at every TB end and
raise timer interrupts. This scheme works very predictably and doesn't
have noticeable performance impact, but it is hard to use with multiple
synchronized processors, especially with coming MTTCG.
Derive CCOUNT from the virtual simulation time, QEMU_CLOCK_VIRTUAL.
Use native QEMU timers for CCOMPARE timers, one timer for each register.
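A rough sketch of the derivation (names and frequency handling are
illustrative; 'now_ns' stands in for reading QEMU_CLOCK_VIRTUAL, and
overflow handling is omitted):

    #include <stdint.h>

    /* Derive CCOUNT from elapsed virtual time instead of an
     * instruction count. */
    static uint32_t ccount_from_time(int64_t now_ns, int64_t base_ns,
                                     uint32_t ccount_base, uint32_t freq_hz)
    {
        int64_t elapsed_ns = now_ns - base_ns;
        return ccount_base + (uint32_t)(elapsed_ns * freq_hz / 1000000000LL);
    }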
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
RUNSTALL signal stalls core execution while it's applied. It is widely
used in multicore configurations to control activity of additional
cores.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Xtensa cores may have two distinct addresses for the static vectors
group. Provide a function to select one of them.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
On 680x0 family only.
Address Register indirect With postincrement:
When using the stack pointer (A7) with byte size data, the register
is incremented by two.
Address Register indirect With predecrement:
When using the stack pointer (A7) with byte size data, the register
is decremented by two.
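A minimal sketch of the resulting step calculation (register and size
encodings are simplified to plain ints):

    /* Post-increment / pre-decrement step for address register 'regno'
     * with an access of 'opsize_bytes': byte accesses through A7 (the
     * stack pointer) move it by two to keep it word aligned. */
    static int areg_step(int regno, int opsize_bytes)
    {
        if (opsize_bytes == 1 && regno == 7) {
            return 2;
        }
        return opsize_bytes;
    }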
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Thomas Huth <huth@tuxfamily.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1484332593-16782-6-git-send-email-laurent@vivier.eu>
In these cases we must update the address register after
the operation.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1484332593-16782-4-git-send-email-laurent@vivier.eu>
gen_flush_flags() unconditionally sets cc_op_synced to 1
and s->cc_op to CC_OP_FLAGS, whereas env->cc_op can be set
to something else by a previous tcg fragment.
We fix that by not setting cc_op_synced to 1
(except for gen_helper_flush_flags(), which updates env->cc_op).
FIX: https://github.com/vivier/qemu-m68k/issues/19
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1484332593-16782-3-git-send-email-laurent@vivier.eu>
M680x0 bit operations with an immediate value use 9 bits of the 16bit
value, while coldfire ones use only 8 bits.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1484332593-16782-2-git-send-email-laurent@vivier.eu>
We have never had the concept of global TLB entries, which would avoid
the flush, so we never actually use this flag. Drop it and make clear
that tlb_flush is the sledgehammer it has always been.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
[DG: ppc portions]
Acked-by: David Gibson <david@gibson.dropbear.id.au>
It is common amongst the various cpu reset functions to want to
flush the SoftMMU's TLB entries. This is done either by calling
tlb_flush directly or by way of a general memset of the CPU
structure (sometimes both).
This moves the tlb_flush call to the common reset function and
additionally ensures it is only done for the CONFIG_SOFTMMU case and
when tcg is enabled.
In some target cases we add an empty end_of_reset_fields structure to the
target vCPU structure so we have a clear end point for any memset which
resets values in the structure before CPU_COMMON (where the TLB
structures are).
While this is a nice clean-up in general it is also a precursor for
changes coming to cputlb for MTTCG where the clearing of entries
can't be done arbitrarily across vCPUs. Currently the cpu_reset
function is usually called from the context of another vCPU as the
architectural power up sequence is run. By using the cputlb API
functions we can ensure the right behaviour in the future.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
The new typename attribute on query-cpu-definitions will be used
to help management software use device-list-properties to check
which properties can be set using -cpu or -global for the CPU
model.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <1479320499-29818-1-git-send-email-ehabkost@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
In commit c52ab08aee,
the patch snippet for the "syscall" insn got applied to "iret".
Signed-off-by: Doug Evans <dje@google.com>
Message-Id: <f403045cde4049058c05446d5c04@google.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
If D[15] is != sign_ext(const4) then PC will be set to (PC +
zero_ext(disp4 + 16)).
[BK: fixed style errors]
Signed-off-by: Peer Adelt <peer.adelt@c-lab.de>
Message-Id: <1465314555-11501-5-git-send-email-peer.adelt@c-lab.de>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Puts the content of data register D[a] into E[c][63:32] and the
content of data register D[b] into E[c][31:0].
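A minimal expression of that packing, as plain C (register access is
abstracted away):

    #include <stdint.h>

    /* E[c][63:32] = D[a], E[c][31:0] = D[b]. */
    static uint64_t pack_dreg_pair(uint32_t da, uint32_t db)
    {
        return ((uint64_t)da << 32) | db;
    }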
[BK: fix style error]
[BK: Allocate temporaries only when needed]
Signed-off-by: Peer Adelt <peer.adelt@c-lab.de>
Message-Id: <1465314555-11501-4-git-send-email-peer.adelt@c-lab.de>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Multiplies D[a] and D[b] and adds/subtracts the result to/from D[d].
The result is put in D[c]. All operands are floating-point numbers.
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Converts a 32-bit floating point number to an unsigned int. The
result is rounded towards zero.
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Use the new primitives for RLWINM and RLDICL.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Richard Henderson <rth@twiddle.net>
A couple of places where it was easy to identify a right-shift
followed by an extract or and-with-immediate, and the obvious
sign-extract from a high byte register.
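For reference, the semantics of the extract and sign-extract primitives,
written as plain C mirroring QEMU's extract32()/sextract32() helpers:

    #include <stdint.h>

    /* Take 'length' bits of 'value' starting at bit 'start'. */
    static uint32_t extract32(uint32_t value, int start, int length)
    {
        return (value >> start) & (~0u >> (32 - length));
    }

    /* Same, but sign-extend the extracted field. */
    static int32_t sextract32(uint32_t value, int start, int length)
    {
        return (int32_t)(value << (32 - length - start)) >> (32 - length);
    }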
Acked-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This is a cleanup patch. It adds calls to tcg_temp_free()
where they are missing.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Also manage word and byte operands and fix the computation of
overflow in the case of M68000 arithmetic shifts.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1478699171-10637-4-git-send-email-rth@twiddle.net>
Report this properly via exception and, importantly, allow
the disassembler the chance to tell us what insn is not handled.
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1478699171-10637-3-git-send-email-rth@twiddle.net>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
680x0 movem can load/store words and long words and can use more
addressing modes. Coldfire can only use long words with (Ax) and
(d16,Ax) addressing modes.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1478699171-10637-2-git-send-email-rth@twiddle.net>
Implement CAS using cmpxchg.
Implement CAS2 using a helper, with either cmpxchg when
the 32-bit addresses are consecutive, or
parallel_cpus + cpu_loop_exit_atomic() otherwise.
Suggested-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Update the helper to set the throwing location in case of div-by-0.
Clean up divX.w and add quad-word variants of divX.l.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
[laurent: modified to clear Z on overflow, as found with risu]
Provide gen_lea_mode and gen_ea_mode, where the mode can be
specified manually, rather than taken from the instruction.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1478206203-4606-3-git-send-email-rth@twiddle.net>
ARM1176 CPUs have TrustZone support and can use the Vector Base
Address Register, but currently, qemu only adds VBAR support to ARMv7
CPUs. Fix this by adding a new feature ARM_FEATURE_VBAR which can be used
for ARMv7 and ARM1176 CPUs.
The VBAR feature is always set for ARMv7 because some legacy boards
require it even if this is not architecturally correct.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-id: 1481810970-9692-1-git-send-email-clg@kaod.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We already log exception entry; add logging of the AArch64 exception
return path as well.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
We add s->be_data within do_vec_ld/st. Adding it here means that
we have the wrong bits set in SIZE for a big-endian host, leading
to g_assert_not_reached in write_vec_element and read_vec_element.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1481085020-2614-3-git-send-email-rth@twiddle.net
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Since CPUARMState.vfp.regs is not 16 byte aligned, the ^ 8 fixup used
for a big-endian host doesn't do what's intended. Fix this by adding
in the vfp.regs offset after computing the inter-register offset.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1481085020-2614-2-git-send-email-rth@twiddle.net
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The value of the MVFR1 (Media and VFP Feature Register 1) register for
the Cortex-A8 appears to be incorrect (according to the TRM, DDI0344K),
with the "full denormal arithmetic" and "propagation of NaN" fields
holding both 0 instead of both 1.
I had a go tracing the history of the use of this value, and it seems
it's always just been wrong in QEMU: maybe it was derived from early
documentation, or guessed based on the use of a "VFP Lite" implementation
in the Cortex-A8.
Depending on the startup/early-boot code in use, this can manifest as
failure to perform denormal arithmetic properly: in our case, selecting
a Cortex-A8 CPU when using QEMU as an instruction-set simulator for
bare-metal GCC testing caused tests using denormal arithmetic to
fail. Problems might be masked (or not occur) when using a full OS kernel
with suitable trap handlers (I'm not sure).
Signed-off-by: Julian Brown <julian@codesourcery.com>
Message-id: 1481130858-31767-1-git-send-email-julian@codesourcery.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>