This implements emulation of the new SM4 instructions that have
been added as an optional extension to the ARMv8 Crypto Extensions
in ARM v8.2.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Message-id: 20180207111729.15737-5-ard.biesheuvel@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This implements emulation of the new SM3 instructions that have
been added as an optional extension to the ARMv8 Crypto Extensions
in ARM v8.2.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Message-id: 20180207111729.15737-4-ard.biesheuvel@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This implements emulation of the new SHA-3 instructions that have
been added as an optional extension to the ARMv8 Crypto Extensions
in ARM v8.2.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Message-id: 20180207111729.15737-3-ard.biesheuvel@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This implements emulation of the new SHA-512 instructions that have
been added as an optional extension to the ARMv8 Crypto Extensions
in ARM v8.2.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Message-id: 20180207111729.15737-2-ard.biesheuvel@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Handle possible MPU faults, SAU faults or bus errors when
popping register state off the stack during exception return.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1517324542-6607-8-git-send-email-peter.maydell@linaro.org
Make the load of the exception vector from the vector table honour
the SAU and any bus error on the load (possibly provoking a derived
exception), rather than simply aborting if the load fails.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1517324542-6607-7-git-send-email-peter.maydell@linaro.org
Make v7m_push_callee_stack() honour the MPU by using the
new v7m_stack_write() function. We return a flag to indicate
whether the pushes failed, which we can then use in
v7m_exception_taken() to cause us to handle the derived
exception correctly.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1517324542-6607-6-git-send-email-peter.maydell@linaro.org
The memory writes done to push registers on the stack
on exception entry in M profile CPUs are supposed to
go via MPU permissions checks, which may cause us to
take a derived exception instead of the original one if
the MPU lookup fails. We were implementing these as
always-succeeds direct writes to physical memory.
Rewrite v7m_push_stack() to do the necessary checks.
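A rough sketch of the intended shape (the helper name comes from the earlier
patch in this series; its signature and the check/pend functions here are
hypothetical stand-ins, not the actual patch):

    /* Sketch only: check_stack_write() and pend_derived_exception() are
     * hypothetical placeholders for the MPU/SAU/bus-error checks. */
    static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value)
    {
        if (!check_stack_write(cpu, addr)) {
            pend_derived_exception(cpu, addr); /* MemManage/SecureFault/BusFault */
            return false;                      /* push did not complete */
        }
        store_to_guest_memory(cpu, addr, value); /* hypothetical checked store */
        return true;
    }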
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1517324542-6607-5-git-send-email-peter.maydell@linaro.org
In the v8M architecture, if the process of taking an exception
results in a further exception this is called a derived exception
(for example, an MPU exception when writing the exception frame to
memory). If the derived exception happens while pushing the initial
stack frame, we must ignore any subsequent possible exception
pushing the callee-saves registers.
In preparation for making the stack writes check for exceptions,
add a return value from v7m_push_stack() and a new parameter to
v7m_exception_taken(), so that the former can tell the latter that
it needs to ignore failures to write to the stack. We also plumb
the argument through to v7m_push_callee_stack(), which is where
the code to ignore the failures will be.
(Note that the v8M ARM pseudocode structures this slightly differently:
derived exceptions cause the attempt to process the original
exception to be abandoned; then at the top level it calls
DerivedLateArrival to prioritize the derived exception and call
TakeException from there. We choose to let the NVIC do the prioritization
and continue forward with a call to TakeException which will then
take either the original or the derived exception. The effect is
the same, but this structure works better for QEMU because we don't
have a convenient top level place to do the abandon-and-retry logic.)
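The resulting call pattern, roughly (simplified sketch; argument names are
approximate):

    /* If pushing the frame faulted, the derived exception has already been
     * pended; tell v7m_exception_taken() to ignore further stack faults. */
    bool ignore_stackfaults = v7m_push_stack(cpu); /* true if the push faulted */
    v7m_exception_taken(cpu, lr, false, ignore_stackfaults);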
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1517324542-6607-4-git-send-email-peter.maydell@linaro.org
Currently armv7m_nvic_acknowledge_irq() does three things:
* make the current highest priority pending interrupt active
* return a bool indicating whether that interrupt is targeting
Secure or NonSecure state
* implicitly tell the caller which is the highest priority
pending interrupt by setting env->v7m.exception
We need to split these jobs, because v7m_exception_taken()
needs to know whether the pending interrupt targets Secure so
it can choose to stack callee-saves registers or not, but it
must not make the interrupt active until after it has done
that stacking, in case the stacking causes a derived exception.
Similarly, it needs to know the number of the pending interrupt
so it can read the correct vector table entry before the
interrupt is made active, because vector table reads might
also cause a derived exception.
Create a new armv7m_nvic_get_pending_irq_info() function which simply
returns information about the highest priority pending interrupt, and
use it to rearrange the v7m_exception_taken() code so we don't
acknowledge the exception until we've done all the things which could
possibly cause a derived exception.
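In rough outline the new ordering looks like this (sketch; details
simplified):

    int irq;
    bool targets_secure;

    /* Ask which exception is pending and whether it targets Secure,
     * without making it active yet. */
    armv7m_nvic_get_pending_irq_info(env->nvic, &irq, &targets_secure);
    /* ...stack callee-saves and read the vector table entry; either of
     * these may fault and pend a derived exception... */
    armv7m_nvic_acknowledge_irq(env->nvic); /* only now make it active */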
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1517324542-6607-3-git-send-email-peter.maydell@linaro.org
In order to support derived exceptions (exceptions generated in
the course of trying to take an exception), we need to be able
to handle prioritizing whether to take the original exception
or the derived exception.
We do this by introducing a new function
armv7m_nvic_set_pending_derived() which the exception-taking code in
helper.c will call when a derived exception occurs. Derived
exceptions are dealt with mostly like normal pending exceptions, so
we share the implementation with the armv7m_nvic_set_pending()
function.
Note that the way we structure this is significantly different
from the v8M Arm ARM pseudocode: that does all the prioritization
logic in the DerivedLateArrival() function, whereas we choose to
let the existing "identify highest priority exception" logic
do the prioritization for us. The effect is the same, though.
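The sharing can be sketched like this (names approximate):

    static void do_armv7m_nvic_set_pending(void *opaque, int irq,
                                           bool secure, bool derived);

    void armv7m_nvic_set_pending(void *opaque, int irq, bool secure)
    {
        do_armv7m_nvic_set_pending(opaque, irq, secure, false);
    }

    void armv7m_nvic_set_pending_derived(void *opaque, int irq, bool secure)
    {
        do_armv7m_nvic_set_pending(opaque, irq, secure, true);
    }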
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1517324542-6607-2-git-send-email-peter.maydell@linaro.org
For now, the kernel does not properly indicate configured CPU subfunctions
to the guest, but simply uses the host values (as support in KVM is still
missing). That's why we failed to model the PTFF subfunctions that come
with the Multiple-epoch facility.
Let's properly add these, along with a new feature group.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180205102935.14736-1-david@redhat.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
AEN and AIS can be provided unconditionally; ZPCI should be turned on
manually.
With -cpu qemu,zpci=on, the guest kernel can now successfully detect
virtio-pci devices under tcg.
Also fixup the order of the MSA_EXT_{3,4} flags while at it.
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
On s390x, pci support is implemented via a set of instructions
(no mmio). Unfortunately, none of them are documented in the
PoP; the code is based upon the existing implementation for KVM
and the Linux zpci driver.
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
This avoids tons of conversions when handling interrupts.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180129125623.21729-19-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
This avoids tons of conversions when handling interrupts.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180129125623.21729-17-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Current STSI implementation is a mess, so let's rewrite it.
Problems fixed by this patch:
1) The order of exceptions/when recognized is wrong.
2) We have to store to virtual address space, not absolute.
3) Alignment check of the block is missing.
4) The SMP information is not indicated.
While at it:
a) Make the code look nicer
- get rid of nesting levels
- use struct initialization instead of initializing to zero
- rename a misspelled field and rename function code defines
- use a union and have only one write statement
- use cpu_to_beX()
b) Indicate the VM name/extended name + UUID just like KVM does
c) Indicate that all LPAR CPUs we fake are dedicated
d) Add a comment why we fake being a KVM guest
e) Give our guest as default the name "TCGguest"
f) Fake the same CPU information we have in our Guest for all layers
While at it, get rid of "potential_page_fault()" by forwarding the
retaddr properly.
The result is best verified by looking at "/proc/sysinfo" in the guest
when specifying on the qemu command line
-uuid "74738ff5-5367-5958-9aee-98fffdcd1876" \
-name "extra long guest name"
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180129125623.21729-14-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
All blocks should be 4k in size, which is currently only true for two of them.
Also some reserved fields were wrong; fix them and convert all reserved
fields to u8.
This also fixes the LPAR part of the output in /proc/sysinfo under TCG
(until now, everything was indicated as 0).
While at it, introduce typedefs for these structs and use them in TCG/KVM
code.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180129125623.21729-13-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Kicking all CPUs on every floating interrupt is far from efficient.
Let's optimize it at least a little bit.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180129125623.21729-12-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Use s390_cpu_virt_mem_write() so we can actually revert what we did
(re-inject the dequeued IO interrupt).
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180129125623.21729-10-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Move floating interrupt handling into the flic. Floating interrupts
will now be considered by all CPUs, not just CPU #0. While at it, convert
I/O interrupts to use a list and make sure we properly consider I/O
sub-classes in s390_cpu_has_io_int().
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180129125623.21729-9-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
This is a preparation for floating interrupt support and only applies to
MTTCG; single-threaded TCG works just fine. If a floating interrupt wakes
up a VCPU and the CPU thinks it can run (clearing cs->halted), another
VCPU might already have picked up the interrupt by the time it would be
delivered, resulting in a wakeup without an interrupt (and execution of
the wrong code).
It is wrong to let the VCPU continue to execute (the WAIT PSW). Instead,
we have to put the VCPU back to sleep.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180129125623.21729-8-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Let the flic device handle it internally. This will allow us to later
on store floating interrupts in the flic for the TCG case.
This now also simplifies kvm.c. All that's left is the fallback
interface for floating interrupts, which is now triggered directly via
the flic in case anything goes wrong.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180129125623.21729-6-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We currently only support CRW machine checks. This is a preparation for
real floating interrupt support.
Get rid of the queue and handle it via the bit INTERRUPT_MCHK. We don't
rename it for now, as it will soon be gone (when moving CRW machine checks
into the flic).
Please note that this is also the way KVM handles it: only one
instance of a machine check can be pending at a time. So no need for a
queue.
While at it, make sure we try to deliver only if env->cregs[14]
actually indicates that CRWs are accepted.
Drop two unused defines on the way (we already have PSW_MASK_...).
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180129125623.21729-5-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We have to consider all deliverable interrupts.
We now have to take care of the special scenario where we first
inject an interrupt with a WAIT PSW, followed by a !WAIT PSW (very
unlikely, but possible).
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180129125623.21729-2-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20180201111846.21846-4-armbru@redhat.com>
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.
This commit was created with scripts/clean-includes, with the change
to target/s390x/gen-features.c manually reverted, and blank lines
around deletions collapsed.
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20180201111846.21846-3-armbru@redhat.com>
System headers should be included with <...>, our own headers with
"...". Offenders tracked down with an ugly, brittle and probably
buggy Perl script. Previous iteration was commit a9c94277f0.
Delete inclusions of "string.h" and "strings.h" instead of fixing them
to <string.h> and <strings.h>, because we always include these via
osdep.h.
Put the cleaned up system header includes first.
While there, separate #include from file comment with exactly one
blank line.
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20180201111846.21846-2-armbru@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Implements the WHPX accelerator cpu enlightenments to actually use the whpx-all
accelerator on Windows platforms.
Signed-off-by: Justin Terry (VM) <juterry@microsoft.com>
Message-Id: <1516655269-1785-5-git-send-email-juterry@microsoft.com>
[Register/unregister VCPU thread with RCU. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Implements the Windows Hypervisor Platform accelerator (WHPX) target, which
acts as a hypervisor accelerator for QEMU on the Windows platform. This gives
QEMU much greater speed than the emulated x86_64 paths that are taken on
Windows today.
1. Adds support for vPartition management.
2. Adds support for vCPU management.
3. Adds support for MMIO/PortIO.
4. Registers the WHPX ACCEL_CLASS.
Signed-off-by: Justin Terry (VM) <juterry@microsoft.com>
Message-Id: <1516655269-1785-4-git-send-email-juterry@microsoft.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This is a preparation for a follow-up patch to call region_del() in
memory_listener_unregister(); otherwise all device addresses attached to
kvm_devices_head will be reset before kvm_arm_set_device_addr() is called.
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180122060244.29368-3-peterx@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In order to enable TCE operations support in KVM, we have to inform
KVM about VFIO groups being attached to specific LIOBNs;
the necessary bits are implemented already by IOMMU MR and VFIO.
This defines get_attr() for the SPAPR TCE IOMMU MR which makes VFIO
call the KVM_DEV_VFIO_GROUP_SET_SPAPR_TCE ioctl and establish
LIOBN-to-IOMMU link.
This changes spapr_tce_set_need_vfio() to avoid TCE table reallocation
if the kernel supports the TCE acceleration.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
[aw - remove unnecessary sys/ioctl.h include]
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
hvf.c and vmx.h contain code from hvdos.c that is released as public domain:
from hvdos github: https://github.com/mist64/hvdos
"License
See LICENSE.txt (2-clause-BSD).
In order to simplify use of this code as a template, you can consider any parts from "hvdos.c" and "interface.h" as being in the public domain."
Signed-off-by: Izik Eidus <izik@veertu.com>
Message-Id: <20180123123639.35255-2-izik@veertu.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We masked the wrong bits, which prevented selection of some of the
32-bit R registers, e.g. "fcnvxf,sgl,sgl fr22R,fr6R".
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This is an extension to the base ISA, but we can use this in
the kernel idle loop to reduce the host cpu time consumed.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
HP-UX 10.20 CD contains "add r0, r0, r27" in a delay slot,
which uses at least 5 temps.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Unknown why this works, but if we return EXCP_ITLB_MISS we
will triple-fault the first userland instruction fetch.
Is it something to do with having a combined I/DTLB?
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Linux sets sr4-sr7 all to the same value, which means that we
need not do any runtime computation to find out what space to
use in forming the GVA.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Real hardware would use an external device to control the power.
But for the moment let's invent instructions in reserved space,
to be used by our custom firmware.
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
However since HPPA has a software-managed TLB, and the relevant
TLB manipulation instructions are not implemented, this does not
actually do anything.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Any one TB will have only one space value. If we change spaces,
we change TBs. Thus BE and BEV must exit the TB immediately.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
These instructions force the destination privilege level
of the branch destination to be no higher than current.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This changes the system virtual address width to 64-bit and
incorporates the space registers into load/store operations.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
While the E bit is only used for pa2.0 mfctl,w from sar,
the otherwise reserved bit does not appear to be decoded.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Most aspects of privilege are not yet handled. But this
gives us the start from which to begin checking.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
For system mode, we will need 64-bit virtual addresses even when
we have 32-bit register sizes. Since the rest of QEMU equates
TARGET_LONG_BITS with the address size, redefine everything
related to register size in terms of a new TARGET_REGISTER_BITS.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We don't actually do anything with most of the bits yet,
but at least they have names and we have somewhere to
store them.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
With the addition of default-configs/hppa-softmmu.mak, this
will compile. It is not enabled with this patch, however.
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Add three new kvm capabilities used to represent the level of host support
for three corresponding workarounds.
Host support for each of the capabilities is queried through the
new ioctl KVM_PPC_GET_CPU_CHAR which returns four uint64 quantities. The
first two, character and behaviour, represent the available
characteristics of the cpu and the behaviour of the cpu respectively.
The second two, c_mask and b_mask, represent the mask of known bits for
the character and behaviour dwords respectively.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
[dwg: Correct some compile errors due to name change in final kernel
patch version]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
and introduce SFC and DFC control registers.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180118193846.24953-6-laurent@vivier.eu>
The instruction "moves" can select source and destination
address space (user or kernel). This patch modifies
all the load/store functions to be able to provide
the address space the caller wants to use instead
of using the current one. All the callers are modified
to provide the default address space to these functions.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180118193846.24953-5-laurent@vivier.eu>
Only add MC68040 MMU page table processing and related
registers (Special Status Word, Translation Control Register,
User Root Pointer and Supervisor Root Pointer).
Transparent Translation Registers, DFC/SFC and pflush/ptest
will be added later.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180118193846.24953-3-laurent@vivier.eu>
The MC68040 MMU provides the size of the access that
triggers the page fault.
This size is set in the Special Status Word which
is written in the stack frame of the access fault
exception.
So we need the size in m68k_cpu_unassigned_access() and
m68k_cpu_handle_mmu_fault().
To be able to do that, this patch modifies the prototype of
handle_mmu_fault handler, tlb_fill() and probe_write().
do_unassigned_access() already includes a size parameter.
This patch also updates handle_mmu_fault handlers and
tlb_fill() of all targets (only parameter, no code change).
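As a sketch, the prototype change looks roughly like this:

    /*
     * Sketch of the prototype change (exact spelling per target may differ):
     *
     *   before: void tlb_fill(CPUState *cs, target_ulong addr,
     *                         MMUAccessType access_type, int mmu_idx,
     *                         uintptr_t retaddr);
     *   after:  void tlb_fill(CPUState *cs, target_ulong addr, int size,
     *                         MMUAccessType access_type, int mmu_idx,
     *                         uintptr_t retaddr);
     */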
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180118193846.24953-2-laurent@vivier.eu>
t64 is also unconditionally freed after the switch () { ... }
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Thomas Huth <huth@tuxfamily.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180119114444.7590-1-laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180119045438.28582-16-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180119045438.28582-15-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180119045438.28582-14-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Not enabled anywhere so far.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180119045438.28582-11-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Helpers that return a pointer into env->vfp.regs so that we isolate
the logic of how to index the regs array for different cpu modes.
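The idea, as a sketch (hypothetical helper name; it assumes the regs array
holds plain uint64_t values, as introduced elsewhere in this series):

    /* Hypothetical sketch: callers no longer open-code the indexing. */
    static inline uint64_t *vfp_dreg_ptr(CPUARMState *env, unsigned regno)
    {
        return &env->vfp.regs[regno]; /* indexing policy isolated in one place */
    }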
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180119045438.28582-7-richard.henderson@linaro.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
All direct users of this field want an integral value. Drop all
of the extra casting between uint64_t and float64.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180119045438.28582-6-richard.henderson@linaro.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rather than passing a regno to the helper, pass pointers to the
vector register directly. This eliminates the need to pass in
the environment pointer and reduces the number of places that
directly access env->vfp.regs[].
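A sketch of the pattern (hypothetical helper and opcode names; the real patch
differs in detail):

    /* before: DEF_HELPER_4(crypto_op, void, env, i32, i32, i32)
     *         and the helper indexes env->vfp.regs[] itself.
     * after:  DEF_HELPER_3(crypto_op, void, ptr, ptr, ptr)
     *         and the translator hands over pointers directly: */
    TCGv_ptr pd = vfp_reg_ptr(rd); /* hypothetical: pointer to env->vfp.regs[rd] */
    TCGv_ptr pn = vfp_reg_ptr(rn);
    TCGv_ptr pm = vfp_reg_ptr(rm);
    gen_helper_crypto_op(pd, pn, pm); /* no env argument needed any more */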
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180119045438.28582-5-richard.henderson@linaro.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rather than passing regnos to the helpers, pass pointers to the
vector registers directly. This eliminates the need to pass in
the environment pointer and reduces the number of places that
directly access env->vfp.regs[].
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180119045438.28582-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rather than passing regnos to the helpers, pass pointers to the
vector registers directly. This eliminates the need to pass in
the environment pointer and reduces the number of places that
directly access env->vfp.regs[].
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180119045438.28582-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If it isn't used when translate.h is included,
we'll get a compiler Werror.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180119045438.28582-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Commit 3b39d734141a ("target/arm: Handle page table walk load failures
correctly") modified both versions of the page table walking code (i.e.,
arm_ldl_ptw and arm_ldq_ptw) to record the result of the translation in
a temporary 'data' variable so that it can be inspected before being
returned. However, arm_ldq_ptw() returns a uint64_t, and using a
temporary uint32_t variable truncates the upper bits, corrupting the
result. This causes problems when using more than 4 GB of memory in
a TCG guest. So use a uint64_t instead.
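The essence of the fix, as a sketch of the change in arm_ldq_ptw():

    -    uint32_t data;   /* silently truncates the 64-bit descriptor */
    +    uint64_t data;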
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Message-id: 20180119194648.25501-1-ard.biesheuvel@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Coverity warnings CID 1385146, 1385148, 1385149 and 1385150 point out that
xtensa_opcode_num_operands and xtensa_format_num_slots may return -1
even when xtensa_opcode_decode and xtensa_format_decode succeed. In that
case unsigned counters used to iterate through operands/slots will not
do the right thing.
Make counters and loop bounds signed to fix the warnings.
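Illustrative sketch of the pattern being fixed (simplified; not the exact
code):

    int nops = xtensa_opcode_num_operands(isa, opc); /* may be -1 on error */
    for (int i = 0; i < nops; ++i) {
        /* with a signed counter and bound, a -1 return simply skips the
         * loop; with unsigned types the comparison would wrap around */
    }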
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
The sample_controller core is a simple noMMU general-purpose core, a
modern analog of the de212. It is used as the default core in the xtensa
port of Zephyr.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Define default core for noMMU configurations and use that core as
machine default with noMMU XTFPGA machines.
This is done to avoid offering non-working configuration (MMU core on a
noMMU machine) as a default.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
stfle.81 (ppa15) is a transparent facility that can be passed to the
guest without the need to implement hypervisor support. As this feature
can be provided by firmware we add it to all full models.
Cc: qemu-stable@nongnu.org
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <20180118085628.40798-4-borntraeger@de.ibm.com>
Reviewed-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We need to handle the bpb control on reset and migration. Normally
stfle.82 is transparent (and the normal guest part works without
hypervisor activity). To prevent any issues we require full
host kernel support for this feature.
Cc: qemu-stable@nongnu.org
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <20180118085628.40798-3-borntraeger@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
[CH: 'Branch Prediction Blocking' -> 'Branch prediction blocking']
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
CC == 2 can only happen due to a protection exception, not if memory is
not available (PGM_ADDRESSING). So all PGM_ADDRESSING exceptions have to
be forwarded to the guest.
Since the initial definition of TEST PROTECTION, we now read globals
(e.g. PSW mask), so we have to correctly mark the instruction
(otherwise, e.g. booting fedora 27 fails).
Also, the architecture explicitly specifies which exceptions are
forwarded to the guest; this makes the code a little nicer.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180112125452.8569-1-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Linux uses TEST PROTECTION to sense for available memory locations.
Let's implement what we can for now (just as for the other instructions,
excluding AR mode and special protection mechanisms).
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171218224616.21030-2-david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The hypervisor doorbells are used by skiboot and Linux on POWER9
processors to wake up secondaries.
This adds processor control support to the Server architecture by
reusing the Embedded support. They are very similar, only the bits
definition of the CPU identifier differ.
Still to be done is message broadcast to all threads of the same
processor.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We know that only one bit (in addition to SO) is going to be set in
the condition register, so do two movconds instead of three setconds,
three shifts and two ORs.
For ppc64-linux-user, the code size reduction is around 5% and the
performance improvement slightly less than 10%. For softmmu, the
improvement is around 5%.
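In plain C the two-select computation looks like this (illustrative only,
not the TCG code):

    /* CR field bits: LT=8, GT=4, EQ=2, SO=1; exactly one of LT/GT/EQ is set. */
    uint32_t crf = so;                      /* SO copied from XER */
    crf |= (a < b) ? 8 : ((a > b) ? 4 : 2); /* two conditional selects */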
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Commit f03a1af581 ("ppc: Fix POWER7 and POWER8 exception definitions")
introduced definitions for the server doorbell exceptions by reusing
the embedded definitions, but this adds complexity to the powerpc_excp()
routine. Let's introduce specific definitions for the Server doorbell
exceptions.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Merge remote-tracking branch 'remotes/ehabkost/tags/x86-pull-request' into staging
x86 queue, 2018-01-17
Highlight: new CPU models that expose CPU features that guests
can use to mitigate CVE-2017-5715 (Spectre variant #2).
# gpg: Signature made Thu 18 Jan 2018 02:00:03 GMT
# gpg: using RSA key 0x2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6
* remotes/ehabkost/tags/x86-pull-request:
i386: Add EPYC-IBPB CPU model
i386: Add new -IBRS versions of Intel CPU models
i386: Add FEAT_8000_0008_EBX CPUID feature word
i386: Add spec-ctrl CPUID bit
i386: Add support for SPEC_CTRL MSR
i386: Change X86CPUDefinition::model_id to const char*
target/i386: add clflushopt to "Skylake-Server" cpu model
pc: add 2.12 machine types
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
EPYC-IBPB is a copy of the EPYC CPU model with
just CPUID_8000_0008_EBX_IBPB added.
Cc: Jiri Denemark <jdenemar@redhat.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20180109154519.25634-7-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
The new IA32_SPEC_CTRL MSR was introduced by a recent Intel
microcode update and can be used by OSes to mitigate
CVE-2017-5715. Unfortunately we can't change the existing CPU
models without breaking existing setups, so users need to
explicitly update their VM configuration to use the new *-IBRS
CPU model if they want to expose IBRS to guests.
The new CPU models are simple copies of the existing CPU models,
with just CPUID_7_0_EDX_SPEC_CTRL added and model_id updated.
Cc: Jiri Denemark <jdenemar@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20180109154519.25634-6-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Add the new feature word and the "ibpb" feature flag.
Based on a patch by Paolo Bonzini.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20180109154519.25634-5-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Add the feature name and a CPUID_7_0_EDX_SPEC_CTRL macro.
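For reference, the bit in question (sketch; value taken from the public
CPUID documentation for leaf 7, EDX):

    #define CPUID_7_0_EDX_SPEC_CTRL (1U << 26) /* IBRS/IBPB speculation control */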
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20180109154519.25634-4-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
It is valid to have a 48-character model ID in CPUID; however, the
definition of X86CPUDefinition::model_id is char[48], which can
make the compiler drop the null terminator from the string.
If a CPU model happens to have 48 bytes on model_id, "-cpu help"
will print garbage and the object_property_set_str() call at
x86_cpu_load_def() will read data outside the model_id array.
We could increase the array size to 49, but this would mean the
compiler would not issue a warning if a 49-char string is used by
mistake for model_id.
To make things simpler, simply change model_id to be const char*,
and validate the string length using an assert() on
x86_register_cpudef_type().
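The validation amounts to something like this (sketch; the exact location
and spelling may differ):

    /* A 48-byte CPUID brand field allows at most 48 characters. */
    assert(strlen(def->model_id) <= 48);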
Reported-by: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20180109154519.25634-2-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
CPUID_7_0_EBX_CLFLUSHOPT is missing from the current "Skylake-Server" cpu
model. Add it to "Skylake-Server" cpu model on pc-i440fx-2.12 and
pc-q35-2.12. Keep it disabled in "Skylake-Server" cpu model on older
machine types.
Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
Message-Id: <20171219033730.12748-3-haozhong.zhang@intel.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
When overwriting a valid TLB entry with a new one, the previous page
was not flushed from the QEMU TLB, leading to an incoherent mapping. This
commit fixes that.
Signed-off-by: Luc MICHEL <luc.michel@git.antfield.fr>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We recently had some discussions that were sidetracked for a while, because
nearly everyone misapprehended the purpose of the 'max_threads' field in
the compatibility modes table. It's all about guest expectations, not host
expectations or support (that's handled elsewhere).
In an attempt to avoid a repeat of that confusion, rename the field to
'max_vthreads' and add an explanatory comment.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Increases the max SMT mode to 8 for POWER9. That's because KVM supports
SMT emulation on this platform, so QEMU should allow users to use it as
well.
Today, if we try to pass -smp ...,threads=8, QEMU will silently truncate
it to SMT4 mode, which may cause a crash if we try to perform a CPU
hotplug.
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
[dwg: Added an explanatory comment]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
When constructing the "host" cpu class we modify whether the VMX and VSX
vector extensions and DFP (Decimal Floating Point) are available
based on whether KVM can support those instructions. This can depend on
policy in the host kernel as well as on the actual host cpu capabilities.
However, the way we probe for this is not very nice: we explicitly check
the host's device tree. That works in practice, but it's not really
correct, since the device tree is a property of the host kernel's platform
which we don't really know about. We get away with it because the only
modern POWER platforms happen to encode VMX, VSX and DFP availability in
the device tree in the same way.
Arguably we should have an explicit KVM capability for this, but we haven't
needed one so far. Barring specific KVM policies which don't yet exist,
each of these instruction classes will be available in the guest if and
only if they're available in the qemu userspace process. We can determine
that from the ELF AUX vector we're supplied with.
Once reworked like this, there are no more callers for kvmppc_get_vmx() and
kvmppc_get_dfp() so remove them.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
As stated in the 1ad9f0a464 commit log, the returned entries are not
a whole PTEG. This was not a problem before 1ad9f0a464, as the code would
read a single record assuming it contained a whole PTEG; but now the code
tries reading the entire PTEG, and "if ((n - i) < invalid)" produces
negative values which are then converted to size_t for memset(), causing
a segfault.
This fixes the math.
While here, fix the last @i increment as well.
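A stand-alone illustration of the failure mode (not the actual code): a
negative int converted to size_t becomes a huge value, so memset() is asked
to clear (almost) the whole address space.

    #include <stdio.h>
    #include <stddef.h>

    int main(void)
    {
        int n = 2, i = 5;
        printf("%zu\n", (size_t)(n - i)); /* 18446744073709551613 on 64-bit hosts */
        return 0;
    }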
Fixes: 1ad9f0a464 "target/ppc: Fix KVM-HV HPTE accessors"
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The point of writing a macro embedded in a 'do { ... } while (0)'
loop (particularly if the macro has multiple statements or would
otherwise end with an 'if' statement) is so that the macro can be
used as a drop-in statement with the caller supplying the
trailing ';'. Although our coding style frowns on brace-less 'if':
if (cond)
statement;
else
something else;
that is the classic case where failure to use do/while(0) wrapping
would cause the 'else' to pair with any embedded 'if' in the macro
rather than the intended outer 'if'. But conversely, if the macro
includes an embedded ';', then the same brace-less coding style
would now have two statements, making the 'else' a syntax error
rather than pairing with the outer 'if'. Thus, even though our
coding style with required braces is not impacted, ending a macro
with ';' makes our code harder to port to projects that use
brace-less styles.
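A stand-alone illustration (assume cond, foo() and bar() are declared):

    #define GOOD() do { foo(); } while (0)   /* caller supplies the ';'  */
    #define BAD()  do { foo(); } while (0);  /* ';' embedded in the macro */

    if (cond)
        GOOD();   /* expands to one statement: the 'else' pairs as intended  */
    else
        bar();

    if (cond)
        BAD();    /* expands to two statements: the 'else' is a syntax error */
    else
        bar();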
The change should have no semantic impact. I was not able to
fully compile-test all of the changes (as some of them are
examples of the ugly bit-rotting debug print statements that are
completely elided by default, and I didn't want to recompile
with the necessary -D witnesses - cleaning those up is left as a
bite-sized task for another day); I did, however, audit that for
all files touched, all callers of the changed macros DID supply
a trailing ';' at the callsite, and did not appear to be used
as part of a brace-less conditional.
Found mechanically via: $ git grep -B1 'while (0);' | grep -A1 \\\\
Signed-off-by: Eric Blake <eblake@redhat.com>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20171201232433.25193-7-eblake@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
It is more typical to provide the ';' by the caller of a macro
than to embed it in the macro itself; this is because syntax
highlight engines can get confused if a macro is called without
a semicolon before the closing '}'.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20171201232433.25193-3-eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The entry is removed from the list but is not freed.
Signed-off-by: linzhecheng <linzhecheng@huawei.com>
Message-Id: <20171225024704.19540-1-linzhecheng@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
x86_update_hflags references env->efer, which is updated in hax_get_msrs,
so it has to be called after hax_get_msrs. This fixes the bug where
dump_state sometimes shows 32-bit regs even in 64-bit mode.
Signed-off-by: Tao Wu <lepton@google.com>
Message-Id: <20180110195056.85403-3-lepton@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Change to use x86_update_hflags instead of keeping another copy
on the hax side. This also fixes bugs like HF_CPL_MASK, which should be
SS.DPL, not CS.DPL.
Signed-off-by: Tao Wu <lepton@google.com>
Message-Id: <20180110195056.85403-2-lepton@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We will share the same code for hax/kvm.
Signed-off-by: Tao Wu <lepton@google.com>
Message-Id: <20180110195056.85403-1-lepton@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180110063337.21538-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180110063337.21538-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Instead of ignoring the response from address_space_ld*()
(indicating an attempt to read a page table descriptor from
an invalid physical address), use it to report the failure
correctly.
Since this is another couple of locations where we need to
decide the value of the ARMMMUFaultInfo ea bit based on a
MemTxResult, we factor out that operation into a helper
function.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For PMSAv7, the v7A/R Arm ARM defines that setting AP to 0b111
is an UNPREDICTABLE reserved combination. However, for v7M
this value is documented as having the same behaviour as 0b110:
read-only for both privileged and unprivileged. Accept this
value on an M profile core rather than treating it as a guest
error and a no-access page.
Reported-by: Andy Gross <andy.gross@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1512742402-31669-1-git-send-email-peter.maydell@linaro.org
Some older versions of gcc complain if a typedef is defined twice:
target/xtensa/translate.c:81: error: redefinition of typedef 'DisasContext'
target/xtensa/cpu.h:339: note: previous declaration of 'DisasContext' was here
Remove the now-redundant typedef from the definition of the struct in
translate.c.
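The shape of the fix, as a sketch:

    /* translate.c: define the struct only; the typedef stays in cpu.h */
    -typedef struct DisasContext {
    +struct DisasContext {
         /* ...fields unchanged... */
    -} DisasContext;
    +};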
Reported-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1515762528-22818-1-git-send-email-peter.maydell@linaro.org
Certain PMU-related MSRs are not supported for CPUs with PMU
architecture below version 2. KVM rejects any access to them (see
intel_is_valid_msr_idx routine in KVM), and QEMU fails on the following
assertion:
kvm_put_msrs: Assertion `ret == cpu->kvm_msr_buf->nmsrs' failed.
QEMU could also fail if KVM exposes fewer than 3 fixed counters. This can
happen if the host system runs inside another hypervisor which tweaks the
PMU-related CPUID. To prevent this possible failure, the number of fixed
counters is now obtained in the same way as the number of GP counters.
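Roughly, both counts now come from CPUID leaf 0xA (sketch; the query wrapper
shown is a hypothetical stand-in for the KVM-supported-CPUID lookup):

    uint32_t eax, edx;
    cpuid_leaf_0xa(&eax, &edx);                       /* hypothetical wrapper */
    unsigned num_gp_counters    = (eax >> 8) & 0xff;  /* EAX[15:8] */
    unsigned num_fixed_counters = edx & 0x1f;         /* EDX[4:0]  */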
Reviewed-by: Roman Kagan <rkagan@virtuozzo.com>
Signed-off-by: Jan Dakinevich <jan.dakinevich@virtuozzo.com>
Message-Id: <1514383466-7257-1-git-send-email-jan.dakinevich@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
DE212 is a noMMU core supported in Linux. Import this core to provide a
true noMMU configuration for xtensa Linux to run on QEMU.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Refactor disas_thumb2_insn() so that it generates the code for raising
an UNDEF exception for invalid insns, rather than returning a flag
which the caller must check to see if it needs to generate the UNDEF
code. This brings the function in to line with the behaviour of
disas_thumb_insn() and disas_arm_insn().
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1513080506-17703-1-git-send-email-peter.maydell@linaro.org
ldxp loads two consecutive doublewords from memory regardless of CPU
endianness. On store, stlxp currently assumes it is working with a 128-bit
value and consequently switches the order in big-endian mode. With this
change it packs the doublewords in reverse order, in anticipation of the
128-bit big-endian store operation interposing them so they end up in
memory in the right order. This makes it work for both MTTCG and !MTTCG.
It effectively implements the ARM ARM STLXP operation pseudo-code:
data = if BigEndian() then el1:el2 else el2:el1;
With this change an aarch64_be Linux 4.14.4 kernel succeeds to boot up
in system emulation mode.
Signed-off-by: Michael Weiser <michael.weiser@gmx.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-2.12-20180111' into staging
ppc patch queue 2018-01-11
This pull request supersedes ppc-for-2.12-20180108 and several before
it. The earlier pull request included a patch which exposed a bug in
the ARM TCG backend. I've pulled that out and will repost once the
ARM bug is fixed (a patch has been posted by Richard Henderson).
Highlights from this series:
* SLOF update
* Several new devices for embedded platforms
* Fix to correctly set compatibility mode for hotplugged CPUs
* dtc compile fix for older MacOS versions
# gpg: Signature made Thu 11 Jan 2018 04:58:11 GMT
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.12-20180111:
spapr: Correct compatibility mode setting for hotplugged CPUs
hw/ppc: Remove the deprecated spapr-pci-vfio-host-bridge device
Update dtc to fix compilation problem on Mac OS 10.6
target/ppc: more use of the PPC_*() macros
ppc/pnv: change powernv_ prefix to pnv_ for overall naming consistency
hw/ide: Emulate SiI3112 SATA controller
spapr_pci: use warn_report()
ppc4xx_i2c: Implement basic I2C functions
sm501: Add some more unimplemented registers
sm501: Add panel hardware cursor registers also to read function
pseries: Update SLOF firmware image to qemu-slof-20171214
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/mcayland/tags/qemu-sparc-signed' into staging
qemu-sparc update
# gpg: Signature made Tue 09 Jan 2018 22:12:22 GMT
# gpg: using RSA key 0x5BC2C56FAE0F321F
# gpg: Good signature from "Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>"
# Primary key fingerprint: CC62 1AB9 8E82 200D 915C C9C4 5BC2 C56F AE0F 321F
* remotes/mcayland/tags/qemu-sparc-signed: (25 commits)
sun4u_iommu: add trace event for IOMMU translations
sun4u_iommu: convert from IOMMU_DPRINTF to trace-events
sun4u_iommu: update to reflect IOMMU is no longer part of the APB device
sun4u: split IOMMU device out from apb.c to sun4u_iommu.c
apb: QOMify IOMMU
sun4m: remove include/hw/sparc/sun4m.h and all references to it
sun4m: move IOMMU declarations from sun4m.h to sun4m_iommu.h
sun4m: move sun4m_iommu.c from hw/dma to hw/sparc
sun4u: switch from EBUS_DPRINTF() macro to trace-events
sparc64: introduce trace-events for hw/sparc64
apb: replace OBIO interrupt numbers in pci_pbmA_map_irq() with constants
ebus: wire up OBIO interrupts to APB pbm via qdev GPIOs
apb: remove busA property from PBMPCIBridge state
apb: split pci_pbm_map_irq() into separate functions for bus A and bus B
apb: remove pci_apb_init() and instantiate APB device using qdev
apb: move the two secondary PCI bridges objects into APBState
apb: use gpios to wire up the apb device to the SPARC CPU IRQs
apb: return APBState from pci_apb_init() rather than PCIBus
apb: APB QOMify tidy-up
sun4u: move initialisation of all ISABus devices into ebus_realize()
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Also introduce utilities to manipulate bitmasks (originally from OPAL)
which will be used in the model of the XIVE interrupt controller.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This code is preventing the MMU debug code from displaying virtual
mappings of IO devices (anything that is not located in RAM).
Before this patch, QEMU would output 0xffffffffffffffff (-1) as the
physical address corresponding to an IO device virtual address.
With this patch the intended physical address is displayed.
Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
const16 is an opcode that shifts 16 lower bits of an address register
to the 16 upper bits and puts its immediate operand into the lower 16
bits. It is not controlled by an Xtensa option and doesn't have a fixed
opcode.
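A C model of the operation, for illustration only:

    static uint32_t const16(uint32_t at, uint16_t imm16)
    {
        return (at << 16) | imm16; /* low half shifted up, immediate in the low half */
    }
    /* Two const16 ops therefore load a full 32-bit constant:
     * const16(const16(x, 0x1234), 0x5678) == 0x12345678 for any x. */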
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
GPIO32 is not in the core ISA, but it was widely used in Diamond Cores.
This implementation doesn't do actual I/O and doesn't handle the case of
GPIO32 state being part of a coprocessor.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Add two special registers: MMID and DDR:
- MMID is write-only and the only side effect of writing to it is output
to the trace port, which is not emulated;
- DDR is only accessible in debug mode, which is not emulated.
Add two debug-mode-only opcodes:
- rfdd and rfdo return from debug mode, which is not emulated.
Add three internal opcodes for the full MMU:
- hwwdtlba and hwwitlba are internal opcodes that write a value into an
autoupdate DTLB or ITLB entry.
- ldpte is an internal opcode that loads the PTE entry that covers the most
recent page fault address.
None of these three opcodes may appear in a valid instruction.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
It doesn't help much: the always-set bit 0 of the LITBASE SR is easy to
compensate for by decrementing the l32r immediate argument.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Replace manual opcode analysis with libisa-based code. This makes it
possible to support variable-encoding instructions of the core ISA, like
const16, and will allow supporting advanced Xtensa features, like FLIX
and TIE.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Display correctly the Trace bits for 680x0
(2 bits instead of 1 for Coldfire).
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180104012913.30763-18-laurent@vivier.eu>
Add the third stack pointer, the Interrupt Stack Pointer (ISP)
(680x0 only). This stack will be needed in softmmu mode.
Update movec to set/get the value of the three stacks.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180104012913.30763-17-laurent@vivier.eu>
Some cleanup, and allow SR to be moved from any addressing mode.
The previous code was wrong for ColdFire: ColdFire also allows using
an addressing mode to set SR/CCR; it only supports a data register
to get SR/CCR (move from).
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180104012913.30763-15-laurent@vivier.eu>
The following patches will be clearer if we move
functions before adding new ones.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180104012913.30763-14-laurent@vivier.eu>
The instruction traps if the CPU is not in
Supervisor state but the helper is empty because
there is no easy way to reset all the peripherals
without resetting the CPU itself.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180104012913.30763-12-laurent@vivier.eu>
Add cache lines invalidate and cache lines push
as no-op operations, as we don't have cache.
These instructions are 68040 only.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180104012913.30763-11-laurent@vivier.eu>
move16 moves the source line to the destination line. Lines are aligned
to 16-byte boundaries and are 16 bytes long.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180104012913.30763-9-laurent@vivier.eu>
chk and chk2 compare a value to boundaries, and
trigger a CHK exception if the value is out of bounds.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180104012913.30763-8-laurent@vivier.eu>
Display the interrupts/exceptions information
in QEMU logs (-d int)
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180104012913.30763-6-laurent@vivier.eu>
As gen_helper_get_ccr() is able to compute CCR from cc_op and
flags, we don't need to flush the flags before calling it.
flush_flags() and get_ccr() both use COMPUTE_CCR() to compute
the flags: get_ccr() computes the CCR value,
whereas flush_flags() updates the live cc_op and flags.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180104012913.30763-3-laurent@vivier.eu>
And remove update_cc_op() from gen_exception() because there is
one in gen_jmp_im().
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180104012913.30763-2-laurent@vivier.eu>
With no fixed array allocation, we can't overflow a buffer.
This will be important as optimizations related to host vectors
may expand the number of ops used.
Use QTAILQ to link the ops together.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
These are now trivial sets and tests against NULL. Unwrap.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We should not exit unless moxie_cpu_handle_mmu_fault has failed.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
cpu_restore_state officially supports being passed an address it can't
resolve the state for. As a result the checks in the helpers are
superfluous and can be removed. This makes the code consistent with
other users of cpu_restore_state.
Of course this does nothing to address what to do if cpu_restore_state
can't resolve the state but so far it seems this is handled elsewhere.
The change was made with the included Coccinelle script.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
[rth: Fixed up comment indentation. Added second hunk to script to
combine cpu_restore_state and cpu_loop_exit.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Some VM_PANIC calls are actually caused by internal errors rather
than unsupported situations. Convert them to just abort.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch injects a GP fault when the guest vmexits by executing a
vmcall instruction.
Signed-off-by: Sergio Andres Gomez Del Real <Sergio.G.DelReal@gmail.com>
Message-Id: <20170913090522.4022-15-Sergio.G.DelReal@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch refactors the event-injection code for hvf by using the
appropriate fields already provided by CPUX86State. At vmexit, it fills
these fields so that hvf_inject_interrupts can just retrieve them without
calling into hvf.
Signed-off-by: Sergio Andres Gomez Del Real <Sergio.G.DelReal@gmail.com>
Message-Id: <20170913090522.4022-14-Sergio.G.DelReal@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch implements tracking of dirty VGA pages, using hvf's
interface to protect guest memory. It uses the MemoryListener callback
mechanism through .log_start/stop/sync
Signed-off-by: Sergio Andres Gomez Del Real <Sergio.G.DelReal@gmail.com>
Message-Id: <20170913090522.4022-13-Sergio.G.DelReal@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch generalizes some code in cpu.c for hypervisor-based
accelerators, calling the new hvf_get_supported_cpuid where
KVM used kvm_get_supported_cpuid.
Signed-off-by: Sergio Andres Gomez Del Real <Sergio.G.DelReal@gmail.com>
Message-Id: <20170913090522.4022-12-Sergio.G.DelReal@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>