Adds registration and get/set functions for enabling/disabling the AArch64
execution state on AArch64 CPUs. By default the AArch64 execution state is
enabled on AArch64 CPUs; setting the property to off disables the execution
state. The QEMU invocation below would have the AArch64 execution state
disabled:
$ ./qemu-system-aarch64 -machine virt -cpu cortex-a57,aarch64=off
Also adds stripping of features from the CPU model string when acquiring the
ARM CPU by name.
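As a rough sketch (not the exact QEMU code; the getter/setter names here are
illustrative), the property registration can be done with
object_property_add_bool():

    static bool aarch64_cpu_get_aarch64(Object *obj, Error **errp)
    {
        ARMCPU *cpu = ARM_CPU(obj);

        return arm_feature(&cpu->env, ARM_FEATURE_AARCH64);
    }

    static void aarch64_cpu_set_aarch64(Object *obj, bool value, Error **errp)
    {
        ARMCPU *cpu = ARM_CPU(obj);

        /* Enabled by default; "aarch64=off" drops back to AArch32-only. */
        if (value) {
            set_feature(&cpu->env, ARM_FEATURE_AARCH64);
        } else {
            unset_feature(&cpu->env, ARM_FEATURE_AARCH64);
        }
    }

    /* in the AArch64 CPU instance init */
    object_property_add_bool(obj, "aarch64", aarch64_cpu_get_aarch64,
                             aarch64_cpu_set_aarch64, NULL);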
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1423736974-14254-2-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch implements the "virtio_is_big_endian" function pointer of the
"CPUClass" structure for arm/arm64.
A function arm_cpu_is_big_endian() is added to determine and report the
guest CPU endianness to virtio.
This is required for running cross endian guests with virtio on ARM/ARM64.
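A minimal sketch of the shape of the hook (the AArch64 SCTLR.EE/E0E handling
is elided; treat the details as assumptions rather than the exact
implementation):

    static bool arm_cpu_virtio_is_big_endian(CPUState *cs)
    {
        ARMCPU *cpu = ARM_CPU(cs);
        CPUARMState *env = &cpu->env;

        cpu_synchronize_state(cs);

        if (!is_a64(env)) {
            /* AArch32: data endianness for virtio follows CPSR.E */
            return (env->uncached_cpsr & CPSR_E) != 0;
        }
        /* AArch64: would consult SCTLR.EE/E0E for the current EL (elided) */
        return false;
    }

    /* in arm_cpu_class_init() */
    cc->virtio_is_big_endian = arm_cpu_virtio_is_big_endian;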
Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Message-id: 1423130382-18640-3-git-send-email-pranavkumar@linaro.org
[PMM: check CPSR_E in env->uncached_cpsr, not env->pstate.]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Update arm_cpu_reset() to reset into the highest available exception level,
based on which ARM features are set.
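A simplified excerpt of the intended behaviour, assuming the arm_feature()
helper and the PSTATE_MODE_EL*h definitions:

    /* Reset into the highest available exception level */
    if (arm_feature(env, ARM_FEATURE_EL3)) {
        env->pstate = PSTATE_MODE_EL3h;
    } else if (arm_feature(env, ARM_FEATURE_EL2)) {
        env->pstate = PSTATE_MODE_EL2h;
    } else {
        env->pstate = PSTATE_MODE_EL1h;
    }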
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1422029835-4696-4-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Added a "has_el3" state property to the ARMCPU descriptor. This property
indicates whether the ARMCPU has security extensions enabled (EL3) or not.
By default it is disabled at this time.
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1418684992-8996-10-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add an unset_feature() function to complement the set_feature() function. This
will be used to disable features after they have been enabled during
initialization.
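For reference, the pair looks roughly like this, assuming the feature set is
kept as a bit mask in env->features:

    static inline void set_feature(CPUARMState *env, int feature)
    {
        env->features |= 1ULL << feature;
    }

    static inline void unset_feature(CPUARMState *env, int feature)
    {
        env->features &= ~(1ULL << feature);
    }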
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1418684992-8996-9-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When EL3 is running in AArch32 (or ARMv7 with Security Extensions)
IFAR and DFAR have a secure and a non-secure instance.
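Conceptually the state becomes banked roughly as below (a simplified sketch,
not the exact CPUARMState layout, which overlays these with the 64-bit FAR_ELx
views):

    uint32_t dfar_ns;   /* Data Fault Address Register, Non-secure bank */
    uint32_t ifar_ns;   /* Instruction Fault Address Register, Non-secure bank */
    uint32_t dfar_s;    /* DFAR, Secure bank (selected when SCR.NS == 0) */
    uint32_t ifar_s;    /* IFAR, Secure bank (selected when SCR.NS == 0) */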
Signed-off-by: Fabian Aggeler <aggelerf@ethz.ch>
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1416242878-876-22-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The M profile cpu_exec_interrupt handling is fairly simple
but does include an M profile specific oddity (disabling
interrupts for certain PC values). A/R profile handling
on the other hand is getting rapidly more complicated
with the support for EL2 and EL3. Split the M profile
code out into its own implementation of cpu_exec_interrupt
to keep these two things out of each others' way.
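The M profile implementation then ends up roughly as follows (a sketch; the
comparison against 0xfffffff0 reflects the magic exception-return values
loaded into the PC):

    static bool arm_v7m_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
    {
        CPUClass *cc = CPU_GET_CLASS(cs);
        ARMCPU *cpu = ARM_CPU(cs);
        CPUARMState *env = &cpu->env;

        /* Do not take an interrupt while the PC holds a magic exception-return
         * value, so that the return sequence is not disturbed.
         */
        if ((interrupt_request & CPU_INTERRUPT_HARD)
            && !(env->daif & PSTATE_I)
            && (env->regs[15] < 0xfffffff0)) {
            cs->exception_index = EXCP_IRQ;
            cc->do_interrupt(cs);
            return true;
        }
        return false;
    }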
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1414684132-23971-2-git-send-email-peter.maydell@linaro.org
The DZP bit in the DCZID system register should be set if
the control bits which prohibit use of the DC ZVA instruction
have been set (it stands for Data Zero Prohibited). However
we had the sense of the test inverted; fix this so that the
bit reads correctly.
To avoid this regressing the behaviour of the user-mode
emulator, we must set the DZE bit in the SCTLR for that
config so that userspace continues to see DZP as zero (it
was getting the correct result by accident previously).
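With the sense fixed, the register read ends up along these lines (a sketch,
reusing the existing DC ZVA access-check function):

    static uint64_t aa64_dczid_read(CPUARMState *env, const ARMCPRegInfo *ri)
    {
        ARMCPU *cpu = arm_env_get_cpu(env);
        int dzp_bit = 1 << 4;

        /* DZP is set when use of DC ZVA is prohibited, clear when permitted */
        if (aa64_zva_access(env, NULL) == CP_ACCESS_OK) {
            dzp_bit = 0;
        }
        return cpu->dcz_blocksize | dzp_bit;
    }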
Reported-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Christopher Covington <cov@codeaurora.org>
Message-id: 1412959792-20708-1-git-send-email-peter.maydell@linaro.org
Add support for handling PSCI calls in system emulation. Both version
0.1 and 0.2 of the PSCI spec are supported. Platforms can enable support
by setting the "psci-conduit" QOM property on the cpus to SMC or HVC
emulation and having a PSCI binding in their dtb.
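For example, a board might enable HVC-based PSCI roughly like this (a sketch;
the surrounding board code is assumed), together with an "arm,psci-0.2" node
in the dtb whose method property is "hvc":

    /* per CPU, in the machine init code */
    object_property_set_int(cpuobj, QEMU_PSCI_CONDUIT_HVC,
                            "psci-conduit", NULL);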
Signed-off-by: Rob Herring <rob.herring@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1412865028-17725-7-git-send-email-peter.maydell@linaro.org
[PMM: made system reset/off PSCI functions power down the CPU so
we obey the PSCI API requirement never to return from them;
rearranged how the code is plumbed into the exception system,
so that we split "is this a valid call?" from "do the call"]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
User mode emulation should never get interrupts and thus should not
use the system emulation exception handler function. Remove the reference,
and '#ifndef CONFIG_USER_ONLY' the function itself as well, so that we can add
system-mode-only functionality to it.
Signed-off-by: Rob Herring <rob.herring@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1412865028-17725-3-git-send-email-peter.maydell@linaro.org
Add tracking of the CPU power state in order to support powering off of
cores in system emulation. The initial state is determined by the
start-powered-off QOM property.
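arm_cpu_reset() initialises the new powered_off flag from that property, and
the has_work hook can then take it into account, roughly (sketch):

    static bool arm_cpu_has_work(CPUState *cs)
    {
        ARMCPU *cpu = ARM_CPU(cs);

        /* A powered-off core never has work to do */
        return !cpu->powered_off
            && (cs->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_FIQ));
    }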
Signed-off-by: Rob Herring <rob.herring@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1412865028-17725-2-git-send-email-peter.maydell@linaro.org
GDB assumes that watchpoint set via the gdbstub remote protocol will
behave in the same way as hardware watchpoints for the target. In
particular, whether the CPU stops with the PC before or after the insn
which triggers the watchpoint is target dependent. Allow guest CPU
code to specify which behaviour to use. This fixes a bug where with
guest CPUs which stop before the accessing insn GDB would manually
step forward over what it thought was the insn and end up one insn
further forward than it should be.
We set this flag for the CPU architectures which set
gdbarch_have_nonsteppable_watchpoint in gdb 7.7:
ARM, CRIS, LM32, MIPS and Xtensa.
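For a target this amounts to a one-line class-init change; sketched here for
ARM (the flag name follows the CPUClass field added by this patch):

    static void arm_cpu_class_init(ObjectClass *oc, void *data)
    {
        CPUClass *cc = CPU_CLASS(oc);

        /* ... existing hook assignments ... */
        /* ARM hardware watchpoints stop before the accessing insn */
        cc->gdb_stop_before_watchpoint = true;
    }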
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Max Filippov <jcmvbkbc@gmail.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Michael Walle <michael@walle.cc> (for lm32)
Message-id: 1410545057-14014-1-git-send-email-peter.maydell@linaro.org
This only implements the external delivery method via the GIC.
Acked-by: Greg Bellows <greg.bellows@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1411718914-6608-12-git-send-email-edgar.iglesias@gmail.com
[PMM: adjusted following cpu-exec refactoring]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1411718914-6608-5-git-send-email-edgar.iglesias@gmail.com
[PMM: updated to account for recent cpu-exec refactoring]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
At the moment we try to handle c15_cpar with the strategy of:
* emit generated code which makes assumptions about its value
* when the register value changes call tb_flush() to throw
away the now-invalid generated code
This works because XScale CPUs are always uniprocessor, but
it's confusing because it suggests that the same approach can
be taken for other registers. It also means we do a tb_flush()
on CPU reset, which makes multithreaded linux-user binaries
even more likely to fail than would otherwise be the case.
Replace it with a combination of TB flags for the access
checks done on cp0/cp1 for the XScale and iwMMXt instructions,
plus a runtime check for cp2..cp13 coprocessor accesses.
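Roughly, the CPAR bits travel to the translator via the TB flags and are
checked at decode time (a sketch; the flag macro and DisasContext field names
are assumptions):

    /* cpu_get_tb_cpu_state(): expose the XScale CPAR bits to the translator */
    if (arm_feature(env, ARM_FEATURE_XSCALE)) {
        *flags |= env->cp15.c15_cpar << ARM_TBFLAG_XSCALE_CPAR_SHIFT;
    }

    /* translate.c: decode-time check for cp0/cp1 (XScale/iwMMXt) insns */
    if (!extract32(s->c15_cpar, cpnum, 1)) {
        /* access disabled by CPAR: generate an UNDEF */
    }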
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1411056959-23070-1-git-send-email-peter.maydell@linaro.org
This patch adds support for setting guest breakpoints
based on values the guest writes to the DBGBVR and DBGBCR
registers. (It doesn't include the code to handle when
these breakpoints fire, so has no guest-visible effect.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1410523465-13400-2-git-send-email-peter.maydell@linaro.org
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1410626734-3804-15-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Implement support for setting QEMU watchpoints based on the
values the guest writes to the ARM architected watchpoint
registers. (We do not yet report the firing of the watchpoints
to the guest, so they will just be ignored.)
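The core of the update is shaped roughly like this (a sketch; the decoding of
the DBGWCR LSC/BAS/MASK fields into length and access flags is elided):

    static void hw_watchpoint_update(ARMCPU *cpu, int n)
    {
        CPUARMState *env = &cpu->env;
        CPUState *cs = CPU(cpu);
        uint64_t wvr = env->cp15.dbgwvr[n];
        uint64_t wcr = env->cp15.dbgwcr[n];
        int len = 8;                             /* placeholder: from BAS/MASK */
        int flags = BP_CPU | BP_STOP_BEFORE_ACCESS;

        if (env->cpu_watchpoint[n]) {
            cpu_watchpoint_remove_by_ref(cs, env->cpu_watchpoint[n]);
            env->cpu_watchpoint[n] = NULL;
        }
        if (!extract64(wcr, 0, 1)) {
            /* E bit clear: watchpoint disabled */
            return;
        }
        /* ... decode LSC into read/write flags, BAS/MASK into len ... */
        cpu_watchpoint_insert(cs, wvr, len, flags, &env->cpu_watchpoint[n]);
    }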
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Fix a single misindented line in arm_cpu_reset().
Signed-off-by: Martin Galvan <martin.galvan@tallertechnologies.com>
[PMM: split this out from the previous commit]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When calling qemu_system_reset after startup on a Cortex-M
CPU, the initial values of PC, MSP and the Thumb bit weren't being set
correctly if the vector table was in ROM. In particular, since Thumb was 0, a
Usage Fault would arise immediately after trying to execute any instruction
on a Cortex-M.
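The fix amounts to fetching the initial MSP and PC from the vector table
through the physical address space, so a ROM-backed table works too (sketch):

    #ifndef CONFIG_USER_ONLY
        if (arm_feature(env, ARM_FEATURE_M)) {
            CPUState *cs = CPU(cpu);
            uint32_t initial_msp, initial_pc;

            /* The vector table lives at address 0 out of reset; entry 0 holds
             * the initial SP and entry 1 the initial PC (with the Thumb bit).
             */
            initial_msp = ldl_phys(cs->as, 0);
            initial_pc = ldl_phys(cs->as, 4);

            env->regs[13] = initial_msp & 0xFFFFFFFC;
            env->regs[15] = initial_pc & ~1;
            env->thumb = initial_pc & 1;
        }
    #endif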
Signed-off-by: Martin Galvan <martin.galvan@tallertechnologies.com>
Message-id: CAOKbPbaLt-LJsAKkQdOE0cs9Xx4OWrUfpDhATXPSdtuNw2xu_A@mail.gmail.com
[PMM: removed an incorrect comment]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For the A9, the cache associativity is 4 and the line size is 32 bytes.
Self-identify in CCSIDR accordingly. The cache size remains at 16K.
QEMU doesn't emulate caches, but we should still report the correct
cache-line size to the guest. Some guests (like u-boot) complain if
the cache-line size mismatches a requested flush or invalidate
operation.
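For reference, with the usual ARMv7 CCSIDR encoding the fields for a 16K,
4-way, 32-byte-line cache work out as:

    LineSize      = log2(words per line) - 2  = log2(8) - 2   = 1
    Associativity = ways - 1                  = 4 - 1         = 3
    NumSets       = 16KB / (32B * 4 ways) - 1 = 128 - 1       = 127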
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 1de6bd40155a1d2f2e93e24b1b1d1d677a432641.1408346233.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Allow each CPU type to specify the value for the debug ID
registers, by putting them in the ARMCPU struct, and use
the resulting information to only expose the correct number
of watchpoint and breakpoint registers for the CPU.
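Register definition can then be driven by those ID fields, along these lines
(a sketch; DBGDIDR encodes the counts minus one in its top nibbles):

    static void define_debug_regs(ARMCPU *cpu)
    {
        /* DBGDIDR.BRPs is bits [27:24], DBGDIDR.WRPs is bits [31:28] */
        int brps = extract32(cpu->dbgdidr, 24, 4);
        int wrps = extract32(cpu->dbgdidr, 28, 4);
        int i;

        for (i = 0; i < brps + 1; i++) {
            /* define DBGBVR<i>/DBGBCR<i> (and their AArch64 views) */
        }
        for (i = 0; i < wrps + 1; i++) {
            /* define DBGWVR<i>/DBGWCR<i> (and their AArch64 views) */
        }
    }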
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
No functional change.
Prepares for future additions of the EL2 and 3 versions of this reg.
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1402994746-8328-5-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We need to know the PSCI version available to a given CPU in
potentially many places. Currently, we need to know the PSCI version
when generating the DTB for the virt machine.
This patch introduces a per-CPU 32-bit field representing the PSCI
version available to the CPU. The encoding of this 32-bit field is the
same as described in the PSCI v0.2 spec.
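Per the PSCI v0.2 spec the version word carries the major version in bits
[31:16] and the minor version in bits [15:0], so the field can be populated
along these lines (the macro name is illustrative):

    #define QEMU_PSCI_VERSION(major, minor)  (((major) << 16) | (minor))

    /* e.g. when the CPU (or KVM) provides PSCI v0.2 */
    cpu->psci_version = QEMU_PSCI_VERSION(0, 2);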
Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Anup Patel <anup.patel@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1402901605-24551-8-git-send-email-pranavkumar@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
VFPv4 implies the presence of the half-precision floating point
extension (which is optional in VFPv3). Add this implied rule
to arm_cpu_realizefn() and remove some no-longer-needed explicit
setting of the bit in initfns.
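The implied-feature rule is one more entry in the realize-time feature
propagation, roughly (using the feature names of that era):

    if (arm_feature(env, ARM_FEATURE_VFP4)) {
        set_feature(env, ARM_FEATURE_VFP3);
        set_feature(env, ARM_FEATURE_VFP_FP16);
    }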
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1401458125-27977-5-git-send-email-peter.maydell@linaro.org
CRC and crypto are both optional v8 extensions, so FEATURE_V8
should not imply them. Instead we should set these bits in the
initfns for the 32-bit and 64-bit "cpu any" and for the Cortex-A57.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1401458125-27977-4-git-send-email-peter.maydell@linaro.org
FEATURE_V8 implies both FEATURE_V7MP and FEATURE_ARM_DIV, so
we don't need to set them explicitly in initfns which set the
V8 feature bit.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1401458125-27977-3-git-send-email-peter.maydell@linaro.org
The arm_any_initfn() is used only for the 32-bit linux-user "cpu any",
so it only gets called in builds where TARGET_AARCH64 is not defined.
Remove the unreachable line which sets ARM_FEATURE_AARCH64.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1401458125-27977-2-git-send-email-peter.maydell@linaro.org
Add support for the VMULL.P64 polynomial 64x64 to 128 bit multiplication
instruction in the A32/T32 instruction sets; this is part of the v8
Crypto Extensions.
To do this we have to move the neon_pmull_64_{lo,hi} helpers from
helper-a64.c into neon_helper.c so they can be used by the AArch32
translator.
Inspired-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1401386724-26529-4-git-send-email-peter.maydell@linaro.org
This adds support for the SHA1 and SHA256 instructions that are available
on some v8 implementations of AArch32.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1401386724-26529-2-git-send-email-peter.maydell@linaro.org
[PMM:
* rebase
* fix bad indent
* add a missing UNDEF check for Q!=1 in the 3-reg SHA1/SHA256 case
* use g_assert_not_reached()
* don't re-extract bit 6 for the 2-reg-misc encodings
* set the ELF HWCAP2 bits for the new features
]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Commit 50a2c6e55f introduced a bug where QEMU would segfault on startup
when using KVM on ARM hosts, because kvm_arm_reset_cpu() accesses
cpu->cpreg_reset_values, which is not allocated before
kvm_arch_init_vcpu(). Fix this by not calling cpu_reset() until after
qemu_init_vcpu().
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Andreas Faerber <afaerber@suse.de>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Message-id: 1401194263-13010-1-git-send-email-christoffer.dall@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Now that we have a CPU object with a reset method, it is better to
keep the KVM reset close to the CPU reset. Using qemu_register_reset
as we do now keeps them far apart.
With this patch, PPC no longer calls the kvm_arch_ function, so
it can get removed there. Other arches call it from their CPU
reset handler, and the function gets an ARMCPU/X86CPU/S390CPU.
Note that ARM- and s390-specific functions are called kvm_arm_*
and kvm_s390_*, while x86-specific functions are called kvm_arch_*.
That follows the convention used by the different architectures.
Changing that is the topic of a separate patch.
Reviewed-by: Gleb Natapov <gnatapov@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The Cortex-A15's CBAR register is actually read-only (unlike that
of the Cortex-A9). Correct our model to match the hardware.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
The Cortex-A57, like most of the other ARM cores, has a CBAR
register which defines the base address of the per-CPU
peripherals. However it has a 64-bit view as well as a
32-bit view; expand the QOM reset-cbar property from UINT32
to UINT64 so this can be specified, and implement the
32-bit and 64-bit views of a 64-bit CBAR.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Implement the AArch64 RVBAR register, which indicates the reset
address. Since the reset address is implementation defined and
usually configurable by setting config signals in hardware, we
also provide a QOM property so it can be set at board level if
necessary.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
For ARMv8 there are two changes to the MVFR media feature registers:
* there is a new MVFR2 which is accessible from 32 bit code
* 64 bit code accesses these via the usual sysreg instructions
rather than with a floating-point specific instruction
Implement this.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Implement AArch64 views of ESR_EL1 and FAR_EL1, and make the 32 bit
DFSR, DFAR, IFAR share state with them as architecturally specified.
The IFSR doesn't share state with any AArch64 register visible at EL1,
so just rename the state field without widening it to 64 bits.
Signed-off-by: Rob Herring <rob.herring@linaro.org>
[PMM: Minor tweaks; fix some bugs involving inconsistencies between
use of offsetof() or offsetoflow32() and struct field width]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
For the A64 instruction set, the only FP/Neon disable trap
is the CPACR FPEN bits, which may indicate "enabled", "disabled"
or "disabled for EL0". Add a bit to the AArch64 tb flags indicating
whether FP/Neon access is currently enabled and make the decoder
emit code to raise exceptions on use of FP/Neon insns if it is not.
We use a new flag in DisasContext rather than borrowing the
existing vfp_enabled flag because the A32/T32 decoder is going
to need both.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
---
I'm aware this is a rather hard to review patch; sorry.
I have done an exhaustive check that we have fp access checks
in all code paths with the aid of the assertions added in the
next patch plus the code-coverage hack patch I posted to the
list earlier.
This patch is correct as of
09e037354 target-arm: A64: Add saturating accumulate ops (USQADD/SUQADD)
which was the last of the Neon insns to be added, so assuming
no refactoring of the code it should be fine.
Currently cpu.h defines a mixture of functions and types needed by
the rest of QEMU and those needed only by files within target-arm/.
Split the latter out into a new header so they aren't needlessly
exposed further than required.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Most targets were using offsetof(CPUFooState, breakpoints) to determine
how much of CPUFooState to clear on reset. Use the next field after
CPU_COMMON instead, if any, or sizeof(CPUFooState) otherwise.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Note that while such functions may exist both for *-user and softmmu,
only *-user uses the CPUState hook, while softmmu reuses the prototype
for calling it directly.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Default to false.
Tidy variable naming and inline cast uses while at it.
Tested-by: Jia Liu <proljc@gmail.com> (or32)
Signed-off-by: Andreas Färber <afaerber@suse.de>
Add support for the AArch32 CRC32 and CRC32C instructions added in ARMv8
and add a CPU feature flag to enable these instructions.
The CRC32C implementation used is the built-in QEMU implementation,
and the CRC32 implementation is from zlib. This requires adding zlib
to LIBS to ensure it is linked for the linux-user binary.
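The helper backing the CRC32 instructions is essentially a thin wrapper
around zlib's crc32() (a sketch; zlib works on one's-complemented
accumulators, hence the xors):

    #include <zlib.h>

    uint32_t HELPER(crc32)(uint32_t acc, uint32_t val, uint32_t bytes)
    {
        uint8_t buf[4];

        stl_le_p(buf, val);

        /* zlib crc32 converts the accumulator and output to one's complement */
        return crc32(acc ^ 0xffffffff, buf, bytes) ^ 0xffffffff;
    }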
Signed-off-by: Will Newton <will.newton@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1393411566-24104-3-git-send-email-will.newton@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
To avoid complication in code that otherwise would not need to
care about whether EL1 is AArch32 or AArch64, we should store
the interrupt mask bits (CPSR.AIF in AArch32 and PSTATE.DAIF
in AArch64) in one place consistently regardless of EL1's mode.
Since AArch64 has an extra enable bit (D for debug exceptions)
which isn't visible in AArch32, this means we need to keep
the enables in env->pstate. (This is also consistent with the
general approach we're taking that we handle 32 bit CPUs as
being like AArch64/ARMv8 CPUs but which only run in 32 bit mode.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Implement all the AArch64 cache invalidate and clean ops
(which are all NOPs since QEMU doesn't emulate the cache).
The only remaining unimplemented cache op is DC ZVA.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Make the cache ID system registers (CLIDR, CSSELR, CCSIDR, CTR)
visible to AArch64. These are mostly simple 64-bit extensions of the
existing 32 bit system registers and so can share reginfo definitions.
CTR needs to have a split definition, but we can clean up the
temporary user-mode implementation in favour of using the CPU-specified
reset value, and implement the system-mode-required semantics of
restricting its EL0 accessibility if SCTLR.UCT is not set.
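The EL0 restriction is a small access-check hook on the CTR definition,
roughly (a sketch; the SCTLR field/flag names are assumptions):

    static CPAccessResult ctr_el0_access(CPUARMState *env, const ARMCPRegInfo *ri)
    {
        /* Only accessible from EL0 if SCTLR.UCT is set */
        if (arm_current_el(env) == 0 && !(env->cp15.c1_sys & SCTLR_UCT)) {
            return CP_ACCESS_TRAP;
        }
        return CP_ACCESS_OK;
    }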
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>