Not all exception types update both FAR and ESR.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1411718914-6608-7-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1411718914-6608-6-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1411718914-6608-5-git-send-email-edgar.iglesias@gmail.com
[PMM: updated to account for recent cpu-exec refactoring]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Introduce new_el and new_mode in preparation for future patches
that add support for taking exceptions to and from EL2 and 3.
No functional change.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1411718914-6608-4-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1411718914-6608-3-git-send-email-edgar.iglesias@gmail.com
[PMM: apply offsetoflow32() to correct regdef]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1411718914-6608-2-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
At the moment we try to handle c15_cpar with the strategy of:
* emit generated code which makes assumptions about its value
* when the register value changes call tb_flush() to throw
away the now-invalid generated code
This works because XScale CPUs are always uniprocessor, but
it's confusing because it suggests that the same approach can
be taken for other registers. It also means we do a tb_flush()
on CPU reset, which makes multithreaded linux-user binaries
even more likely to fail than would otherwise be the case.
Replace it with a combination of TB flags for the access
checks done on cp0/cp1 for the XScale and iwMMXt instructions,
plus a runtime check for cp2..cp13 coprocessor accesses.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1411056959-23070-1-git-send-email-peter.maydell@linaro.org
Implement handling of breakpoint event firing to correctly
inject the debug exception into the guest.
Since the breakpoint and watchpoint control register formats are
very similar, we adjust wp_matches() to handle breakpoints as well
rather than using a separate function.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1410523465-13400-3-git-send-email-peter.maydell@linaro.org
This patch adds support for setting guest breakpoints
based on values the guest writes to the DBGBVR and DBGBCR
registers. (It doesn't include the code to handle when
these breakpoints fire, so has no guest-visible effect.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1410523465-13400-2-git-send-email-peter.maydell@linaro.org
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1410626734-3804-15-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The ARM architecture defines that the "IS" variants of TLB
maintenance operations must affect all TLBs in the Inner Shareable
domain, which for us means all CPUs. We were incorrectly implementing
these to only affect the current CPU, which meant that SMP TCG
operation was unstable.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1410274883-9578-3-git-send-email-peter.maydell@linaro.org
Cc: qemu-stable@nongnu.org
When we implemented ARMv8 in QEMU we retained our legacy loose
wildcarded decoding of the TLB maintenance operations for v7
and earlier CPUs and provided the correct stricter decode for
v8. However the loose decode is in fact wrong for v7MP, because
it doesn't correctly implement the operations which must apply
to every CPU in the Inner Shareable domain.
Move the legacy wildcarding from the not_v8 reginfo array
into the not_v7 array, and move the strictly decoded operations
from the v8 reginfo to v7 or v7mp arrays as appropriate.
Cache and TLB lockdown legacy wildcarding remains in the
not_v8 array for the moment.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1410274883-9578-2-git-send-email-peter.maydell@linaro.org
Cc: qemu-stable@nongnu.org
Implement debug registers DBGVCR, OSDLR_EL1 and MDCCSR_EL0
(as dummy or limited-functionality). 32 bit Linux kernels will
access these at startup so they are required for breakpoints
and watchpoints to be supported.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
MDSCR_EL1 has actual functionality now; remove the out of date
comment that claims it is a dummy implementation.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For debug exceptions taken to AArch32 we have to set the
DBGDSCR.MOE (Method Of Entry) bits; we can identify the
kind of debug exception from the information in
exception.syndrome.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move the utility function extended_addresses_enabled() into
internals.h; we're going to need to call it from op_helper.c.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Implement support for setting QEMU watchpoints based on the
values the guest writes to the ARM architected watchpoint
registers. (We do not yet report the firing of the watchpoints
to the guest, so they will just be ignored.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Fix a single misindented line in arm_cpu_reset().
Signed-off-by: Martin Galvan <martin.galvan@tallertechnologies.com>
[PMM: split this out from the previous commit]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When calling qemu_system_reset after startup on a Cortex-M
CPU, the initial values of PC, MSP and the Thumb bit weren't being set
correctly if the vector table was in ROM. In particular, since Thumb was 0, a
Usage Fault would arise immediately after trying to execute any instruction
on a Cortex-M.
Signed-off-by: Martin Galvan <martin.galvan@tallertechnologies.com>
Message-id: CAOKbPbaLt-LJsAKkQdOE0cs9Xx4OWrUfpDhATXPSdtuNw2xu_A@mail.gmail.com
[PMM: removed an incorrect comment]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is the function that is called when writing to the
PMCCFILTR_EL0 register.
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 73da3da6404855b17d5ae82975a32ff3a4dcae3d.1409025949.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the old PMCCNTR code and replace it with calls to the new
pmccntr_sync() and arm_ccnt_enabled() functions.
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 693a6e437d915c2195fd3dc7303f384ca538b7bf.1409025949.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is used to synchronise the PMCCNTR counter and swap its
state between enabled and disabled if required. It must always
be called twice, both before and after any logic that could
change the state of the PMCCNTR counter.
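As a usage sketch (QEMU-internal context assumed; the c9_pmcr field name is
an illustrative assumption), a write handler that can change whether the
counter runs would bracket the update like this:
    static void pmcr_write_sketch(CPUARMState *env, uint64_t value)
    {
        pmccntr_sync(env);            /* fold the running count into saved state */
        env->cp15.c9_pmcr = value;    /* any change that may enable/disable the counter */
        pmccntr_sync(env);            /* resume (or stop) counting under the new state */
    }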
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 62811d4c0f7b1384f7aab62ea2fcfda3dcb0db50.1409025949.git.peter.crosthwaite@xilinx.com
[PMM: fixed minor typos in pmccntr_sync doc comment]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Include a helper function to determine if the CCNT counter
is enabled.
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: e1a64f17a756e06c8bda8238ad4826d705049f7a.1409025949.git.peter.crosthwaite@xilinx.com
[ PC changes
* Remove EL based checks
]
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch adds support for the ARMv8 version of the PMCCNTR and
related registers. It also starts to implement the PMCCFILTR_EL0
register.
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: b5d1094764a5416363ee95216799b394ecd011e8.1409025949.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The register is now 64-bit; however, a 32-bit write to the register
should leave the higher bits unchanged. The open-coded write handler
does not implement this, so we need to read-modify-write accordingly.
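A minimal sketch of that read-modify-write, assuming QEMU's deposit64()
helper and an illustrative backing-field and handler name:
    static void pmccntr_write32_sketch(CPUARMState *env, const ARMCPRegInfo *ri,
                                       uint64_t value)
    {
        uint64_t cur = env->cp15.c15_ccnt;   /* assumed 64-bit backing field */
        /* replace only bits [31:0]; bits [63:32] are preserved */
        pmccntr_write(env, ri, deposit64(cur, 0, 32, value));
    }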
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Reviewed-by: Alistair Francis <alistair23@gmail.com>
Message-id: ec350573424bb2adc1701c3b9278d26598e2f2d1.1409025949.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This makes the PMCCNTR register 64-bit to allow for the
64-bit ARMv8 version.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Message-id: 6c5bac5fd0ea54963b1fc0e7f9464909f2e19a73.1409025949.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We implement the crypto extensions but were incorrectly reporting
ID register values for the Cortex-A57 which did not advertise
crypto. Use the correct values as described in the TRM.
With this fix Linux correctly detects presence of the crypto
features and advertises them in /proc/cpuinfo.
Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1408718660-7295-1-git-send-email-peter.maydell@linaro.org
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Commit 2c7ffc414 added support for honouring the CPACR coprocessor
access control register bits which may disable access to VFP
and Neon instructions. However it failed to account for the
fact that the CPACR is only present starting from the ARMv6
architecture version, so it accidentally disabled VFP completely
for ARMv5 CPUs like the ARM926. Linux would detect this as
"no VFP present" and probably fall back to its own emulation,
but other guest OSes might crash or misbehave.
This fixes bug LP:1359930.
Reported-by: Jakub Jermar <jakub@jermar.eu>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1408714940-7192-1-git-send-email-peter.maydell@linaro.org
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For the A9, the cache associativity is 4 and the line size is 32 bytes.
Self-identify in CCSIDR accordingly. The cache size remains at 16KB.
QEMU doesn't emulate caches, but we should still report the correct
cache-line size to the guest. Some guests (like u-boot) complain if
the cache-line size mismatches a requested flush or invalidate
operation.
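For reference, the CCSIDR fields for a 16KB, 4-way, 32-byte-line data cache
work out as below (arithmetic sketch only; the WT/WB/RA/WA attribute bits at
the top of the register are omitted):
    unsigned line_size_field = 3 - 2;                /* log2(8 words per line) - 2 = 1 */
    unsigned assoc_field     = 4 - 1;                /* associativity - 1 = 3 */
    unsigned num_sets_field  = 16384 / (32 * 4) - 1; /* 128 sets -> 127 */
    unsigned ccsidr_low = (num_sets_field << 13) | (assoc_field << 3) | line_size_field;
    /* ccsidr_low == 0x000fe019 */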
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 1de6bd40155a1d2f2e93e24b1b1d1d677a432641.1408346233.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The current code supplies the PSCI v0.1 function IDs in the DT even when
KVM uses PSCI v0.2.
This will break guest kernels that only support PSCI v0.1 as they will
use the IDs provided in the DT. Guest kernels with PSCI v0.2 support
are not affected by this patch, because they ignore the function IDs in
the device tree and rely on the architecture definition.
Define QEMU versions of the constants and check that they correspond to
the Linux defines on Linux build hosts. After this patch, both guest
kernels with PSCI v0.1 support and guest kernels with PSCI v0.2 support
should work.
Tested on TC2 for 32-bit and APM Mustang for 64-bit (aarch64 guest
only). Both cases tested with 3.14 and linus/master and verified I
could bring up 2 cpus with both guest kernels. Also tested 32-bit with
a 3.14 host kernel with only PSCI v0.1 and both guests booted here as
well.
Cc: qemu-stable@nongnu.org
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The function IDs for PSCI v0.1 are exported by KVM and defined as
KVM_PSCI_FN_<something>. To build using these defines in non-KVM code,
QEMU defines these IDs locally and checks their correctness against the
KVM headers when those are available.
However, the naming scheme used by QEMU (almost) clashes with the PSCI
v0.2 definitions from Linux, so to avoid unfortunate naming when we
introduce local PSCI v0.2 defines, rename the current local defines with
QEMU_ prepended and clearly identify the PSCI version as v0.1 in the
defines.
Cc: qemu-stable@nongnu.org
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Now that all the new code to support single-stepping is in
place, wire up the guest-visible MDSCR_EL1, so the guest
can enable single-stepping.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
ARMv8 single-stepping requires the exception level that controls
the single-stepping to be in AArch64 execution state, but the
code being stepped may be in AArch64 or AArch32. Implement the
necessary support code for single-stepping AArch32 code.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Implement ARMv8 software single-step handling for A64 code:
correctly update the single-step state machine and generate
debug exceptions when stepping A64 code.
This patch has no behavioural change since MDSCR_EL1.SS can't
be set by the guest yet.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
If gen_goto_tb() decides not to link the two TBs, then the
fallback path generates unnecessary code:
* if singlestep is enabled then we generate unreachable code
after the gen_exception_internal(EXCP_DEBUG)
* if singlestep is disabled then we will generate exit_tb(0)
twice, once in gen_goto_tb() and once coming out of the
main loop with is_jmp set to DISAS_JUMP
Correct these deficiencies by only emitting exit_tb() in the
non-singlestep case, in which case we can use DISAS_TB_JUMP
to suppress the main-loop exit_tb().
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Set the PSTATE.SS bit correctly on exception returns from AArch64,
as required by the debug single-step functionality.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
When an exception is taken to AArch32, we must clear the PSTATE.SS
bit for the exception handler, and must also ensure that the SS bit
is not set in the value saved to SPSR_<mode>. Achieve both of these
aims by clearing the bit in uncached_cpsr before saving it to the SPSR.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
The CPSR has a new-in-v8 execution state bit (IL), and also some
state (SS) which has effects in AArch32 but appears only in the
SPSR format and is RES0 in the CPSR.
Add the IL bit to CPSR_EXEC, and enforce that guest direct
reads and writes to CPSR can't read or write the RES0
bits, so the guest can't get at the SS bit which we store
in uncached_cpsr. This includes not permitting exception
returns to copy reserved bits from an SPSR into CPSR.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Allow each CPU type to specify the value for the debug ID
registers, by putting them in the ARMCPU struct, and use
the resulting information to only expose the correct number
of watchpoint and breakpoint registers for the CPU.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Bring the 32 bit and 64 bit views of the debug registers into
line by providing the same set of registers in both cases.
(This still isn't a complete set, but it is consistent.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Currently the STATE_BOTH shorthand for allowing a single reginfo struct
to define handling for both AArch32 and AArch64 views of a register
only permits this where the AArch32 view is in cp15. It turns out that
the debug registers in cp14 also have neatly lined up encodings;
allow these also to share reginfo structs by permitting a STATE_BOTH
reginfo to specify the .cp field (and continue to default to 15 if
it is not specified).
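As a hedged example of what this permits, a single STATE_BOTH reginfo can now
describe a debug register whose AArch32 view is in cp14; the encoding below
follows the architected MDSCR_EL1 / DBGDSCRext numbering, while the access and
type flags are placeholders:
    static const ARMCPRegInfo mdscr_example = {
        .name = "MDSCR_EL1", .state = ARM_CP_STATE_BOTH,
        .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 2,
        .access = PL1_RW, .type = ARM_CP_NOP,   /* placeholder behaviour */
    };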
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
At the moment we have a mixed set of mostly dummy register
definitions for various debug related registers which have
been added piecemeal in order to get Linux kernels to boot.
In preparation for actually implementing debug support,
bring them all together into one place.
This commit doesn't change behaviour: we still expose
exactly the same registers and behaviour to the guest
in all configurations.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
When we take an exception resulting from a BRK instruction,
the architecture requires that the "preferred return address"
reported to the exception handler is the address of the BRK
itself, not the following instruction (like undefined
insns, and in contrast with SVC, HVC and SMC). Follow this,
rather than incorrectly reporting the address of the following
insn.
(We do get this correct for the A32/T32 BKPT insns.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-stable@nongnu.org
According to the ARM ARM we weren't correctly flushing the TLB entries
where bits 63:56 didn't match bit 55 of the virtual address. This
exposed a problem when we switched QEMU's internal TARGET_PAGE_BITS to
12 for aarch64.
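A minimal sketch of the idea, assuming QEMU's sextract64() helper: build the
page address to flush by sign-extending from bit 55, so bits 63:56 always
agree with bit 55 and match what was entered into the TLB.
    static uint64_t canonical_flush_addr(uint64_t value)
    {
        /* value carries VA[55:12]; rebuild a VA whose bits [63:56] copy bit 55 */
        return sextract64(value << 12, 0, 56);
    }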
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1406733627-24255-3-git-send-email-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Otherwise we break quickly when we change TARGET_PAGE_SIZE.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1406733627-24255-2-git-send-email-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Static code analyzers complain about a dubious & operation used for a
boolean value. The code does not test the PSTATE_SP bit as it should.
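The pitfall, illustrated in isolation (not the exact in-tree code; the
PSTATE_SP bit position is assumed here): '!' binds more tightly than '&',
so the buggy form never looks at the bit.
    #define PSTATE_SP (1U << 0)   /* assumed bit position, for illustration only */

    static int sp_bit_buggy(uint32_t pstate) { return !pstate & PSTATE_SP; }     /* (!pstate) & PSTATE_SP */
    static int sp_bit_fixed(uint32_t pstate) { return (pstate & PSTATE_SP) != 0; }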
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Message-id: 1406359601-25583-1-git-send-email-sw@weilnetz.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1402994746-8328-7-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1402994746-8328-6-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
No functional change.
Prepares for future additions of the EL2 and 3 versions of this reg.
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1402994746-8328-5-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
Message-id: 1402994746-8328-4-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
Message-id: 1402994746-8328-3-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Break out code to save/restore AArch64 SP into functions.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
Message-id: 1402994746-8328-2-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Implement kvm_arm_reset_vcpu() as a simple call to kvm_arm_vcpu_init()
(which uses the KVM_ARM_VCPU_INIT vcpu ioctl to tell the kernel
to re-initialize the vCPU), rather than via the complicated code
which saves a copy of the register state on first init and then
writes it back to the kernel. This is much simpler and brings the
32-bit KVM code into line with the 64-bit code.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1403802973-20841-1-git-send-email-peter.maydell@linaro.org
We need to know the PSCI version available to a given CPU in
potentially many places. Currently, we need to know the PSCI version
when generating the DTB for the virt machine.
This patch introduces a per-CPU 32-bit field representing the PSCI
version available to the CPU. The encoding of this 32-bit field
is the same as described in the PSCI v0.2 spec.
Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Anup Patel <anup.patel@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1402901605-24551-8-git-send-email-pranavkumar@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
To implement kvm_arch_reset_vcpu(), we simply re-init the VCPU
using kvm_arm_vcpu_init() so that all registers of the VCPU are set
to their reset values by the in-kernel KVM code.
Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Anup Patel <anup.patel@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1402901605-24551-7-git-send-email-pranavkumar@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The latest Linux kernel supports in-kernel emulation of PSCI v0.2, but
to enable it we need to select the KVM_ARM_VCPU_PSCI_0_2 feature using
the KVM_ARM_VCPU_INIT ioctl.
Also, we can use the KVM_ARM_VCPU_PSCI_0_2 feature for a VCPU only when
the Linux kernel has the KVM_CAP_ARM_PSCI_0_2 capability.
This patch updates kvm_arch_init_vcpu() to enable the KVM_ARM_VCPU_PSCI_0_2
feature for the VCPU when KVM ARM/ARM64 has the KVM_CAP_ARM_PSCI_0_2 capability.
Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Anup Patel <anup.patel@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1402901605-24551-6-git-send-email-pranavkumar@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Introduce a common kvm_arm_vcpu_init() for issuing the KVM_ARM_VCPU_INIT
ioctl in KVM ARM and KVM ARM64. This also helps us factor out a few
common code lines from kvm_arch_init_vcpu() for KVM ARM/ARM64.
Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Anup Patel <anup.patel@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1402901605-24551-5-git-send-email-pranavkumar@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In handle_simd_shift_fpint_conv(), the combination of is_double == true,
is_scalar == false and is_q == false is an unallocated encoding; the
'both parts false' case of the nested ?: expression for calculating
maxpass is therefore unreachable and can be removed.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 1402171881-14343-4-git-send-email-peter.maydell@linaro.org
In disas_simd_3same_int(), none of the instructions permit is_q
to be false with size == 3 (this would be a vector operation with
a one-element vector, and the instruction set encodes those as
scalar operations). Replace the always-true ?: check with an
assert.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 1402171881-14343-3-git-send-email-peter.maydell@linaro.org
The maximum block size for AArch64 address translation is 2GB. This means
that we need a ULL suffix on our shift to avoid shifting into the sign
bit of a signed 32 bit integer.
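For illustration, assuming the 2GB block corresponds to a 31-bit shift:
    uint64_t block_wrong = 1 << 31;    /* shift performed in 32-bit signed int: hits the sign bit */
    uint64_t block_right = 1ULL << 31; /* shift performed as unsigned 64-bit: 0x80000000 as intended */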
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 1402171881-14343-2-git-send-email-peter.maydell@linaro.org
Correct the handling of writes to TTBCR for ARMv8 (bits which were
previously UNK/SBZP are now RES0) and for ARMv7 (new bits PD0/PD1 for
CPUs with the Security Extensions).
Bits PD0/PD1 are now respected in get_phys_addr_v6/v5() and
get_level1_table_address.
Signed-off-by: Fabian Aggeler <aggelerf@ethz.ch>
Message-id: 1402409556-18574-1-git-send-email-aggelerf@ethz.ch
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch refactors the ARM cryptographic instructions to use the
(newly) added common tables from include/qemu/aes.h.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
The iwmmxt_msadb helper and its corresponding gen function are unused;
delete them. (This function appears to have never been used right back
to the initial implementation of iwMMXt; it is identical to iwmmxt_madduq,
and is presumably an accidental remnant from the initial development.)
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1401822125-1822-1-git-send-email-peter.maydell@linaro.org
The code for handling writes to the generic timer control registers
had several bugs:
* ISTATUS (bit 2) is read-only but we forced it to zero on any write
* the check for "was IMASK (bit 1) toggled?" incorrectly used '&' where
it should be '^'
* the handling of IMASK was inverted: we should set the IRQ if
ISTATUS is set and IMASK is clear, not if both are set
The combination of these bugs meant that when running a Linux guest
that uses the generic timers we would fairly quickly end up either
forgetting that the timer output should be asserted, or failing to
set the IRQ when the timer was unmasked. The result is that the guest
never gets any more timer interrupts.
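A sketch of the corrected interrupt condition (constant and function names
here are illustrative, not the in-tree ones):
    #define GT_CTL_IMASK   (1 << 1)
    #define GT_CTL_ISTATUS (1 << 2)

    static int timer_irq_level(unsigned ctl)
    {
        /* assert the line only when the timer has fired and is not masked */
        return (ctl & GT_CTL_ISTATUS) && !(ctl & GT_CTL_IMASK);
    }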
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1401803208-1281-1-git-send-email-peter.maydell@linaro.org
Cc: qemu-stable@nongnu.org
Bring the 32-bit CRC helper functions into line with the A64 ones,
by masking the high bytes of the value in the calling code rather
than the helper. This is more efficient since we can determine the
mask at translation time.
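A sketch of the translate-time mask computation (an assumed helper, with the
access size given in bytes):
    static uint32_t crc_operand_mask(int size)
    {
        return size == 4 ? 0xffffffffu : (1u << (size * 8)) - 1;
    }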
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1401458125-27977-7-git-send-email-peter.maydell@linaro.org
VFPv4 implies the presence of the half-precision floating point
extension (which is optional in VFPv3). Add this implied rule
to arm_cpu_realizefn() and remove some no-longer-needed explicit
setting of the bit in initfns.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1401458125-27977-5-git-send-email-peter.maydell@linaro.org
CRC and crypto are both optional v8 extensions, so FEATURE_V8
should not imply them. Instead we should set these bits in the
initfns for the 32-bit and 64-bit "cpu any" and for the Cortex-A57.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1401458125-27977-4-git-send-email-peter.maydell@linaro.org
FEATURE_V8 implies both FEATURE_V7MP and FEATURE_ARM_DIV, so
we don't need to set them explicitly in initfns which set the
V8 feature bit.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1401458125-27977-3-git-send-email-peter.maydell@linaro.org
The arm_any_initfn() is used only for the 32-bit linux-user "cpu any",
so it only gets called in builds where TARGET_AARCH64 is not defined.
Remove the unreachable line which sets ARM_FEATURE_AARCH64.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1401458125-27977-2-git-send-email-peter.maydell@linaro.org
Now that we have a separate ARM_FEATURE_V8_PMULL bit, use it for
the A64 PMULL, not the AES feature bit.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add support for the VMULL.P64 polynomial 64x64 to 128 bit multiplication
instruction in the A32/T32 instruction sets; this is part of the v8
Crypto Extensions.
To do this we have to move the neon_pmull_64_{lo,hi} helpers from
helper-a64.c into neon_helper.c so they can be used by the AArch32
translator.
Inspired-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1401386724-26529-4-git-send-email-peter.maydell@linaro.org
The current undefreq field in the neon_3reg_wide handling allows us
to encode "UNDEF if size != 0" and "UNDEF if size == 0". This is
no longer sufficient with the advent of 64-bit polynomial VMULL,
which means we want to UNDEF if size == 1. Change the undefreq
encoding to use separate bits for all of "UNDEF if size == 0",
"UNDEF if size == 1" and "UNDEF if size == 2".
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1401386724-26529-3-git-send-email-peter.maydell@linaro.org
This adds support for the SHA1 and SHA256 instructions that are available
on some v8 implementations of AArch32.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1401386724-26529-2-git-send-email-peter.maydell@linaro.org
[PMM:
* rebase
* fix bad indent
* add a missing UNDEF check for Q!=1 in the 3-reg SHA1/SHA256 case
* use g_assert_not_reached()
* don't re-extract bit 6 for the 2-reg-misc encodings
* set the ELF HWCAP2 bits for the new features
]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In v8 page tables bit 54 in the PTE is UXN in the EL0/EL1 translation regimes
and XN elsewhere. In v7 the bit is always XN. Since we only emulate EL0/EL1 we
can just treat this bit as UXN whenever we are in v8 mode.
Also correctly extract the upper attributes from the PTE entry; the v8 version
tried to avoid extracting the CONTIG bit and ended up with the upper bits being
off by one. Instead behave the same as v7 and extract (but ignore) the CONTIG
bit.
This fixes "Bad mode in Synchronous Abort handler detected, code 0x8400000f"
seen when modprobing modules under Linux.
Signed-off-by: Ian Campbell <ijc@hellion.org.uk>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Claudio Fontana <claudio.fontana@huawei.com>
Cc: Rob Herring <robherring2@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch changes some readfns/writefns to use raw_write
and raw_read functions, which use the fieldoffset specified
in ARMCPRegInfo instead of directly accessing the field.
This will simplify patches for EL3 & Security Extensions.
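A hedged sketch of the raw accessor pattern; the CPREG_FIELD64/CPREG_FIELD32
macros are assumed to dereference ri->fieldoffset within CPUARMState:
    static uint64_t raw_read_sketch(CPUARMState *env, const ARMCPRegInfo *ri)
    {
        if (ri->type & ARM_CP_64BIT) {
            return CPREG_FIELD64(env, ri);   /* read through ri->fieldoffset */
        }
        return CPREG_FIELD32(env, ri);
    }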
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Fabian Aggeler <aggelerf@ethz.ch>
Message-id: 1401962428-14749-1-git-send-email-aggelerf@ethz.ch
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
cpu64.c contains a reginfo list for the impdef registers on
the Cortex-A57; however we forgot to actually call define_arm_cp_regs(),
so it was sitting there doing nothing. Remedy this omission.
Message-id: 1401226259-23121-1-git-send-email-peter.maydell@linaro.org
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Tested-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This will collect all load and store helpers soon. For now
it is just a replacement for softmmu_exec.h, which this patch
stops including directly, but we also include it where this will
be necessary in order to simplify the next patch.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
These will soon require cpu_ldst.h, so move them out of cpu.h.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
They do not need to be in op_helper.c. Because cputlb.c now includes
softmmu_template.h twice for each size, io_readX must be elided the
second time through.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Rather than include helper.h with N values of GEN_HELPER, include a
secondary file that sets up the macros to include helper.h. This
minimizes the files that must be rebuilt when changing the macros
for file N.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1400980132-25949-24-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1400980132-25949-23-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1400980132-25949-22-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1400980132-25949-21-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Adds support for ERET to and from AArch64 EL2 and 3.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1400980132-25949-20-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1400980132-25949-19-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1400980132-25949-18-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1400980132-25949-17-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1400980132-25949-16-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1400980132-25949-15-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1400980132-25949-14-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add aarch64_banked_spsr_index(), used to map an Exception Level
to an index in the banked_spsr array.
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1400980132-25949-13-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1400980132-25949-12-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1400980132-25949-11-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1400980132-25949-10-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
No functional change.
Preparation for adding EL2 and 3 versions of this reg.
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1400980132-25949-9-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
No functional change.
Prepares for future addition of EL2 and 3 versions of this reg.
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1400980132-25949-8-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
No functional change.
Prepares for future additions of the EL2 and 3 versions of this reg.
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1400980132-25949-7-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Avoid using IS_USER directly as the MMU-idx to simplify future
changes to the MMU layout.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1400980132-25949-5-git-send-email-edgar.iglesias@gmail.com
Message-id: 1400805738-11889-6-git-send-email-edgar.iglesias@gmail.com
[PMM: parts relating to LDRT/STRT moved into earlier patches]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The SRS instruction was using a hardcoded 0 for the memory
accesses. This happens to be OK since the SRS instruction is
UNPREDICTABLE in User and System modes, but is awkward if we
want to rearrange the MMU index uses. Switch to using
get_mem_index() like all the other accesses.
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1400980132-25949-4-git-send-email-edgar.iglesias@gmail.com
Clean up the mmu index handling for ldrt/strt insns: instead
of a flag 'user' indicating whether to treat the store as user
mode or not, use 'memidx' to indicate the correct memory index to use.
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1400980132-25949-3-git-send-email-edgar.iglesias@gmail.com
In ARMv7 the CPACR register allows control of access rights to the
coprocessor 0-13 interfaces. Bits corresponding to unimplemented
coprocessors should be RAZ/WI. Bits ASEDIS, D32DIS and TRCDIS are
UNK/SBZP if VFP is not implemented and RAO/WI in some cases.
We treat TRCDIS as RAZ/WI since we neither implement a trace
macrocell nor a CP14 interface to the trace macrocell registers.
Since CPACR bits for VFP/Neon access are honoured with the CPACR_FPEN
bit in the TB flags, flushing the TLB is not necessary anymore.
Signed-off-by: Fabian Aggeler <aggelerf@ethz.ch>
Message-id: 1400532968-30668-1-git-send-email-aggelerf@ethz.ch
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Commit 50a2c6e55f introduced a bug where QEMU would segfault on startup
when using KVM on ARM hosts, because kvm_arm_reset_cpu() accesses
cpu->cpreg_reset_values, which is not allocated before
kvm_arch_init_vcpu(). Fix this by not calling cpu_reset() until after
qemu_init_vcpu().
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Andreas Faerber <afaerber@suse.de>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Message-id: 1401194263-13010-1-git-send-email-christoffer.dall@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Linux makes a habit of writing the same value to the SCTLR that it
already holds. In a sample boot of the kernel to a shell prompt
it wrote the SCTLR with the value it already held 325465 times,
and wrote different values just 3 times.
Skip flushing the TLB if the SCTLR value isn't actually being changed;
this speeds up my sample boot by 3-5%.
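A sketch of the fast path (raw_read()/raw_write() are the fieldoffset-based
accessors assumed elsewhere in this series; the rest is illustrative, not a
quote of the patch):
    static void sctlr_write_sketch(CPUARMState *env, const ARMCPRegInfo *ri,
                                   uint64_t value)
    {
        if (raw_read(env, ri) == value) {
            return;                              /* unchanged value: skip the flush */
        }
        raw_write(env, ri, value);
        tlb_flush(CPU(arm_env_get_cpu(env)), 1); /* SCTLR changes can affect translation */
    }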
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1399560029-19007-1-git-send-email-peter.maydell@linaro.org
After commit 767adce2d, they are redundant. This way we don't assign them
except when needed. Once there, there were lots of cases where the ".fields"
indentation was wrong:
.fields      = (VMStateField []) {
and
.fields = (VMStateField []) {
Change all the combinations to:
.fields = (VMStateField[]){
The biggest problem (apart from aesthetics) was that checkpatch complained
when we copy&pasted the code from one place to another.
Signed-off-by: Juan Quintela <quintela@redhat.com>
[PMM: fixed minor conflict, corrected commit message typos]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Now that we have a CPU object with a reset method, it is better to
keep the KVM reset close to the CPU reset. Using qemu_register_reset
as we do now keeps them far apart.
With this patch, PPC no longer calls the kvm_arch_ function, so
it can get removed there. Other arches call it from their CPU
reset handler, and the function gets an ARMCPU/X86CPU/S390CPU.
Note that ARM- and s390-specific functions are called kvm_arm_*
and kvm_s390_*, while x86-specific functions are called kvm_arch_*.
That follows the convention used by the different architectures.
Changing that is the topic of a separate patch.
Reviewed-by: Gleb Natapov <gnatapov@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
As the macro verifies the value is positive, rename it
to make the function clearer.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Harmless typo as opc1 defaults to zero and opc2 gets
re-declared to its correct value.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1398926097-28097-4-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For linked branches, updates to the link register happen
conceptually after the read of the branch target register.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Cc: qemu-stable@nongnu.org
Message-id: 1398926097-28097-3-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1398926097-28097-2-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
As was done for WFE on AArch32, implement both WFE and YIELD as a
yield operation. This speeds up multi-core system emulation.
Signed-off-by: Rob Herring <rob.herring@linaro.org>
Message-id: 1397588401-20366-1-git-send-email-robherring2@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
XScale defines some implementation-specific coprocessor registers
for doing cache lockdown operations. Since QEMU doesn't model a
cache no proper implementation is possible, but NOP out the
registers so that guest code like u-boot that tries to use them
doesn't crash.
Reported-by: <prqek@centrum.cz>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The test for the U bit was incorrectly inverted in the scalar case of SQXTUN.
This doesn't affect the vector case as the U bit is used to select XTN(2).
Reported-by: Hao Liu <hao.liu@arm.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The smlald (and probably smlsld) instruction was doing incorrect sign
extensions of the operands during the 64-bit result calculation. The
instruction pseudo-code is:
operand2 = if m_swap then ROR(R[m],16) else R[m];
product1 = SInt(R[n]<15:0>) * SInt(operand2<15:0>);
product2 = SInt(R[n]<31:16>) * SInt(operand2<31:16>);
result = product1 + product2 + SInt(R[dHi]:R[dLo]);
R[dHi] = result<63:32>;
R[dLo] = result<31:0>;
The result calculation should be done in 64 bit arithmetic, and hence
product1 and product2 should be sign extended to 64b before calculation.
The current implementation was adding product1 and product2 together
then sign-extending the intermediate result leading to false negatives.
E.g. if product1 = product2 = 0x40000000, their sum = 0x80000000, which
will be incorrectly interpreted as negative on sign extension.
We fix by doing the 64b extensions on both product1 and product2 before
any addition/subtraction happens.
We also fix where we were possibly incorrectly setting the Q saturation
flag for SMLSLD, which the ARM ARM specifically says is not set.
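To illustrate the difference in isolation (requires <stdint.h>; not the
in-tree code):
    static int64_t acc_buggy(int32_t product1, int32_t product2)
    {
        /* 32-bit sum first, then sign-extend: 0x40000000 + 0x40000000 wraps negative */
        return (int64_t)(int32_t)((uint32_t)product1 + (uint32_t)product2);
    }

    static int64_t acc_fixed(int32_t product1, int32_t product2)
    {
        /* sign-extend each product to 64 bits before the addition */
        return (int64_t)product1 + (int64_t)product2;
    }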
Reported-by: Christina Smith <christina.smith@xilinx.com>
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 2cddb6f5a15be4ab8d2160f3499d128ae93d304d.1397704570.git.peter.crosthwaite@xilinx.com
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Clean up useless 'break' statement after 'return' statement.
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For system mode, we may have a 64 bit CPU which is currently executing
in AArch32 state; if we're dumping CPU state to the logs we should
therefore show the correct state for the current execution state,
rather than hardwiring it based on the type of the CPU. For consistency
with how we handle translation, we leave the 32 bit dump function
as the default, and have it hand off control to the 64 bit dump code
if we're in AArch64 mode.
Reported-by: Rob Herring <rob.herring@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The AArch64 implementation of the set_pc method needs to be updated to
handle the possibility that the CPU is in AArch32 mode; otherwise there
are weird crashes when doing interprocessing in system emulation mode
when an interrupt occurs and we fail to resynchronize the 32-bit PC
with the TB we need to execute next.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
The Cortex-A15's CBAR register is actually read-only (unlike that
of the Cortex-A9). Correct our model to match the hardware.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
The Cortex-A57, like most of the other ARM cores, has a CBAR
register which defines the base address of the per-CPU
peripherals. However it has a 64-bit view as well as a
32-bit view; expand the QOM reset-cbar property from UINT32
to UINT64 so this can be specified, and implement the
32-bit and 64-bit views of a 64-bit CBAR.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Implement a subset of the Cortex-A57's implementation defined system
registers. We provide RAZ/WI or reads-as-constant/writes-ignored
implementations of the various control and syndrome registers.
We do not implement registers which provide direct access to and
manipulation of the L1 cache, since QEMU doesn't implement caches.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Implement the AArch64 RVBAR register, which indicates the reset
address. Since the reset address is implementation defined and
usually configurable by setting config signals in hardware, we
also provide a QOM property so it can be set at board level if
necessary.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Implement the AArch64 address translation operations.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Implement the auxiliary fault status registers AFSR0_EL1 and
AFSR1_EL1. These are present on v7 and later, and have IMPDEF
behaviour; we choose to RAZ/WI for all cores.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Many of the reginfo definitions in cp_reginfo[] use CP_ANY wildcards.
This is for a combination of reasons:
* early ARM implementations really did underdecode
* earlier versions of QEMU underdecoded and we can't tighten
this up because we don't know if guests really require this or not
* implementation convenience
For ARMv8 the architecture has tightened things up and system and
coprocessor registers are always specifically decoded. We take
advantage of this opportunity for a clean break by restricting
our CP_ANY wildcarded reginfo to pre-v8 CPUs, and providing
specifically decoded versions where necessary for v8 CPUs.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
In ARMv8 the 32 bit coprocessor ID register space is tidied up to
remove the wildcarded aliases of the MIDR and the RAZ behaviour
for the unassigned space where crm = 3..7. Make sure we don't
expose these wildcards for v8 cores. This means we need to have
a specific implementation for REVIDR, an IMPDEF register which
may be the same as the MIDR (and which we always implement as such).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
The AArch64 usermode 'any' CPU type was accidentally specified
with the ARM_FEATURE_THUMB2EE bit set. This is incorrect since
ARMv8 removes Thumb2EE completely. Since we never implemented
Thumb2EE anyway having the feature bit set was fairly harmless
for user-mode, but the correct thing is to not set it at all.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Implement the ISR_EL1 register. This is actually present in
ARMv7 as well but was previously unimplemented. It is a
read-only register that indicates whether interrupts are
currently pending.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Implement the AArch64 view of the ACTLR (auxiliary control
register). Note that QEMU internally tends to call this
AUXCR for historical reasons.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Implement AArch64 view of the CONTEXTIDR register.
We tighten up the condition when we flush the TLB on a CONTEXTIDR
write to avoid needlessly flushing the TLB every time on a 64
bit system (and also on a 32 bit system using LPAE, as a bonus).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
All the AArch32 ID registers are visible from AArch64
(in addition to the AArch64-specific ID_AA64* registers).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
For ARMv8 there are two changes to the MVFR media feature registers:
* there is a new MVFR2 which is accessible from 32 bit code
* 64 bit code accesses these via the usual sysreg instructions
rather than with a floating-point specific instruction
Implement this.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Implement exception handling for AArch64 EL1. Exceptions from AArch64 or
AArch32 EL0 are supported.
Signed-off-by: Rob Herring <rob.herring@linaro.org>
[PMM: fixed minor style nits; updated to match changes in
previous patches; added some of the simpler cases of
illegal-exception-return support]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Move arm_log_exception() into internals.h so we can use it from
helper-a64.c for the AArch64 exception entry code.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Implement the AArch64 SPSR_EL1. For compatibility with how KVM
handles SPSRs and with the architectural mapping between AArch32
and AArch64, we put this in the banked_spsr[] array in the slot
that is used for SVC in AArch32. This means we need to extend the
array from uint32_t to uint64_t, which requires some reworking
of the 32 bit KVM save/restore code.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Implement handling for the AArch64 SP_EL0 system register.
This holds the EL0 stack pointer, and is only accessible when
it's not being used as the stack pointer, ie when we're in EL1
and EL1 is using its own stack pointer. We also provide a
definition of the SP_EL1 register; this isn't guest visible
as a system register for an implementation like QEMU which
doesn't provide EL2 or EL3; however it is useful for ensuring
the underlying state is migrated.
We need to update the state fields in the CPU state whenever
we switch stack pointers; this happens when we take an exception
and also when SPSEL is used to change the bit in PSTATE which
indicates which stack pointer EL1 should use.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>