This fixes a compile warning from clang 3.5 (the assertion
could never fire).
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1432881807-18164-2-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: John Snow <jsnow@redhat.com>
[PMM: added note in commit message that this is fixing a build warning]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add support for trapping WFI and WFE instructions to the proper EL when
SCTLR/SCR/HCR settings apply.
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
[PMM: removed unnecessary tweaking of syn_wfx() prototype;
use raise_exception();
don't trap on WFE (and add comment explaining why not);
remove unnecessary ARM_FEATURE checks;
trap to EL3, not EL1, if in S-EL0 and SCTLR check fires]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
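A minimal, self-contained C sketch of the routing decision this enables for WFI (the struct and field names are simplified stand-ins, not QEMU's definitions; WFE is left untrapped, per the note above):

    #include <stdbool.h>

    struct wfx_trap_cfg {
        int  current_el;   /* 0..3 */
        bool secure;       /* executing in Secure state */
        bool sctlr_ntwi;   /* SCTLR.nTWI: when set, EL0 WFI is NOT trapped */
        bool hcr_twi;      /* HCR.TWI / HCR_EL2.TWI */
        bool scr_twi;      /* SCR.TWI / SCR_EL3.TWI */
    };

    /* Return the EL a WFI executed at the current EL should trap to,
     * or 0 if it is not trapped. */
    static int wfi_trap_target_el(const struct wfx_trap_cfg *c)
    {
        if (c->current_el == 0 && !c->sctlr_ntwi) {
            /* From Secure EL0 with no Secure EL1, route the trap to EL3. */
            return c->secure ? 3 : 1;
        }
        if (c->current_el < 2 && !c->secure && c->hcr_twi) {
            return 2;
        }
        if (c->current_el < 3 && c->scr_twi) {
            return 3;
        }
        return 0;
    }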
Just NOP the WFI instruction if we have work to do.
This doesn't make much difference currently (though it does avoid
jumping out to the top level loop and immediately restarting),
but the distinction between "halt" and "don't halt" will become
more important when the decision to halt requires us to trap
to a higher exception level instead.
Suggested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
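A short, standalone sketch of the intended behaviour (stand-in types only, not the actual helper):

    #include <stdbool.h>

    struct wfi_cpu_sk {
        bool has_work;   /* e.g. a pending interrupt */
        bool halted;
    };

    /* WFI completes as a NOP when the CPU already has work pending;
     * only an idle CPU actually halts and returns to the main loop. */
    static void do_wfi_sk(struct wfi_cpu_sk *cpu)
    {
        if (cpu->has_work) {
            return;              /* NOP: continue with the next instruction */
        }
        cpu->halted = true;      /* halt until an interrupt arrives */
    }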
Some coprocessor access functions will need to indicate that the
instruction should trap to EL2 or EL3 rather than the default
target exception level; add corresponding CPAccessResult enum
entries and handling code.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
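A sketch of the shape of this change (enum entries and the mapping to a target EL are illustrative, not the exact QEMU definitions):

    /* An access-check result that can name a specific target EL, and how
     * exception-raising code might turn it into one.  Illustrative only. */
    typedef enum {
        ACCESS_OK,
        ACCESS_TRAP,        /* trap to the default target EL */
        ACCESS_TRAP_EL2,    /* route the trap to EL2 */
        ACCESS_TRAP_EL3,    /* route the trap to EL3 */
    } AccessResultSk;

    static int access_target_el_sk(AccessResultSk r, int default_el)
    {
        switch (r) {
        case ACCESS_TRAP_EL2:
            return 2;
        case ACCESS_TRAP_EL3:
            return 3;
        case ACCESS_TRAP:
            return default_el;
        default:
            return -1;      /* ACCESS_OK: no trap */
        }
    }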
Rather than making every caller of raise_exception set the
syndrome and target EL by hand, make these arguments to
raise_exception() and have that do the job.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
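The resulting call shape, as a standalone sketch (stand-in types; the real helper also exits back to the CPU main loop, which is omitted here):

    #include <stdint.h>

    struct exc_state_sk {
        uint32_t exception_index;
        uint32_t syndrome;
        uint32_t target_el;
    };

    /* One place records the syndrome and target EL, instead of every
     * caller setting them by hand before raising the exception. */
    static void raise_exception_sk(struct exc_state_sk *cs, uint32_t excp,
                                   uint32_t syndrome, uint32_t target_el)
    {
        cs->exception_index = excp;
        cs->syndrome = syndrome;
        cs->target_el = target_el;
    }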
Set the exception target EL for MMU faults in tlb_fill.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Move the code which sets exception information out of
arm_cpu_handle_mmu_fault and into tlb_fill. tlb_fill
is the only caller which wants to raise_exception()
so it makes more sense for it to handle the whole of
the exception setup.
As part of this cleanup, move the user-mode-only
implementation function for the handle_mmu_fault CPU
method into cpu.c so we don't need to make it globally
visible, and rename the softmmu-only utility function
arm_cpu_handle_mmu_fault to arm_tlb_fill so it's clear
that it's not the same thing.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
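A rough sketch of the resulting division of labour (stand-in types and stubbed bodies; the real code builds a proper fault status/syndrome and picks the target EL from the translation regime):

    #include <stdint.h>
    #include <stdbool.h>

    enum { EXCP_DATA_ABORT_SK = 4 };   /* stand-in exception number */

    struct mmu_cpu_sk {
        uint32_t exception_index;
        uint32_t fsr;
        uint32_t target_el;
    };

    /* arm_tlb_fill: perform the translation, report success or a fault
     * status.  (Stubbed here: always succeeds.) */
    static bool arm_tlb_fill_sk(struct mmu_cpu_sk *cs, uint64_t addr,
                                bool is_write, int mmu_idx, uint32_t *fsr)
    {
        (void)cs; (void)addr; (void)is_write; (void)mmu_idx;
        *fsr = 0;
        return true;
    }

    /* tlb_fill: the only caller that wants to raise an exception, so all
     * of the exception setup now lives here. */
    static void tlb_fill_sk(struct mmu_cpu_sk *cs, uint64_t addr,
                            bool is_write, int mmu_idx)
    {
        uint32_t fsr;
        if (!arm_tlb_fill_sk(cs, addr, is_write, mmu_idx, &fsr)) {
            cs->exception_index = EXCP_DATA_ABORT_SK;
            cs->fsr = fsr;
            cs->target_el = 1;   /* chosen properly in the real code */
        }
    }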
If the SCTLR.UMA trap bit is set then attempts by EL0 to update
the PSTATE DAIF bits via "MSR DAIFSet, imm" and "MSR DAIFClr, imm"
instructions will raise an exception. We were failing to set
the syndrome information for this exception, which meant that
it would be reported as a repeat of whatever the previous
exception was. Set the correct syndrome information.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Updated the various helper routines to set the target EL as needed using a
dedicated function.
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1429722561-12651-3-git-send-email-greg.bellows@linaro.org
[PMM: Also set target_el in fault cases in access_check_cp_reg()]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add a CPU state exception target EL field that will be used for communicating
the EL to which an exception should be routed.
Add a disassembly context field for tracking the EL3 architecture needed for
determining the target exception EL.
Add a target EL argument to the generic exception helper so that callers can
specify the EL to which the exception should be routed, and extend the helper
to set the newly added CPU state exception target EL.
Add a function for setting the target exception EL and update calls to helpers
to use it.
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1429722561-12651-2-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
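A rough sketch of the new pieces (stand-in types; routing controls such as HCR/SCR bits are deliberately not modelled):

    #include <stdint.h>

    struct exc_info_sk {
        uint32_t syndrome;
        uint32_t target_el;   /* EL the exception should be routed to */
    };

    /* Sketch of the dedicated target-EL function: with no routing bits in
     * play, exceptions from EL0 go to EL1 and otherwise stay at the
     * current EL. */
    static uint32_t exception_target_el_sk(int current_el)
    {
        return current_el < 1 ? 1u : (uint32_t)current_el;
    }

    static void set_exception_sk(struct exc_info_sk *exc, uint32_t syndrome,
                                 int current_el)
    {
        exc->syndrome = syndrome;
        exc->target_el = exception_target_el_sk(current_el);
    }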
Fix a TODO in bp_wp_matches() now that we have a function for
testing whether the CPU is currently in Secure mode or not.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Now that we have memory access attribute information in the watchpoint
checking code, we can correctly implement handling of watchpoints
which should match only on userspace accesses; for this purpose
LDRT/STRT/LDT/STT issued from EL1 are treated as userspace accesses.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
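A sketch of the matching rule (stand-in types; address and size matching omitted):

    #include <stdbool.h>

    struct wp_sk { bool match_user_only; };    /* watchpoint matches EL0 only */

    /* Memory-access attributes: "user" is set for EL0 accesses and for
     * unprivileged loads/stores (LDRT/STRT/LDT/STT) issued at EL1. */
    struct attr_sk { bool user; };

    static bool wp_access_matches_sk(const struct wp_sk *wp,
                                     const struct attr_sk *attrs)
    {
        if (wp->match_user_only && !attrs->user) {
            return false;
        }
        return true;   /* other match conditions omitted */
    }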
Add AArch32 to AArch64 register synchronization functions.
Replace manual register synchronization with new functions in
aarch64_cpu_do_interrupt() and HELPER(exception_return)().
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1423736974-14254-4-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When EL3 is running in AArch32 (or ARMv7 with Security Extensions)
FCSEIDR, CONTEXTIDR, TPIDRURW, TPIDRURO and TPIDRPRW have a secure
and a non-secure instance.
Signed-off-by: Fabian Aggeler <aggelerf@ethz.ch>
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1416242878-876-25-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Implement the SMC instruction in AArch32 using the A32 syndrome. When the SMC
instruction is executed from Monitor mode, the SCR.NS bit is cleared.
Signed-off-by: Sergey Fedorov <s.fedorov@samsung.com>
Signed-off-by: Fabian Aggeler <aggelerf@ethz.ch>
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Message-id: 1413910544-20150-7-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Renamed the arm_current_pl CPU function so that its name more accurately
reflects that it returns the ARMv8 EL rather than the ARMv7 PL.
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1413910544-20150-5-git-send-email-greg.bellows@linaro.org
[PMM: fixed a minor merge resolution error in a couple of hunks]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add support for handling PSCI calls in system emulation. Both versions
0.1 and 0.2 of the PSCI spec are supported. Platforms can enable support
by setting the "psci-conduit" QOM property on the cpus to SMC or HVC
emulation and having a PSCI binding in their dtb.
Signed-off-by: Rob Herring <rob.herring@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1412865028-17725-7-git-send-email-peter.maydell@linaro.org
[PMM: made system reset/off PSCI functions power down the CPU so
we obey the PSCI API requirement never to return from them;
rearranged how the code is plumbed into the exception system,
so that we split "is this a valid call?" from "do the call"]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
SMC must UNDEF if EL3 is not implemented; similarly HVC UNDEFs
if EL2 is not implemented. Move the handling of this from
translate-a64.c into the pre_smc and pre_hvc helper functions.
This is necessary because use of these instructions for PSCI
takes precedence over this UNDEF case, and we can't tell if
this is a PSCI call until runtime.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1412865028-17725-5-git-send-email-peter.maydell@linaro.org
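A sketch of the runtime ordering this implies (illustrative; the names are not QEMU's):

    #include <stdbool.h>

    enum hvc_outcome_sk { HVC_PSCI_CALL, HVC_UNDEF, HVC_REAL_CALL };

    /* A PSCI call via HVC takes precedence; only then does the
     * "UNDEF if EL2 is not implemented" rule apply.  SMC/EL3 is analogous. */
    static enum hvc_outcome_sk classify_hvc_sk(bool is_psci_call, bool have_el2)
    {
        if (is_psci_call) {
            return HVC_PSCI_CALL;
        }
        if (!have_el2) {
            return HVC_UNDEF;
        }
        return HVC_REAL_CALL;
    }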
At the moment we try to handle c15_cpar with the strategy of:
* emit generated code which makes assumptions about its value
* when the register value changes call tb_flush() to throw
away the now-invalid generated code
This works because XScale CPUs are always uniprocessor, but
it's confusing because it suggests that the same approach can
be taken for other registers. It also means we do a tb_flush()
on CPU reset, which makes multithreaded linux-user binaries
even more likely to fail than would otherwise be the case.
Replace it with a combination of TB flags for the access
checks done on cp0/cp1 for the XScale and iwMMXt instructions,
plus a runtime check for cp2..cp13 coprocessor accesses.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1411056959-23070-1-git-send-email-peter.maydell@linaro.org
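A sketch of the runtime part of the check (illustrative only):

    #include <stdbool.h>
    #include <stdint.h>

    /* On XScale/iwMMXt, CPAR bit n gates access to coprocessor n.  cp0/cp1
     * are handled via TB flags at translate time; cp2..cp13 are checked at
     * run time when the access actually happens. */
    static bool cpar_allows_access_sk(uint32_t c15_cpar, int cpnum)
    {
        if (cpnum < 2 || cpnum > 13) {
            return true;                          /* not covered here */
        }
        return (c15_cpar & (1u << cpnum)) != 0;   /* clear bit => UNDEF */
    }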
Implement handling of breakpoint event firing to correctly
inject the debug exception into the guest.
Since the breakpoint and watchpoint control register formats are
very similar, we adjust wp_matches() to handle breakpoints as well
rather than using a separate function.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1410523465-13400-3-git-send-email-peter.maydell@linaro.org
Implement ARMv8 software single-step handling for A64 code:
correctly update the single-step state machine and generate
debug exceptions when stepping A64 code.
This patch has no behavioural change since MDSCR_EL1.SS can't
be set by the guest yet.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Set the PSTATE.SS bit correctly on exception returns from AArch64,
as required by the debug single-step functionality.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
The CPSR has a new-in-v8 execution state bit (IL), and
also some state (SS) which has effects in AArch32 but
appears only in the SPSR format and is RES0 in the CPSR.
Add the IL bit to CPSR_EXEC, and enforce that guest direct
reads and writes to CPSR can't read or write the RES0
bits, so the guest can't get at the SS bit which we store
in uncached_cpsr. This includes not permitting exception
returns to copy reserved bits from an SPSR into CPSR.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
Message-id: 1402994746-8328-3-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Break out code to save/restore AArch64 SP into functions.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
Message-id: 1402994746-8328-2-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
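A standalone sketch of the pair of functions (stand-in state; the real code operates on the CPU state's live SP and banked SP array):

    #include <stdint.h>
    #include <stdbool.h>

    struct sp_state_sk {
        uint64_t xregs31;     /* the live SP while executing AArch64 code */
        uint64_t sp_el[4];    /* banked stack pointers SP_EL0..SP_EL3 */
        bool     spsel;       /* PSTATE.SP: false => SP_EL0, true => SP_ELx */
        int      current_el;
    };

    static void save_sp_sk(struct sp_state_sk *s)
    {
        s->sp_el[s->spsel ? s->current_el : 0] = s->xregs31;
    }

    static void restore_sp_sk(struct sp_state_sk *s)
    {
        s->xregs31 = s->sp_el[s->spsel ? s->current_el : 0];
    }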
This will collect all load and store helpers soon. For now
it is just a replacement for softmmu_exec.h, which this patch
stops including directly; we also include it where it will be
needed, in order to simplify the next patch.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
They do not need to be in op_helper.c. Because cputlb.c now includes
softmmu_template.h twice for each size, io_readX must be elided the
second time through.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Rather than include helper.h with N values of GEN_HELPER, include a
secondary file that sets up the macros to include helper.h. This
minimizes the files that must be rebuilt when changing the macros
for file N.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Adds support for ERET to and from AArch64 EL2 and 3.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1400980132-25949-20-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1400980132-25949-18-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add aarch64_banked_spsr_index(), used to map an Exception Level
to an index in the banked_spsr array.
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1400980132-25949-13-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
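A sketch of the mapping (the concrete index values depend on the banked_spsr layout and are placeholders here):

    #include <assert.h>

    static int banked_spsr_index_sk(unsigned int el)
    {
        /* Placeholder slot numbers for EL1, EL2 and EL3. */
        static const int map[4] = { -1, 1, 6, 7 };
        assert(el >= 1 && el <= 3);
        return map[el];
    }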
No functional change.
Prepares for future additions of the EL2 and 3 versions of this reg.
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1400980132-25949-7-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Implement exception handling for AArch64 EL1. Exceptions from AArch64 or
AArch32 EL0 are supported.
Signed-off-by: Rob Herring <rob.herring@linaro.org>
[PMM: fixed minor style nits; updated to match changes in
previous patches; added some of the simpler cases of
illegal-exception-return support]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Implement handling for the AArch64 SP_EL0 system register.
This holds the EL0 stack pointer, and is only accessible when
it's not being used as the stack pointer, i.e. when we're in EL1
and EL1 is using its own stack pointer. We also provide a
definition of the SP_EL1 register; this isn't guest visible
as a system register for an implementation like QEMU which
doesn't provide EL2 or EL3; however it is useful for ensuring
the underlying state is migrated.
We need to update the state fields in the CPU state whenever
we switch stack pointers; this happens when we take an exception
and also when SPSEL is used to change the bit in PSTATE which
indicates which stack pointer EL1 should use.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
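The accessibility rule above, as a tiny standalone sketch (illustrative only; the real check lives in the register's access function):

    #include <stdbool.h>

    /* SP_EL0 is accessible as a system register only while it is not the
     * stack pointer currently in use, i.e. at EL1 or above with PSTATE.SP
     * set so that SP_ELx is selected. */
    static bool sp_el0_accessible_sk(int current_el, bool pstate_sp)
    {
        return current_el >= 1 && pstate_sp;
    }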
Add new helpers exception_with_syndrome (for generating an exception
with syndrome information) and exception_uncategorized (for generating
an exception with "Unknown or Uncategorized Reason", which has a syndrome
register value of zero), and use them to generate the correct syndrome
information for exceptions which are raised directly from generated code.
This patch includes moving the A32/T32 gen_exception_insn functions
further up in the source file; they will be needed for "VFP/Neon disabled"
exception generation later.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
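A sketch of the two helpers at the state-update level (stand-in types; the return to the main loop is omitted):

    #include <stdint.h>

    struct exc_sk {
        uint32_t exception_index;
        uint32_t syndrome;
    };

    /* Record both the exception number and the syndrome describing why
     * the exception was raised. */
    static void exception_with_syndrome_sk(struct exc_sk *cs, uint32_t excp,
                                           uint32_t syndrome)
    {
        cs->exception_index = excp;
        cs->syndrome = syndrome;
    }

    /* "Unknown or Uncategorized Reason": a syndrome register value of zero. */
    static void exception_uncategorized_sk(struct exc_sk *cs, uint32_t excp_udef)
    {
        exception_with_syndrome_sk(cs, excp_udef, 0);
    }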
For exceptions taken to AArch64, if a coprocessor/system register
access fails due to a trap or enable bit then the syndrome information
must include details of the failing instruction (crn/crm/opc1/opc2
fields, etc). Make the decoder construct the syndrome information
at translate time so it can be passed at runtime to the access-check
helper function and used as required.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
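An illustrative packing of those fields into a syndrome word at translate time (the bit positions follow the general shape of the ESR ISS encoding but should be treated as placeholders, and the exception-class bits are added later when the exception is actually taken):

    #include <stdint.h>
    #include <stdbool.h>

    static uint32_t sysreg_trap_syndrome_sk(int op0, int op1, int op2,
                                            int crn, int crm, int rt,
                                            bool isread)
    {
        return ((uint32_t)(op0 & 0x3) << 20) |
               ((uint32_t)(op2 & 0x7) << 17) |
               ((uint32_t)(op1 & 0x7) << 14) |
               ((uint32_t)(crn & 0xf) << 10) |
               ((uint32_t)(rt & 0x1f) << 5)  |
               ((uint32_t)(crm & 0xf) << 1)  |
               (isread ? 1u : 0u);
    }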
Currently cpu.h defines a mixture of functions and types needed by
the rest of QEMU and those needed only by files within target-arm/.
Split the latter out into a new header so they aren't needlessly
exposed further than required.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Note that while such functions may exist both for *-user and softmmu,
only *-user uses the CPUState hook, while softmmu reuses the prototype
for calling it directly.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Implement WFE to yield our timeslice to the next CPU.
This avoids slowdowns in multicore configurations caused
by one core busy-waiting on a spinlock which can't possibly
be unlocked until the other core has an opportunity to run.
This speeds up my test case A15 dual-core boot by a factor
of three (though it is still four or five times slower than
a single-core boot).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1393339545-22111-1-git-send-email-peter.maydell@linaro.org
Reviewed-by: Richard Henderson <rth@twiddle.net>
Tested-by: Rob Herring <rob.herring@linaro.org>
Implement the MSR (immediate) instructions, which can update the
PSTATE SP and DAIF fields.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
All cpreg read and write functions now return 0, so we can clean up
their prototypes:
* write functions return void
* read functions return the value rather than taking a pointer
to write the value to
This is a fairly mechanical change which makes only the bare
minimum set of changes to the callers of read and write functions.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
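The before/after shape of the prototypes, as a sketch with stand-in type names:

    #include <stdint.h>

    struct cpu_state_sk;    /* stand-in for the CPU state type */
    struct cpreg_info_sk;   /* stand-in for the cpreg info type */

    /* Before: read/write returned an error code, and reads wrote the
     * result through a pointer. */
    typedef int old_cp_read_fn(struct cpu_state_sk *env,
                               const struct cpreg_info_sk *ri,
                               uint64_t *value);
    typedef int old_cp_write_fn(struct cpu_state_sk *env,
                                const struct cpreg_info_sk *ri,
                                uint64_t value);

    /* After: reads return the value directly, writes return void. */
    typedef uint64_t new_cp_read_fn(struct cpu_state_sk *env,
                                    const struct cpreg_info_sk *ri);
    typedef void new_cp_write_fn(struct cpu_state_sk *env,
                                 const struct cpreg_info_sk *ri,
                                 uint64_t value);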