Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Message-id: 1441311266-8644-3-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There is an error in functions aarch64_sync_32_to_64() and
aarch64_sync_64_to_32() in the mapping of registers between AArch32 and
AArch64. This commit fixes the mapping to match the v8 ARM ARM
section D1.20.1 (table D1-77).
Signed-off-by: Sergey Sorokin <afarallax@yandex.ru>
Message-id: 1440796451-15276-1-git-send-email-afarallax@yandex.ru
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: tidied commit message a bit]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
All of these hw_errors are fatal and indicate something wrong with
the QEMU implementation.
Convert them to g_assert_not_reached().
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Message-id: 169194d09017e5725535d31a1507d454c0043706.1440842587.git.crosthwaite.peter@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Print semihosting debugging information before the
do_arm_semihosting() call so that angel_SWIreason_ReportException,
which causes the function to not return, gets the same debug prints as
other semihosting calls. Also print out the semihosting call number.
Signed-off-by: Christopher Covington <christopher.covington@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Christopher Covington <cov@codeaurora.org>
Message-id: 1439483745-28752-3-git-send-email-peter.maydell@linaro.org
Implement the AArch64 TLBI operations which take an intermediate
physical address and invalidate stage 2 translations.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1439548879-1972-7-git-send-email-peter.maydell@linaro.org
Implement the remaining stage 1 TLB invalidate operations
visible from EL3.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1439548879-1972-6-git-send-email-peter.maydell@linaro.org
Implement the missing TLBI operations that exist only
if EL2 is implemented.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1439548879-1972-5-git-send-email-peter.maydell@linaro.org
Now we have the ability to flush the TLB only for specific MMU indexes,
update the AArch64 TLB maintenance instruction implementations to only
flush the parts of the TLB they need to, rather than doing full flushes.
We take the opportunity to remove some duplicate functions (the per-ASID
TLB ops work like the non-per-ASID ones because we don't support
flushing a TLB only by ASID) and to bring the function names in line
with the architectural TLBI operation names.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1439548879-1972-4-git-send-email-peter.maydell@linaro.org
Move the two regdefs for TLBI ALLE1 and TLBI ALLE1IS down so that the
whole set of AArch64 TLBI regdefs is arranged in numeric order.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1439548879-1972-3-git-send-email-peter.maydell@linaro.org
Implement the AArch32 ATS1H* operations which perform
Hyp mode stage 1 translations.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1437751263-21913-6-git-send-email-peter.maydell@linaro.org
Apply the correct conditions in the ats_access() function for
the ATS12NSO* address translation operations:
* succeed at EL2 or EL3
* normal UNDEF trap from NS EL1
* trap to EL3 from S EL1 (only possible if EL3 is AArch64)
(This change means they're now available in our EL3-supporting
CPUs when they would previously always UNDEF.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1437751263-21913-5-git-send-email-peter.maydell@linaro.org
Wire up the AArch64 EL2 and EL3 address translation operations
(AT S12E1*, AT S12E0*, AT S1E2*, AT S1E3*), and correct some
errors in the ats_write64() function in previously unused code
that would have done the wrong kind of lookup for accesses from
EL3 when SCR.NS==0.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1437751263-21913-3-git-send-email-peter.maydell@linaro.org
For EL2 stage 1 translations, there is no TTBR1. We were already
handling this for 64-bit EL2; add the code to take the 'no TTBR1'
code path for 32-bit EL2 as well.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1437751263-21913-2-git-send-email-peter.maydell@linaro.org
We already implemented ACTLR_EL1; add the missing ACTLR_EL2 and
ACTLR_EL3, for consistency.
Since we don't currently have any CPUs that need the EL2/EL3
versions to reset to non-zero values, implement as RAZ/WI.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1438281398-18746-5-git-send-email-peter.maydell@linaro.org
The AFSR registers are implementation dependent auxiliary fault
status registers. We already implemented a RAZ/WI AFSR0_EL1 and
AFSR1_EL1; add the missing AFSR{0,1}_EL{2,3} for consistency.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1438281398-18746-4-git-send-email-peter.maydell@linaro.org
The AMAIR registers are for providing auxiliary implementation
defined memory attributes. We already implemented a RAZ/WI
AMAIR_EL1; add the EL2 and EL3 versions for consistency.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1438281398-18746-3-git-send-email-peter.maydell@linaro.org
Add the AArch64 registers MAIR_EL3 and TPIDR_EL3, which are the only
two registers for which we had implemented the 32-bit Secure equivalents
but not the 64-bit Secure versions.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1438281398-18746-2-git-send-email-peter.maydell@linaro.org
If EL3 is AArch32, then the secure physical timer is accessed via
banking of the registers used for the non-secure physical timer.
Implement this banking.
Note that the access controls for the AArch32 banked registers
remain the same as the physical-timer checks; they are not the
same as the controls on the AArch64 secure timer registers.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1437047249-2357-3-git-send-email-peter.maydell@linaro.org
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
On CPUs with EL3, there are two physical timers, one for Secure and one
for Non-secure. Implement this extra timer and the AArch64 registers
which access it.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1437047249-2357-2-git-send-email-peter.maydell@linaro.org
It's easy to accidentally define two cpregs which both try
to reset the same underlying state field (for instance a
clash between an AArch64 EL3 definition and an AArch32
banked register definition). If the two definitions disagree
about the reset value then the result is dependent on which
one happened to be reached last in the hashtable enumeration.
Add a consistency check to detect and assert in these cases:
after reset, we run a second pass where we check that the
reset operation doesn't change the value of the register.
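A minimal sketch of such a second-pass check (function name and flag
handling here are illustrative, not necessarily the exact QEMU code):

    static void cp_reg_check_reset(gpointer key, gpointer value, gpointer opaque)
    {
        /* Re-running reset must not change the register's value; if it
         * does, two regdefs are fighting over the same state field.
         */
        ARMCPRegInfo *ri = value;
        ARMCPU *cpu = opaque;
        uint64_t oldvalue, newvalue;

        if (ri->type & (ARM_CP_SPECIAL | ARM_CP_ALIAS | ARM_CP_NO_RAW)) {
            return;
        }
        oldvalue = read_raw_cp_reg(&cpu->env, ri);
        cp_reg_reset(key, value, opaque);
        newvalue = read_raw_cp_reg(&cpu->env, ri);
        assert(oldvalue == newvalue);
    }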
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1436797559-20835-1-git-send-email-peter.maydell@linaro.org
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1436791864-4582-6-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Prepare for adding the Hypervisor timer; no functional change.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1436791864-4582-5-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename gt_cnt_reset to gt_timer_reset as the function really
resets the timers and not the counters. Move the registration
from counter regs to timer regs.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1436791864-4582-4-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Adds control for trapping selected timer and counter accesses to EL2.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1436791864-4582-3-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Adds support for the virtual timer offset controlled by EL2.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1436791864-4582-2-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The SCTLR_EL3 cpreg definition was implicitly resetting the
register state to 0, which is both wrong and clashes with
the reset done via the SCTLR definition (since sctlr[3]
is unioned with sctlr_s). This went unnoticed until recently,
when an unrelated change (commit a903c449b4) happened to
perturb the order of enumeration through the cpregs hashtable for
reset such that the erroneous reset happened after the correct one
rather than before it. Fix this by marking SCTLR_EL3 as an alias,
so its reset is left up to the AArch32 view.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
TLBI ALLE1IS is an operation that invalidates TLB entries on all PEs
in the same Inner Shareable domain, not just on the current CPU. So we
must use tlbiall_is_write() here.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1435676538-31345-1-git-send-email-serge.fdrv@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove semihosting_enabled and semihosting_target and replace them with
SemihostingConfig structure containing equivalent fields. The structure
is defined in vl.c where it is actually set.
Also introduce a separate header file, include/exec/semihost.h, allowing
target-specific semihosting code to access the semihosting
configuration.
Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1434643256-16858-2-git-send-email-leon.alrae@imgtec.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Unified MPU only. Uses the ARM architecture major revision to switch
between PMSAv5 and PMSAv7 when ARM_FEATURE_MPU is set. PMSAv6 remains
unsupported and is asserted against.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: dcb03cda6dd754c5cc6a962fa11f25089811e954.1434501320.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Define the arm CP registers for PMSAv7 and their accessor functions.
RGNR serves as a shared index that indexes into arrays storing the
DRBAR, DRSR and DRACR registers. DRBAR and friends have to be migrated
(VMSD'd) separately from the CP interface, using a new PMSA-specific
VMSD subsection.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 172cf135fbd8f5cea413c00e71cc1c3cac704744.1434501320.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Define the MPUIR register for MPU-supporting CPUs, ARMv6 and onwards.
Currently we only support unified MPU.
The size of the unified MPU is defined via the number of "dregions".
So just a single config is added to specify this size. (When split MPU
is implemented we will add an extra iregions config).
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 9f248950b803a08c8b3c978931663182f7e882e7.1434501320.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
cp_reg_reset() is called from g_hash_table_foreach() which does not
define a specific ordering of the hash table iteration. Thus doing reset
for registers marked as ALIAS would give an ambiguous result when
resetvalue is different for original and alias registers. Exit
cp_reg_reset() early when passed an alias register. Then clean up alias
register definitions from needless resetvalue and resetfn.
In particular, this fixes a bug in the handling of the PMCR register,
which had different resetvalues for its 32 and 64-bit views.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1434554713-10220-1-git-send-email-serge.fdrv@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add a boolean for indicating uniprocessors with MP extensions. This
drives the U bit in MPIDR. Prepares support for Cortex-R5.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: a70a80583df265e0174f01fa1fc92b33ea6d1db5.1434066412.git.peter.crosthwaite@xilinx.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Currently, the return code for get_phys_addr is overloaded for both
success/fail and FSR value return. This doesn't handle the case where
there is an error with a 0 FSR. This case exists in PMSAv7.
So rework get_phys_addr and friends to return a success/failure boolean
return code and populate the FSR via a caller provided uint32_t
pointer.
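Roughly, a caller of the reworked interface looks like this (parameter list
abridged; the "true means fault" convention is an assumption for this sketch):

    uint32_t fsr;
    hwaddr phys_addr;
    target_ulong page_size;
    int prot;

    if (get_phys_addr(env, address, access_type, mmu_idx,
                      &phys_addr, &prot, &page_size, &fsr)) {
        /* Translation failed: fsr holds the FSR value to report,
         * which may legitimately be 0 (the PMSAv7 case).
         */
    } else {
        /* Success: phys_addr, prot and page_size are valid. */
    }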
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: a209e3d8ae00cda55260c970891f520210e26bad.1434066412.git.peter.crosthwaite@xilinx.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
V6+ PMSA and VMSA share some common registers that are currently
in the VMSA definition block. Split them out into a new def that can
be shared to PMSA.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 284db78a43c63c9bfbb60de539672c361bcb6af8.1434066412.git.peter.crosthwaite@xilinx.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These registers are VMSA specific so they should be conditional on
VMSA (i.e. !MPU).
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 7bb8843e45f2635c6b7a583c5bb5da51ed4442a0.1434066412.git.peter.crosthwaite@xilinx.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If doing a PMSA (MPU) system do not define the VMSA specific TLBTR CP.
The def is done separately from VMSA registers group as it is affected
by both the OMAP/STRONGARM RW errata and the MIDR backgrounding.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: b03fea3840207edf633f5c9189400c3dd6a28d14.1434066412.git.peter.crosthwaite@xilinx.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When we're using KVM, the kernel's internal idea of the MPIDR
affinity fields must match the values we tell it for the guest
vcpu cluster configuration in the device tree. Since at the moment
the kernel doesn't support letting userspace tell it the correct
affinity fields to use, we must read the kernel's view and
reflect that back in the device tree.
Signed-off-by: Shlomo Pongratz <shlomo.pongratz@huawei.com>
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
Message-id: 02f601d0a1e6$90c7d630$b2578290$@samsung.com
[PMM: Use a local #define rather than a global variable for
the TCG ARM_CPUS_PER_CLUSTER setting. Tweak a comment. Update the
commit message.]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
According to the ARMv8 ARM, there are additional aliases of the MIDR system
register in AArch32 state. So add them to the list.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1433321048-23793-3-git-send-email-serge.fdrv@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
According to the ARM Cortex-A53/A57 TRMs, the REVIDR reset value should be
zero. So let the REVIDR reset value be specified by the CPU model and correct
it for Cortex-A53/A57.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1433321048-23793-2-git-send-email-serge.fdrv@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Since ARMv7 with LPAE support, a supersection short translation table
descriptor has had extended base address fields which hold bits 39:32 of
translated address. These fields are IMPDEF in ARMv6 and ARMv7 without
LPAE support.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1433235718-30485-1-git-send-email-serge.fdrv@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The old ARMv5-style page table format includes a kind of second level
descriptor named the "extended small page" format, whose primary purpose
is to allow specification of the TEX memory attribute bits on a 4K page.
This exists on ARMv6 and also (as an implementation extension) on XScale
CPUs; it's UNPREDICTABLE on v5.
We were mishandling this in two ways:
(1) we weren't implementing it for v6 (probably never noticed because
Linux will use the new-style v6 page table format there)
(2) we were not correctly setting the page_size, which is 4K, not 1K
The latter bug went unnoticed for years because the only thing which
the page_size affects is which TLB entries get flushed when the guest
does a TLB invalidate on an address in the page, and prior to commit
2f0d8631b7 we were doing a full TLB flush very frequently due to Linux's
habit of writing the SCTLR pointlessly a lot.
(We can assume that after commit 2f0d8631b7 the bug went unnoticed
for a year because nobody's actually using the Zaurus/XScale emulation...)
Report the correct page size for these descriptors, and permit them
on ARMv6 CPUs. This fixes a problem where a Zaurus kernel image
boots OK but gets random segfaults when it tries to run userspace
programs.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1432844085-16441-1-git-send-email-peter.maydell@linaro.org
The ARMCPRegInfo arrays v8_el3_no_el2_cp_reginfo and v8_el2_cp_reginfo
are actually used on non-v8 CPUs as well. Remove the incorrect v8_
prefix from their names.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 1433182716-6400-1-git-send-email-peter.maydell@linaro.org
Since we now require GLib 2.22+ (commit f40685c), we don't have to
work around lack of g_hash_table_get_keys() anymore.
This reverts commit 82a3a11897.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-id: 1432749090-4698-1-git-send-email-armbru@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1432881807-18164-11-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1432881807-18164-10-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1432881807-18164-9-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1432881807-18164-8-git-send-email-edgar.iglesias@gmail.com
[PMM: Switch to preferred opc1/crm order for 64-bit AArch32 cpregs;
drop unneeded use of vmsa_ttbr_writefn]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1432881807-18164-7-git-send-email-edgar.iglesias@gmail.com
[PMM: reordered fields into preferred opc0/opc1/crn/crm/opc2 order]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1432881807-18164-6-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1432881807-18164-5-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1432881807-18164-4-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Break the overly broad wildcard definition of TLB_LOCKDOWN down to
the v7 level.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1432881807-18164-3-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Adds CPTR_EL2/3 system register definitions and an access function.
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
[PMM: merge CPTR_EL2 and HCPTR definitions into a single
def using STATE_BOTH;
don't use readfn/writefn to implement RAZ/WI registers;
don't use accessfn for the no-EL2 CPTR_EL2;
fix cpacr_access logic to catch EL2 accesses to CPACR being
trapped to EL3;
use new CP_ACCESS_TRAP_EL[23] rather than setting
exception.target_el directly]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Updated the interrupt handling to utilize and report through the target EL
exception field. This includes consolidating and cleaning up code where
needed. Target EL is now calculated once in arm_cpu_exec_interrupt() and
do_interrupt was updated to use the target_el exception field. The
necessary code from arm_excp_target_el() was merged in where needed and the
function removed.
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1429722561-12651-4-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move the code which sets exception information out of
arm_cpu_handle_mmu_fault and into tlb_fill. tlb_fill
is the only caller which wants to raise_exception()
so it makes more sense for it to handle the whole of
the exception setup.
As part of this cleanup, move the user-mode-only
implementation function for the handle_mmu_fault CPU
method into cpu.c so we don't need to make it globally
visible, and rename the softmmu-only utility function
arm_cpu_handle_mmu_fault to arm_tlb_fill so it's clear
that it's not the same thing.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1431499963-1019-4-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Updated get_phys_addr_lpae to check the appropriate TTBCR/TCR depending on the
current EL. Support includes using the different TCR format as well as checks to
ensure TTBR1 is not used when in EL2 or EL3.
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1429722561-12651-8-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add a utility function for choosing the correct TTBR system register based on
the specified MMU index. Use this function in the physical address lookup.
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1429722561-12651-7-git-send-email-greg.bellows@linaro.org
[PMM: fixed regime_ttbr() return type to be uint64_t]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The extra information is not yet used but it is now available.
This requires minor changes through all of the TCG backends.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Updated scr_write to always allow updates to the SCR.SMD bit on ARMv8
regardless of whether virtualization (EL2) is enabled or not.
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Message-id: 1429888797-4378-1-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename the field holding the CPACR_EL1 system register state to follow the
AArch64 naming style.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
[PMM: also fixed a couple of missed occurrences in cpu.c]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add a transaction attribute indicating that a memory access is being
done from user-mode (unprivileged). This corresponds to an equivalent
signal in ARM AMBA buses.
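As a small illustration, an access can be tagged as unprivileged roughly like
this (sketch only; the point is the new 'user' attribute bit):

    MemTxAttrs attrs = {};   /* start from an all-zeroes attribute set */
    attrs.user = 1;          /* transaction is made as if from user mode,
                              * e.g. for LDRT/STRT-style accesses */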
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Factor out the page table walk memory accesses into their own function,
so that we can specify the correct S/NS memory attributes for them.
This will also provide a place to use the correct endianness and
handle the need for a stage-2 translation when virtualization is
supported.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Honour the NS bit in ARM page tables:
* when adding entries to the TLB, include the Secure/NonSecure
transaction attribute
* set the NS bit in the PAR when doing ATS operations
Note that we don't yet correctly use the NSTable bit to
cause the page table walk itself to use the right attributes.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
The AArch64 SPSR_EL1 register is architecturally mandated to
be mapped to the AArch32 SPSR_svc register. This means its
state should live in QEMU's env->banked_spsr[1] field.
Correct the various places in the code that incorrectly
put it in banked_spsr[0].
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For the ARM M-profile cores, exception return pops various registers
including the PC from the stack. The architecture defines that if the
lowest bit in the new PC value is set (ie the PC is not halfword
aligned) then behaviour is UNPREDICTABLE. In practice hardware
implementations seem to simply ignore the low bit, and some buggy
RTOSes incorrectly rely on this. QEMU's behaviour was architecturally
permitted, but bringing QEMU into line with the hardware behaviour
allows more guest code to run. We log the situation as a guest error.
This was reported as LP:1428657.
Reported-by: Anders Esbensen <anders@lyes.dk>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch makes the following changes to the determination of
whether an address is executable, when translating addresses
using LPAE.
1. No longer assumes that PL0 can't execute when it can't read.
It can in AArch64, a difference from AArch32.
2. Use va_size == 64 to determine we're in AArch64, rather than
arm_feature(env, ARM_FEATURE_V8), which is insufficient.
3. Add additional XN determinants
- NS && is_secure && (SCR & SCR_SIF)
- WXN && (prot & PAGE_WRITE)
- AArch64: (prot_PL0 & PAGE_WRITE)
- AArch32: UWXN && (prot_PL0 & PAGE_WRITE)
- XN determination should also work in secure mode (untested)
- XN may even work in EL2 (currently impossible to test)
4. Cleans up the bloated PAGE_EXEC condition - by removing it.
The helper get_S1prot is introduced. It may even work in EL2,
when support for that comes, but, as the function name implies,
it only works for stage 1 translations.
Signed-off-by: Andrew Jones <drjones@redhat.com>
Message-id: 1426099139-14463-4-git-send-email-drjones@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Introduce simple_ap_to_rw_prot(), which has the same behavior as
ap_to_rw_prot(), but takes the 2-bit simple AP[2:1] instead of
the 3-bit AP[2:0]. Use this in get_phys_addr_v6 when SCTLR_AFE
is set, as that bit indicates we should be using the simple AP
format.
It's unlikely this path is getting used. I don't see CR_AFE
getting used by Linux, so possibly not. If it had been, then
the check would have been wrong for all but AP[2:1] = 0b11.
Anyway, this should fix it up, in case it ever does get used.
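A sketch of the mapping from the simple AP[2:1] format to page protection
flags (illustrative; names and exact structure are not necessarily the QEMU
ones):

    static int simple_ap_to_rw_prot_is_user(int ap, bool is_user)
    {
        switch (ap) {
        case 0:  /* AP[2:1] == 0b00: privileged read/write only */
            return is_user ? 0 : PAGE_READ | PAGE_WRITE;
        case 1:  /* 0b01: read/write at any privilege level */
            return PAGE_READ | PAGE_WRITE;
        case 2:  /* 0b10: privileged read-only */
            return is_user ? 0 : PAGE_READ;
        case 3:  /* 0b11: read-only at any privilege level */
            return PAGE_READ;
        default:
            g_assert_not_reached();
        }
    }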
Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1426099139-14463-3-git-send-email-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Instead of mixing access permission checking with access permissions
to page protection flags translation, just do the translation, and
leave it to the caller to check the protection flags against the access
type. Also rename to ap_to_rw_prot to better describe the new behavior.
Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1426099139-14463-2-git-send-email-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add AArch32 to AArch64 register synchronization functions.
Replace manual register synchronization with new functions in
aarch64_cpu_do_interrupt() and HELPER(exception_return)().
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1423736974-14254-4-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The f64 exponent in HELPER(recpe_f64) should be compared to 2045 rather than
1023 (FPRecipEstimate in the ARMv8 spec). This fixes incorrect underflow
handling when
flushing denormals to zero in the FRECPE instructions operating on 64-bit
values.
Signed-off-by: Ildar Isaev <ild@inbox.ru>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch fixes the brace style in the code reindented in the
previous commit.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
A few of the oldest parts of the page-table-walk code have broken indent
(either hardcoded tabs or two-spaces). Reindent these sections.
For ease of review, this patch does not touch the brace style and
so is a whitespace-only change.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Now we have the mmu_idx in get_phys_addr(), use it correctly to
determine the behaviour of virtual to physical address translations,
rather than using just an is_user flag and the current CPU state.
Some TODO comments have been added to indicate where changes will
need to be made to add EL2 and 64-bit EL3 support.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
Make all the callers of get_phys_addr() pass it the correct
mmu_idx rather than just a simple "is_user" flag. This includes
properly decoding the AT/ATS system instructions; we include the
logic for handling all the opc1/opc2 cases because we'll need
them later for supporting EL2/EL3, even if we don't have the
regdef stanzas yet.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Instead of simply reusing ats_write() as the handler for both AArch32
and AArch64 address translation operations, use a different function
for each with the common code in a third function. This is necessary
because the semantics for selecting the right translation regime are
different; we are only getting away with sharing currently because
we don't support EL2 and only support EL3 in AArch32.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
We currently claim that for ARM the mmu_idx should simply be the current
exception level. However this isn't actually correct -- secure EL0 and EL1
should have separate indexes from non-secure EL0 and EL1 since their
VA->PA mappings may differ. We also will want an index for stage 2
translations when we properly support EL2.
Define and document all seven mmu index values that we require, and
pass the mmu index in the TB flags rather than exception level or
priv/user bit.
This change doesn't update the get_phys_addr() code, so our page
table walking still assumes a simplistic "user or priv?" model for
the moment.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
---
This leaves some odd gaps in the TB flags usage. I will circle
back and clean this up later (including moving the other common
flags like the singlestep ones to the top of the flags word),
but I didn't want to bloat this patchseries further.
Add assertion checking when cpreg structures are registered that they
either forbid raw-access attempts or at least make an attempt at
handling them. Also add an assert in the raw-accessor-of-last-resort,
to avoid silently doing a read or write from offset zero, which is
actually AArch32 CPU register r0.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1422282372-13735-3-git-send-email-peter.maydell@linaro.org
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
We currently mark ARM coprocessor/system register definitions with
the flag ARM_CP_NO_MIGRATE for two different reasons:
1) register is an alias on to state that's also visible via
some other register, and that other register is the one
responsible for migrating the state
2) register is not actually state at all (for instance the TLB
or cache maintenance operation "registers") and it makes no
sense to attempt to migrate it or otherwise access the raw state
This works fine for identifying which registers should be ignored
when performing migration, but we also use the same functions for
synchronizing system register state between QEMU and the kernel
when using KVM. In this case we don't want to try to sync state
into registers in category 2, but we do want to sync into registers
in category 1, because the kernel might have picked a different
one of the aliases as its choice for which one to expose for
migration. (In particular, on 32 bit hosts the kernel will
expose the state in the AArch32 version of the register, but
TCG's convention is to mark the AArch64 version as the version
to migrate, even if the CPU being emulated happens to be 32 bit,
so almost all system registers will hit this issue now that we've
added AArch64 system emulation.)
Fix this by splitting the NO_MIGRATE flag in two (ALIAS and NO_RAW)
corresponding to the two different reasons we might not want to
migrate a register. When setting up the TCG list of registers to
migrate we honour both flags; when populating the list from KVM,
only ignore registers which are NO_RAW.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
Message-id: 1422282372-13735-2-git-send-email-peter.maydell@linaro.org
[PMM: changed ARM_CP_NO_MIGRATE to ARM_CP_ALIAS on new SP_EL1 and
SP_EL2 reginfo stanzas since there was a (semantic) merge conflict
with the patchset that added those]
Added RVBAR_EL2 and RVBAR_EL3 CP register support. All RVBAR_EL# registers
point to the same location and only the highest EL version exists at any one
time.
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1422029835-4696-3-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Fix the RVBAR_EL1 CP register opc2 encoding from 2 to 1
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1422029835-4696-2-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Greg Bellows <greg.bellows@linaro.org>
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Merge of the v8_el2_cp_reginfo and el3_cp_reginfo ARMCPRegInfo lists.
Previously, some EL3 registers were restricted to the ARMv8 list under the
impression that they were not needed on ARMv7. However, this is not the case
as the ARMv7/32-bit variants rely on the ARMv8/64-bit variants to handle
migration and reset. For this reason they must always exist.
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Message-id: 1418406450-14961-1-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Added CP register info entries for the ARMv7 MAIR0/1 secure banks.
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1416242878-876-26-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When EL3 is running in AArch32 (or ARMv7 with Security Extensions)
FCSEIDR, CONTEXTIDR, TPIDRURW, TPIDRURO and TPIDRPRW have a secure
and a non-secure instance.
Signed-off-by: Fabian Aggeler <aggelerf@ethz.ch>
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1416242878-876-25-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When EL3 is running in AArch32 (or ARMv7 with Security Extensions)
VBAR has a secure and a non-secure instance, which are mapped to
VBAR_EL1 and VBAR_EL3.
Signed-off-by: Fabian Aggeler <aggelerf@ethz.ch>
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1416242878-876-24-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When EL3 is running in AArch32 (or ARMv7 with Security Extensions)
PAR has a secure and a non-secure instance.
Signed-off-by: Fabian Aggeler <aggelerf@ethz.ch>
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1416242878-876-23-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When EL3 is running in AArch32 (or ARMv7 with Security Extensions)
IFAR and DFAR have a secure and a non-secure instance.
Signed-off-by: Fabian Aggeler <aggelerf@ethz.ch>
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1416242878-876-22-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When EL3 is running in AArch32 (or ARMv7 with Security Extensions)
DFSR has a secure and a non-secure instance.
Signed-off-by: Fabian Aggeler <aggelerf@ethz.ch>
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1416242878-876-21-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When EL3 is running in AArch32 (or ARMv7 with Security Extensions)
IFSR has a secure and a non-secure instance. Adds IFSR32_EL2 definition and
storage.
Signed-off-by: Fabian Aggeler <aggelerf@ethz.ch>
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1416242878-876-20-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When EL3 is running in AArch32 (or ARMv7 with Security Extensions)
DACR has a secure and a non-secure instance. Adds definition for DACR32_EL2.
Signed-off-by: Fabian Aggeler <aggelerf@ethz.ch>
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1416242878-876-19-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Adds secure and non-secure bank register support for TTBCR.
Added a new struct to compartmentalize the TCR data and masks. Removed the old
tcr/ttbcr data and added a 4-element array of the new structs in cp15. This
allows for one entry per EL. Added a CP register definition for TCR_EL3.
Signed-off-by: Fabian Aggeler <aggelerf@ethz.ch>
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1416242878-876-18-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Adds secure and non-secure bank register support for TTBR0 and TTBR1.
Changes include adding secure and non-secure instances of ttbr0 and ttbr1 as
well as a CP register definition for TTBR0_EL3. Added a union containing
both EL based array fields and secure and non-secure fields mapped to them.
Updated accesses to use A32_BANKED_CURRENT_REG_GET macro.
Signed-off-by: Fabian Aggeler <aggelerf@ethz.ch>
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1416242878-876-17-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add checks of SCR AW/FW bits when performing writes of CPSR. These SCR bits
are used to control whether the CPSR masking bits can be adjusted from
non-secure state.
Signed-off-by: Fabian Aggeler <aggelerf@ethz.ch>
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1416242878-876-15-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use MVBAR register as exception vector base address for
exceptions taken to CPU monitor mode.
Signed-off-by: Sergey Fedorov <s.fedorov@samsung.com>
Signed-off-by: Fabian Aggeler <aggelerf@ethz.ch>
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1416242878-876-13-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Added CP register definitions for SDER and SDER32_EL3 as well as cp15.sder for
register storage.
Signed-off-by: Sergey Fedorov <s.fedorov@samsung.com>
Signed-off-by: Fabian Aggeler <aggelerf@ethz.ch>
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1416242878-876-12-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Define a new ARM CP register info list for the ARMv7 Security Extension
feature. Register that list only for ARM cores with Security Extension/EL3
support. Move the AArch32 SCR into the Security Extension register group.
Signed-off-by: Sergey Fedorov <s.fedorov@samsung.com>
Signed-off-by: Fabian Aggeler <aggelerf@ethz.ch>
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1416242878-876-9-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Prepare for cp register banking by inserting every cp register twice,
once for secure world and once for non-secure world.
Signed-off-by: Fabian Aggeler <aggelerf@ethz.ch>
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1416242878-876-8-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Added an additional NS bit to the CPREG hash encoding. Updated the hash lookup
sites to specify the NS bit, which is currently always set to non-secure.
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1416242878-876-7-git-send-email-greg.bellows@linaro.org
[PMM: fix uses of ENCODE_CP_REG in kvm32.c to add extra argument]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Adds a dedicated function and a lookup table for determining the target
exception level of IRQ and FIQ exceptions. The lookup table is taken from the
ARMv7 and ARMv8 specification exception routing tables.
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1416242878-876-3-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The ARMv8 address translation system defines that a page table walk
starts at a level which depends on the translation granule size
and the number of bits of virtual address that need to be resolved.
Where the translation granule is 64KB and the guest sets the
TCR.TxSZ field to between 35 and 39, it's actually possible to
start at level 3 (the final level). QEMU's implementation failed
to handle this case, and so we would set level to 2 and behave
incorrectly (including invoking the C undefined behaviour of
shifting left by a negative number). Correct the code that
determines the starting level to deal with the start-at-3 case,
by replacing the if-else ladder with an expression derived from
the ARM ARM pseudocode version.
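For reference, the pseudocode expression is
startlevel = 4 - RoundUp((inputsize - grainsize) / stride), where
inputsize = 64 - TxSZ, grainsize is 16 for a 64KB granule and the stride is
grainsize - 3 = 13 (the exact form of the QEMU expression may differ). With
TxSZ = 36 this gives inputsize = 28 and startlevel = 4 - RoundUp(12/13) = 3,
i.e. the walk both starts and ends at level 3.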
This error was detected by the Coverity scan, which spotted
the potential shift by a negative number.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1415890569-7454-1-git-send-email-peter.maydell@linaro.org
Implements the SMC instruction in AArch32 using the A32 syndrome. When the SMC
instruction is executed from monitor CPU mode, the SCR.NS bit is reset to 0.
Signed-off-by: Sergey Fedorov <s.fedorov@samsung.com>
Signed-off-by: Fabian Aggeler <aggelerf@ethz.ch>
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Message-id: 1413910544-20150-7-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Renamed the arm_current_pl CPU function to more accurately represent that it
returns the ARMv8 EL rather than ARMv7 PL.
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1413910544-20150-5-git-send-email-greg.bellows@linaro.org
[PMM: fixed a minor merge resolution error in a couple of hunks]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The DZP bit in the DCZID system register should be set if
the control bits which prohibit use of the DC ZVA instruction
have been set (it stands for Data Zero Prohibited). However
we had the sense of the test inverted; fix this so that the
bit reads correctly.
To avoid this regressing the behaviour of the user-mode
emulator, we must set the DZE bit in the SCTLR for that
config so that userspace continues to see DZP as zero (it
was getting the correct result by accident previously).
Reported-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Christopher Covington <cov@codeaurora.org>
Message-id: 1412959792-20708-1-git-send-email-peter.maydell@linaro.org
Add support for handling PSCI calls in system emulation. Both version
0.1 and 0.2 of the PSCI spec are supported. Platforms can enable support
by setting the "psci-conduit" QOM property on the cpus to SMC or HVC
emulation and having a PSCI binding in their dtb.
Signed-off-by: Rob Herring <rob.herring@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1412865028-17725-7-git-send-email-peter.maydell@linaro.org
[PMM: made system reset/off PSCI functions power down the CPU so
we obey the PSCI API requirement never to return from them;
rearranged how the code is plumbed into the exception system,
so that we split "is this a valid call?" from "do the call"]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
User mode emulation should never get interrupts and thus should not
use the system emulation exception handler function. Remove the reference,
and '#ifndef CONFIG_USER_ONLY' the function itself as well, so that we can add
system mode only functionality to it.
Signed-off-by: Rob Herring <rob.herring@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1412865028-17725-3-git-send-email-peter.maydell@linaro.org
This only implements the external delivery method via the GIC.
Acked-by: Greg Bellows <greg.bellows@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1411718914-6608-12-git-send-email-edgar.iglesias@gmail.com
[PMM: adjusted following cpu-exec refactoring]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1411718914-6608-11-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Introduce new_el and new_mode in preparation for future patches
that add support for taking exceptions to and from EL2 and 3.
No functional change.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1411718914-6608-4-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1411718914-6608-3-git-send-email-edgar.iglesias@gmail.com
[PMM: apply offsetoflow32() to correct regdef]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1411718914-6608-2-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
At the moment we try to handle c15_cpar with the strategy of:
* emit generated code which makes assumptions about its value
* when the register value changes call tb_flush() to throw
away the now-invalid generated code
This works because XScale CPUs are always uniprocessor, but
it's confusing because it suggests that the same approach can
be taken for other registers. It also means we do a tb_flush()
on CPU reset, which makes multithreaded linux-user binaries
even more likely to fail than would otherwise be the case.
Replace it with a combination of TB flags for the access
checks done on cp0/cp1 for the XScale and iwMMXt instructions,
plus a runtime check for cp2..cp13 coprocessor accesses.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1411056959-23070-1-git-send-email-peter.maydell@linaro.org
This patch adds support for setting guest breakpoints
based on values the guest writes to the DBGBVR and DBGBCR
registers. (It doesn't include the code to handle when
these breakpoints fire, so has no guest-visible effect.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1410523465-13400-2-git-send-email-peter.maydell@linaro.org
The ARM architecture defines that the "IS" variants of TLB
maintenance operations must affect all TLBs in the Inner Shareable
domain, which for us means all CPUs. We were incorrectly implementing
these to only affect the current CPU, which meant that SMP TCG
operation was unstable.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1410274883-9578-3-git-send-email-peter.maydell@linaro.org
Cc: qemu-stable@nongnu.org
When we implemented ARMv8 in QEMU we retained our legacy loose
wildcarded decoding of the TLB maintenance operations for v7
and earlier CPUs and provided the correct stricter decode for
v8. However the loose decode is in fact wrong for v7MP, because
it doesn't correctly implement the operations which must apply
to every CPU in the Inner Shareable domain.
Move the legacy wildcarding from the not_v8 reginfo array
into the not_v7 array, and move the strictly decoded operations
from the v8 reginfo to v7 or v7mp arrays as appropriate.
Cache and TLB lockdown legacy wildcarding remains in the
not_v8 array for the moment.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1410274883-9578-2-git-send-email-peter.maydell@linaro.org
Cc: qemu-stable@nongnu.org
Implement debug registers DBGVCR, OSDLR_EL1 and MDCCSR_EL0
(as dummy or limited-functionality). 32-bit Linux kernels will
access these at startup so they are required for breakpoints
and watchpoints to be supported.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
MDSCR_EL1 has actual functionality now; remove the out of date
comment that claims it is a dummy implementation.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For debug exceptions taken to AArch32 we have to set the
DBGDSCR.MOE (Method Of Entry) bits; we can identify the
kind of debug exception from the information in
exception.syndrome.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move the utility function extended_addresses_enabled() into
internals.h; we're going to need to call it from op_helper.c.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Implement support for setting QEMU watchpoints based on the
values the guest writes to the ARM architected watchpoint
registers. (We do not yet report the firing of the watchpoints
to the guest, so they will just be ignored.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is the function that is called when writing to the
PMCCFILTR_EL0 register.
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 73da3da6404855b17d5ae82975a32ff3a4dcae3d.1409025949.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the old PMCCNTR code and replace it with calls to the new
pmccntr_sync() and arm_ccnt_enabled() functions.
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 693a6e437d915c2195fd3dc7303f384ca538b7bf.1409025949.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is used to synchronise the PMCCNTR counter and swap its
state between enabled and disabled if required. It must always
be called twice, both before and after any logic that could
change the state of the PMCCNTR counter.
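The intended call pattern, sketched (the writefn shown and its body are
illustrative only):

    static void pmcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
                           uint64_t value)
    {
        pmccntr_sync(env);   /* fold the cycles counted so far into PMCCNTR */
        /* ... update the PMCR bits that can enable/disable the counter ... */
        pmccntr_sync(env);   /* re-arm the counter state under the new settings */
    }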
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 62811d4c0f7b1384f7aab62ea2fcfda3dcb0db50.1409025949.git.peter.crosthwaite@xilinx.com
[PMM: fixed minor typos in pmccntr_sync doc comment]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Include a helper function to determine if the CCNT counter
is enabled.
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: e1a64f17a756e06c8bda8238ad4826d705049f7a.1409025949.git.peter.crosthwaite@xilinx.com
[ PC changes
* Remove EL based checks
]
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch adds support for the ARMv8 version of the PMCCNTR and
related registers. It also starts to implement the PMCCFILTR_EL0
register.
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: b5d1094764a5416363ee95216799b394ecd011e8.1409025949.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The register is now 64-bit; however, a 32-bit write to the register
should leave the higher bits unchanged. The open-coded write handler
does not implement this, so we need to read-modify-write accordingly.
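A sketch of the read-modify-write for the 32-bit view (helper names are
illustrative):

    static void pmccntr_write32(CPUARMState *env, const ARMCPRegInfo *ri,
                                uint64_t value)
    {
        uint64_t cur_val = pmccntr_read(env, ri);

        /* Replace only the low 32 bits; the high half is preserved. */
        pmccntr_write(env, ri, deposit64(cur_val, 0, 32, value));
    }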
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Reviewed-by: Alistair Francis <alistair23@gmail.com>
Message-id: ec350573424bb2adc1701c3b9278d26598e2f2d1.1409025949.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This makes the PMCCNTR register 64-bit to allow for the
64-bit ARMv8 version.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Message-id: 6c5bac5fd0ea54963b1fc0e7f9464909f2e19a73.1409025949.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Now that all the new code to support single-stepping is in
place, wire up the guest-visible MDSCR_EL1, so the guest
can enable single-stepping.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
When an exception is taken to AArch32, we must clear the PSTATE.SS
bit for the exception handler, and must also ensure that the SS bit
is not set in the value saved to SPSR_<mode>. Achieve both of these
aims by clearing the bit in uncached_cpsr before saving it to the SPSR.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Allow each CPU type to specify the value for the debug ID
registers, by putting them in the ARMCPU struct, and use
the resulting information to only expose the correct number
of watchpoint and breakpoint registers for the CPU.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Bring the 32 bit and 64 bit views of the debug registers into
line by providing the same set of registers in both cases.
(This still isn't a complete set, but it is consistent.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Currently the STATE_BOTH shorthand for allowing a single reginfo struct
to define handling for both AArch32 and AArch64 views of a register
only permits this where the AArch32 view is in cp15. It turns out that
the debug registers in cp14 also have neatly lined up encodings;
allow these also to share reginfo structs by permitting a STATE_BOTH
reginfo to specify the .cp field (and continue to default to 15 if
it is not specified).
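For example, a debug register shared between the two views can now be
described by a single entry along these lines (sketch; the encoding shown is
the architectural MDSCR_EL1/DBGDSCRext one, the other fields are illustrative):

    { .name = "MDSCR_EL1", .state = ARM_CP_STATE_BOTH, .cp = 14,
      .opc0 = 2, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 2,
      .access = PL1_RW,
      .fieldoffset = offsetof(CPUARMState, cp15.mdscr_el1) },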
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
At the moment we have a mixed set of mostly dummy register
definitions for various debug related registers which have
been added piecemeal in order to get Linux kernels to boot.
In preparation for actually implementing debug support,
bring them all together into one place.
This commit doesn't change behaviour: we still expose
exactly the same registers and behaviour to the guest
in all configurations.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
According to the ARM ARM we weren't correctly flushing the TLB entries
where bits 63:56 didn't match bit 55 of the virtual address. This
exposed a problem when we switched QEMU's internal TARGET_PAGE_BITS to
12 for aarch64.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1406733627-24255-3-git-send-email-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Otherwise we break quickly when we change TARGET_PAGE_SIZE.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1406733627-24255-2-git-send-email-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Static code analyzers complain about a dubious & operation used for a
boolean value. The code does not test the PSTATE_SP bit as it should.
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Message-id: 1406359601-25583-1-git-send-email-sw@weilnetz.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>