Commit Graph

693 Commits

Richard Henderson
10e0b33c67 target/arm: Use tcg_gen_gvec_dup_i64 for LD[1-4]R
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:51:36 +01:00
Richard Henderson
7108e255c2 target/arm: Don't call tcg_clear_temp_count
This is done generically in translator_loop.

Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20181011205206.3552-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:51:36 +01:00
Richard Henderson
a7d8143aed target/arm: Hoist address increment for vector memory ops
This can reduce the number of opcodes required for certain
complex forms of load-multiple (e.g. ld4.16b).

Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 20181011205206.3552-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:51:36 +01:00
Peter Maydell
4be42f4013 target/arm: Report correct syndrome for FP/SIMD traps to Hyp mode
For traps of FP/SIMD instructions to AArch32 Hyp mode, the syndrome
provided in HSR has more information than is reported to AArch64.
Specifically, there are extra fields TA and coproc which indicate
whether the trapped instruction was FP or SIMD. Add this extra
information to the syndromes we construct, and mask it out when
taking the exception to AArch64.
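
As a rough illustration of the shape of the change (hedged sketch: the
constants and helper names below are illustrative, not the actual QEMU
symbols), the syndrome gains the TA/coproc information on the AArch32
path and loses it again on the AArch64 path:

    #include <stdbool.h>
    #include <stdint.h>

    #define SYN_EC_SHIFT      26
    #define SYN_EC_FP_ACCESS  0x07u
    #define SYN_TA            (1u << 5)   /* set for trapped Advanced SIMD */
    #define SYN_COPROC_MASK   0xfu        /* 0xA for FP, 0 for SIMD */

    static uint32_t syn_fp_simd_trap_a32(bool is_simd)
    {
        uint32_t syn = SYN_EC_FP_ACCESS << SYN_EC_SHIFT;
        return syn | (is_simd ? SYN_TA : 0xau);
    }

    static uint32_t syn_for_aarch64(uint32_t syn)
    {
        /* ESR_ELx does not define TA/coproc for this EC: mask them out */
        return syn & ~(SYN_TA | SYN_COPROC_MASK);
    }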

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-11-peter.maydell@linaro.org
2018-10-24 07:51:36 +01:00
Peter Maydell
2ed08180db target/arm: Get IL bit correct for v7 syndrome values
For the v7 version of the Arm architecture, the IL bit in
syndrome register values where the field is not valid was
defined to be UNK/SBZP. In v8 this is RES1, which is what
QEMU currently implements. Handle the desired v7 behaviour
by squashing the IL bit for the affected cases:
 * EC == EC_UNCATEGORIZED
 * prefetch aborts
 * data aborts where ISV is 0

(The fourth case listed in the v8 Arm ARM DDI 0487C.a in
section G7.2.70, "illegal state exception", can't happen
on a v7 CPU.)

This deals with a corner case noted in a comment.
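
A minimal sketch of the squashing (hedged: the EC values and the IL/ISV
bit positions follow the architecture, but the helper below is
illustrative rather than the actual QEMU code):

    #include <stdbool.h>
    #include <stdint.h>

    #define ARM_EL_IL         (1u << 25)
    #define EC_UNCATEGORIZED  0x00u
    #define EC_PREFETCH_ABORT 0x20u   /* 0x21 for same-EL */
    #define EC_DATA_ABORT     0x24u   /* 0x25 for same-EL */
    #define SYN_ISV           (1u << 24)

    static uint32_t v7_squash_invalid_il(uint32_t syn)
    {
        uint32_t ec = syn >> 26;
        bool squash =
            ec == EC_UNCATEGORIZED ||
            ec == EC_PREFETCH_ABORT || ec == EC_PREFETCH_ABORT + 1 ||
            ((ec == EC_DATA_ABORT || ec == EC_DATA_ABORT + 1) &&
             !(syn & SYN_ISV));
        return squash ? (syn & ~ARM_EL_IL) : syn;
    }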

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-10-peter.maydell@linaro.org
2018-10-24 07:51:36 +01:00
Peter Maydell
64b91e3f89 target/arm: New utility function to extract EC from syndrome
Create and use a utility function to extract the EC field
from a syndrome, rather than open-coding the shift.
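
The utility is essentially a one-line shift; a sketch of its shape
(assuming, per the architecture, that EC occupies syndrome bits [31:26]):

    #include <stdint.h>

    static inline uint32_t syn_get_ec(uint32_t syn)
    {
        return syn >> 26;   /* in QEMU the 26 would be a named constant */
    }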

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-9-peter.maydell@linaro.org
2018-10-24 07:51:36 +01:00
Peter Maydell
eadb2febf0 target/arm: Implement HCR.PTW
If the HCR_EL2 PTW virtualization configuration register bit
is set, then this means that a stage 2 Permission fault must
be generated if a stage 1 translation table access is made
to an address that is mapped as Device memory in stage 2.
Implement this.
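
Conceptually the check sits on the stage 1 table-walk path; a hedged,
self-contained sketch (types and names made up, only HCR_EL2.PTW's bit
position is architectural):

    #include <stdbool.h>
    #include <stdint.h>

    #define HCR_PTW (1ull << 2)

    /* Return true if this stage 1 page-table-walk access must raise a
     * stage 2 Permission fault because its stage 2 mapping is Device. */
    static bool s1_ptw_faults(uint64_t hcr_el2, bool s2_mapping_is_device)
    {
        return (hcr_el2 & HCR_PTW) && s2_mapping_is_device;
    }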

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-8-peter.maydell@linaro.org
2018-10-24 07:51:36 +01:00
Peter Maydell
8a0fc3a29f target/arm: Implement HCR.VI and VF
The HCR_EL2 VI and VF bits are supposed to track whether there is
a pending virtual IRQ or virtual FIQ. For QEMU we store the
pending VIRQ/VFIQ status in cs->interrupt_request, so this means:
 * if the register is read we must get these bit values from
   cs->interrupt_request
 * if the register is written then we must write the bit
   values back into cs->interrupt_request
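
A condensed sketch of those two paths (hedged: the real code lives in
the HCR_EL2 register accessors and takes CPUARMState/ARMCPRegInfo
arguments; the simplified signatures here are illustrative, while
CPU_INTERRUPT_VIRQ/VFIQ, cpu_interrupt() and cpu_reset_interrupt() are
existing QEMU interfaces):

    /* On read: synthesize VI/VF from the pending-interrupt state. */
    static uint64_t hcr_read_vi_vf(CPUState *cs, uint64_t stored_hcr)
    {
        uint64_t ret = stored_hcr & ~(HCR_VI | HCR_VF);

        if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
            ret |= HCR_VI;
        }
        if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) {
            ret |= HCR_VF;
        }
        return ret;
    }

    /* On write: push VI/VF back into the pending-interrupt state. */
    static void hcr_write_vi_vf(CPUState *cs, uint64_t value)
    {
        if (value & HCR_VI) {
            cpu_interrupt(cs, CPU_INTERRUPT_VIRQ);
        } else {
            cpu_reset_interrupt(cs, CPU_INTERRUPT_VIRQ);
        }
        if (value & HCR_VF) {
            cpu_interrupt(cs, CPU_INTERRUPT_VFIQ);
        } else {
            cpu_reset_interrupt(cs, CPU_INTERRUPT_VFIQ);
        }
    }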

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-7-peter.maydell@linaro.org
2018-10-24 07:51:36 +01:00
Peter Maydell
636540e9c4 target/arm: ISR_EL1 bits track virtual interrupts if IMO/FMO set
The A/I/F bits in ISR_EL1 should track the virtual interrupt
status, not the physical interrupt status, if the associated
HCR_EL2.AMO/IMO/FMO bit is set. Implement this, rather than
always showing the physical interrupt status.

We don't currently implement anything to do with external
aborts, so this applies only to the I and F bits (though it
ought to be possible for the outer guest to present a virtual
external abort to the inner guest, even if QEMU doesn't
emulate physical external aborts, so there is missing
functionality in this area).
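
A condensed sketch of the resulting read behaviour (hedged: the real
code is the ISR_EL1 register read hook; HCR_IMO/FMO, CPSR_I/F and the
CPU_INTERRUPT_* flags are existing QEMU names, the function shape here
is simplified):

    static uint64_t isr_read_sketch(CPUState *cs, uint64_t hcr_el2)
    {
        uint64_t ret = 0;

        if (hcr_el2 & HCR_IMO) {
            if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
                ret |= CPSR_I;   /* virtual IRQ pending */
            }
        } else if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
            ret |= CPSR_I;       /* physical IRQ pending */
        }

        if (hcr_el2 & HCR_FMO) {
            if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) {
                ret |= CPSR_F;   /* virtual FIQ pending */
            }
        } else if (cs->interrupt_request & CPU_INTERRUPT_FIQ) {
            ret |= CPSR_F;       /* physical FIQ pending */
        }
        return ret;
    }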

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-6-peter.maydell@linaro.org
2018-10-24 07:51:36 +01:00
Peter Maydell
9d1bab337c target/arm: Implement HCR.DC
The HCR.DC virtualization configuration register bit has the
following effects:
 * SCTLR.M behaves as if it is 0 for all purposes except
   direct reads of the bit
 * HCR.VM behaves as if it is 1 for all purposes except
   direct reads of the bit
 * the memory type produced by the first stage of the EL1&EL0
   translation regime is Normal Non-Shareable,
   Inner Write-Back Read-Allocate Write-Allocate,
   Outer Write-Back Read-Allocate Write-Allocate.

Implement this behaviour.
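
The first two rules reduce to adjusting the "is stage 1 translation
enabled" and "is stage 2 enabled" predicates; a hedged sketch (SCTLR_M,
HCR_VM and HCR_DC are existing QEMU bit definitions, the functions are
illustrative; the memory-type rule is applied where the stage 1
attributes are produced):

    static bool stage1_translation_enabled(uint64_t sctlr, uint64_t hcr)
    {
        /* SCTLR.M still reads back normally, but translation behaves
         * as if M were 0 whenever HCR.DC is set. */
        return (sctlr & SCTLR_M) && !(hcr & HCR_DC);
    }

    static bool stage2_translation_enabled(uint64_t hcr)
    {
        /* HCR.VM still reads back normally, but stage 2 behaves as if
         * VM were 1 whenever HCR.DC is set. */
        return (hcr & (HCR_VM | HCR_DC)) != 0;
    }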

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-5-peter.maydell@linaro.org
2018-10-24 07:51:34 +01:00
Peter Maydell
b4ab8ce98b target/arm: Implement HCR.FB
The HCR.FB virtualization configuration register bit requests that
TLB maintenance, branch predictor invalidate-all and icache
invalidate-all operations performed in NS EL1 should be upgraded
from "local CPU only to "broadcast within Inner Shareable domain".
For QEMU we NOP the branch predictor and icache operations, so
we only need to upgrade the TLB invalidates:
 AArch32 TLBIALL, TLBIMVA, TLBIASID, DTLBIALL, DTLBIMVA, DTLBIASID,
         ITLBIALL, ITLBIMVA, ITLBIASID, TLBIMVAA, TLBIMVAL, TLBIMVAAL
 AArch64 TLBI VMALLE1, TLBI VAE1, TLBI ASIDE1, TLBI VAAE1,
         TLBI VALE1, TLBI VAALE1
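
For each affected TLBI write handler this amounts to picking the
broadcast flush when FB applies; a hedged sketch (tlb_flush() and
tlb_flush_all_cpus_synced() are existing QEMU functions, the wrapper
around them is illustrative):

    static void tlbiall_ns_el1(CPUState *cs, uint64_t hcr_el2)
    {
        if (hcr_el2 & HCR_FB) {
            tlb_flush_all_cpus_synced(cs);   /* broadcast, Inner Shareable */
        } else {
            tlb_flush(cs);                   /* local CPU only */
        }
    }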

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-4-peter.maydell@linaro.org
2018-10-24 07:51:34 +01:00
Peter Maydell
affdb64d84 target/arm: Make switch_mode() file-local
The switch_mode() function is defined in target/arm/helper.c and used
only in that file and nowhere else, so we can make it file-local
rather than global.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-3-peter.maydell@linaro.org
2018-10-24 07:51:33 +01:00
Peter Maydell
81e3728407 target/arm: Improve debug logging of AArch32 exception return
For AArch32, exception return happens through certain kinds
of CPSR write. We don't currently have any CPU_LOG_INT logging
of these events (unlike AArch64, where we log in the ERET
instruction). Add some suitable logging.

This will log exception returns like this:
Exception return from AArch32 hyp to usr PC 0x80100374

paralleling the existing logging in the exception_return
helper for AArch64 exception returns:
Exception return from AArch64 EL2 to AArch64 EL0 PC 0x8003045c
Exception return from AArch64 EL2 to AArch32 EL0 PC 0x8003045c

(Note that an AArch32 exception return can only be
AArch32->AArch32, never to AArch64.)
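
The message is emitted with the usual CPU_LOG_INT mask; roughly (hedged
sketch: qemu_log_mask() and CPU_LOG_INT are real, the mode-name helper
and exact format string are condensed, not quoted from the patch):

    qemu_log_mask(CPU_LOG_INT,
                  "Exception return from AArch32 %s to %s PC 0x%" PRIx32 "\n",
                  mode_name(old_mode), mode_name(new_mode), env->regs[15]);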

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-2-peter.maydell@linaro.org
2018-10-24 07:51:32 +01:00
Richard Henderson
5763190fa8 target/arm: Convert v8.2-fp16 from feature bit to aa64pfr0 test
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-9-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:51:31 +01:00
Richard Henderson
cd208a1c39 target/arm: Convert sve from feature bit to aa64pfr0 test
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:51:29 +01:00
Richard Henderson
09cbd50198 target/arm: Convert jazelle from feature bit to isar1 test
Having V6 alone imply jazelle was wrong for cortex-m0.
Change to an assertion for V6 & !M.

This was harmless, because the only place we tested ARM_FEATURE_JAZELLE
was for 'bxj' in disas_arm(), which is unreachable for M-profile cores.
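
The replacement test reads the ID register field directly; a hedged
sketch in the style of the other isar_feature accessors (exact names
may differ slightly from the patch):

    static inline bool isar_feature_jazelle(const ARMISARegisters *id)
    {
        return FIELD_EX32(id->id_isar1, ID_ISAR1, JAZELLE) != 0;
    }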

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:50:17 +01:00
Richard Henderson
7e0cf8b47f target/arm: Convert division from feature bits to isar0 tests
Both arm and thumb2 division are controlled by the same ISAR field,
which takes care of the arm implies thumb case.  Having M imply
thumb2 division was wrong for cortex-m0, which is v6m and does not
have thumb2 at all, much less thumb2 division.
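
A hedged sketch of the two resulting accessors (same caveat on exact
names; the field semantics are that ID_ISAR0.Divide == 1 means
Thumb-only division and >= 2 adds Arm division):

    static inline bool isar_feature_thumb_div(const ARMISARegisters *id)
    {
        return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) != 0;
    }

    static inline bool isar_feature_arm_div(const ARMISARegisters *id)
    {
        return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) > 1;
    }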

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:50:16 +01:00
Richard Henderson
962fcbf2ef target/arm: Convert v8 extensions from feature bits to isar tests
Most of the v8 extensions are self-contained within the ISAR
registers and are not implied by other feature bits, which
makes them the easiest to convert.
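
Each extension becomes a small accessor on the ID registers; one
representative pair as a hedged sketch (AES chosen arbitrarily, exact
names may differ from the patch):

    static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
    {
        return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
    }

    static inline bool isar_feature_aa64_aes(const ARMISARegisters *id)
    {
        return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, AES) != 0;
    }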

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:50:16 +01:00
Richard Henderson
5256df880d target/arm: V8M should not imply V7VE
Instantiating mps2-an505 (cortex-m33) will fail make check when
V7VE asserts that ID_ISAR0.Divide includes ARM division.  It is
also wrong to include ARM_FEATURE_LPAE.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:50:16 +01:00
Richard Henderson
47576b94af target/arm: Move some system registers into a substructure
Create struct ARMISARegisters, to be accessed during translation.
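
Roughly, the new member groups the read-only ID registers the
translator consults (hedged sketch; the field list here is abridged):

    struct ARMISARegisters {
        uint32_t id_isar0;
        uint32_t id_isar1;
        uint32_t id_isar2;
        uint32_t id_isar3;
        uint32_t id_isar4;
        uint32_t id_isar5;
        uint32_t mvfr0;
        uint32_t mvfr1;
        uint32_t mvfr2;
        uint64_t id_aa64isar0;
        uint64_t id_aa64isar1;
        uint64_t id_aa64pfr0;
        uint64_t id_aa64pfr1;
    } isar;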

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:50:16 +01:00
Dongjiu Geng
202ccb6bab target/arm: Add support for VCPU event states
This patch extends the qemu-kvm state sync logic with support for
KVM_GET/SET_VCPU_EVENTS, giving access to the previously missing SError
exception state, and enables migration of that state.

The SError exception state consists of the SError pending flag and the
ESR value; kvm_put/get_vcpu_events() are called when the system
registers are set or read. On migration, if the source machine has an
SError pending, QEMU performs the migration regardless of whether the
target machine supports specifying the guest ESR value, because a
target that does not support that can still inject the SError with a
zero ESR value.
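
A condensed sketch of the put/get pair (hedged: kvm_vcpu_ioctl() and the
KVM_GET/SET_VCPU_EVENTS ioctls are real, but the env->serror field names
and the capability flag below are paraphrased from the patch, and error
handling is omitted):

    static int kvm_put_vcpu_events_sketch(ARMCPU *cpu)
    {
        CPUARMState *env = &cpu->env;
        struct kvm_vcpu_events events = {};

        events.exception.serror_pending = env->serror.pending;
        if (cap_has_inject_serror_esr) {     /* target supports guest ESR */
            events.exception.serror_has_esr = env->serror.has_esr;
            events.exception.serror_esr = env->serror.esr;
        }
        return kvm_vcpu_ioctl(CPU(cpu), KVM_SET_VCPU_EVENTS, &events);
    }

    static int kvm_get_vcpu_events_sketch(ARMCPU *cpu)
    {
        CPUARMState *env = &cpu->env;
        struct kvm_vcpu_events events = {};
        int ret = kvm_vcpu_ioctl(CPU(cpu), KVM_GET_VCPU_EVENTS, &events);

        if (ret == 0) {
            env->serror.pending = events.exception.serror_pending;
            env->serror.has_esr = events.exception.serror_has_esr;
            env->serror.esr = events.exception.serror_esr;
        }
        return ret;
    }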

Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1538067351-23931-3-git-send-email-gengdongjiu@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-24 07:50:16 +01:00
Richard Henderson
62823083b8 target/arm: Check HAVE_CMPXCHG128 at translate time
Reviewed-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-10-18 19:46:53 -07:00
Richard Henderson
1ec182c333 target/arm: Convert to HAVE_CMPXCHG128
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-10-18 19:46:53 -07:00
Peter Maydell
ab44c7b71f target/arm: Initialize ARMMMUFaultInfo in v7m_stack_read/write
The get_phys_addr() functions take a pointer to an ARMMMUFaultInfo
struct, which they fill in only if a fault occurs. This means that
the caller must always zero-initialize the struct before passing
it in. We forgot to do this in v7m_stack_read() and v7m_stack_write().
Correct the error.
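
The fix is simply to zero-initialize the struct where it is declared,
e.g. (sketch):

    ARMMMUFaultInfo fi = {};   /* was: ARMMMUFaultInfo fi; */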

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011172057.9466-1-peter.maydell@linaro.org
2018-10-16 17:14:55 +01:00
Aaron Lindsay
599b71e277 target/arm: Mask PMOVSR writes based on supported counters
This is an amendment to my earlier patch:
    commit 7ece99b17e
    Author: Aaron Lindsay <alindsay@codeaurora.org>
    Date:   Thu Apr 26 11:04:39 2018 +0100

	target/arm: Mask PMU register writes based on PMCR_EL0.N

Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181010203735.27918-3-aclindsa@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-16 17:14:55 +01:00
Aaron Lindsay
fc5f6856a0 target/arm: Mark PMINTENCLR and PMINTENCLR_EL1 accesses as possibly doing IO
I previously fixed this for PMINTENSET_EL1, but missed these.

Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Signed-off-by: Aaron Lindsay <aclindsa@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181010203735.27918-2-aclindsa@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-16 17:14:55 +01:00
Edgar E. Iglesias
f11b452b95 target/arm: Add the Cortex-A72
Add the ARM Cortex-A72.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20181011021931.4249-11-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-16 17:14:55 +01:00
Edgar E. Iglesias
86278c33d1 target-arm: powerctl: Enable HVC when starting CPUs to EL2
When QEMU provides the equivalent of the EL3 firmware, we
need to enable HVCs in scr_el3 when turning on CPUs that
target EL2.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20181011021931.4249-10-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-16 17:14:55 +01:00
Richard Henderson
37bdda89eb target/arm: Fix cortex-a7 id_isar0
The incorrect value advertised only thumb2 div without arm div.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181008212205.17752-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-16 16:16:42 +01:00
Richard Henderson
aaab8f3400 target/arm: Align cortex-r5 id_isar0
The missing nibble made it more difficult to read.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181008212205.17752-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-16 16:16:42 +01:00
Richard Henderson
a62e62af9f target/arm: Define fields of ISAR registers
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181008212205.17752-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-16 16:16:42 +01:00
Richard Henderson
9a05f7b674 target/arm: Fix aarch64_sve_change_el wrt EL0
At present we assert:

  arm_el_is_aa64: Assertion `el >= 1 && el <= 3' failed.

The comment in arm_el_is_aa64 explains why asking about EL0 without
extra information is impossible.  Add an extra argument to provide
it from the surrounding context.
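
Sketch of the changed signature and how the extra argument is used
(hedged; the body is condensed to the relevant lines):

    void aarch64_sve_change_el(CPUARMState *env, int old_el,
                               int new_el, bool el0_a64)
    {
        /* For EL0 the caller must tell us whether it is AArch64. */
        bool old_a64 = old_el ? arm_el_is_aa64(env, old_el) : el0_a64;
        bool new_a64 = new_el ? arm_el_is_aa64(env, new_el) : el0_a64;

        /* ... use old_a64/new_a64 when trimming the SVE state ... */
    }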

Fixes: 0ab5953b00
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181008212205.17752-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-16 16:16:42 +01:00
Peter Maydell
167765f073 target/arm: Add v8M stack checks for MSR to SP_NS
Updating the NS stack pointer via MSR to SP_NS should include
a check whether the new SP value is below the stack limit.
No other kinds of update to the various stack pointer and
limit registers via MSR should perform a check.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-14-peter.maydell@linaro.org
2018-10-08 14:55:05 +01:00
Peter Maydell
8a954faf54 target/arm: Add v8M stack checks for VLDM/VSTM
Add the v8M stack checks for the VLDM/VSTM
(aka VPUSH/VPOP) instructions. This code is currently
unreachable because we haven't yet implemented M profile
floating point support, but since the change is simple,
we add it now because otherwise we're likely to forget to
do it later.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-13-peter.maydell@linaro.org
2018-10-08 14:55:05 +01:00
Peter Maydell
aa369e5c08 target/arm: Add v8M stack checks for Thumb push/pop
Add v8M stack checks for the 16-bit Thumb push/pop
encodings: STMDB, STMFD, LDM, LDMIA, LDMFD.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-12-peter.maydell@linaro.org
2018-10-08 14:55:05 +01:00
Peter Maydell
0bc003bad9 target/arm: Add v8M stack checks for T32 load/store single
Add v8M stack checks for the instructions in the T32
"load/store single" encoding class: these are the
"immediate pre-indexed" and "immediate, post-indexed"
LDR and STR instructions.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-11-peter.maydell@linaro.org
2018-10-08 14:55:04 +01:00
Peter Maydell
7c0ed88e7d target/arm: Add v8M stack checks for Thumb2 LDM/STM
Add the v8M stack checks for:
 * LDM (T2 encoding)
 * STM (T2 encoding)

This includes the 32-bit encodings of the instructions listed
in v8M ARM ARM rule R_YVWT as
 * LDM, LDMIA, LDMFD
 * LDMDB, LDMEA
 * POP (multiple registers)
 * PUSH (multiple registers)
 * STM, STMIA, STMEA
 * STMDB, STMFD

We perform the stack limit check before doing any other part
of the load or store.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-10-peter.maydell@linaro.org
2018-10-08 14:55:04 +01:00
Peter Maydell
910d7692e5 target/arm: Add v8M stack checks for LDRD/STRD (imm)
Add the v8M stack checks for:
 * LDRD (immediate)
 * STRD (immediate)

Loads and stores are more complicated than ADD/SUB/MOV, because we
must ensure that memory accesses below the stack limit are not
performed, so we can't simply do the check when we actually update
SP.

For these instructions, if the stack limit check triggers
we must not:
 * perform any memory access below the SP limit
 * update PC, SP or the load/store base register
but it is IMPDEF whether we:
 * perform any accesses above or equal to the SP limit
 * update destination registers for loads

For QEMU we choose to always check the limit before doing any other
part of the load or store, so we won't update any registers or
perform any memory accesses.

It is UNKNOWN whether the limit check triggers for a load or store
where the initial SP value is below the limit and one of the stores
would be below the limit, but the writeback moves SP to above the
limit.  For QEMU we choose to trigger the check in this situation.

Note that limit checks happen only for loads and stores which update
SP via writeback; they do not happen for loads and stores which
simply use SP as a base register.
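
In the decoder that policy comes out as: compute the value SP will
take, emit the limit check on it, and only then emit the memory
accesses and the writeback. A hedged sketch (gen_helper_v8m_stackcheck()
and store_reg() exist in translate.c; the other helpers named here are
illustrative):

    TCGv_i32 newsp = compute_writeback_sp(s, a);   /* illustrative */
    gen_helper_v8m_stackcheck(cpu_env, newsp);     /* may raise STKOF */
    emit_ldrd_or_strd_accesses(s, a);              /* illustrative */
    store_reg(s, 13, newsp);                       /* writeback to SP */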

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-9-peter.maydell@linaro.org
2018-10-08 14:55:04 +01:00
Peter Maydell
597610eb39 target/arm: Add v8M stack limit checks on NS function calls
Check the v8M stack limits when pushing the frame for a
non-secure function call via BLXNS.

In order to be able to generate the exception we need to
promote raise_exception() from being local to op_helper.c
so we can call it from helper.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-8-peter.maydell@linaro.org
2018-10-08 14:55:04 +01:00
Peter Maydell
c32da7aa62 target/arm: Add v8M stack checks on exception entry
Add checks for breaches of the v8M stack limit when the
stack pointer is decremented to push the exception frame
for exception entry.

Note that the exception-entry case is unique in that the
stack pointer is updated to be the limit value if the limit
is hit (per rule R_ZLZG).
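
Sketch of the entry-time handling (hedged: the cfsr/regs fields,
armv7m_nvic_set_pending() and ARMV7M_EXCP_USAGE are existing QEMU names;
the limit lookup and the STKOF mask name are paraphrased from these
patches, and the code is condensed from the frame-push function):

    uint32_t limit = v7m_sp_limit(env);        /* MSPLIM or PSPLIM */
    uint32_t frameptr = env->regs[13] - framesize;

    if (frameptr < limit) {
        env->regs[13] = limit;                 /* rule R_ZLZG: SP := limit */
        env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK;
        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
                                env->v7m.secure);
        return true;                           /* frame push abandoned */
    }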

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-7-peter.maydell@linaro.org
2018-10-08 14:55:04 +01:00
Peter Maydell
a2d12f0f34 target/arm: Add some comments in Thumb decode
Add some comments to the Thumb decoder indicating what bits
of the instruction have been decoded at various points in
the code.

This is not an exhaustive set of comments; we're gradually
adding comments as we work with particular bits of the code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-6-peter.maydell@linaro.org
2018-10-08 14:55:04 +01:00
Peter Maydell
5520318939 target/arm: Add v8M stack checks on ADD/SUB/MOV of SP
Add code to insert calls to a helper function to do the stack
limit checking when we handle these forms of instruction
that write to SP:
 * ADD (SP plus immediate)
 * ADD (SP plus register)
 * SUB (SP minus immediate)
 * SUB (SP minus register)
 * MOV (register)
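
In the decoder this becomes a guarded call to the new stack-check
helper on the value about to be written to SP; a hedged sketch (the
helper and store_reg() exist, the surrounding names are simplified):

    if (s->v8m_stackcheck && rd == 13) {
        gen_helper_v8m_stackcheck(cpu_env, newsp);   /* may raise STKOF */
    }
    store_reg(s, rd, newsp);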

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-5-peter.maydell@linaro.org
2018-10-08 14:55:04 +01:00
Peter Maydell
5529bf188d target/arm: Move v7m_using_psp() to internals.h
We're going to want v7m_using_psp() in op_helper.c in the
next patch, so move it from helper.c to internals.h.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-4-peter.maydell@linaro.org
2018-10-08 14:55:04 +01:00
Peter Maydell
86f026de22 target/arm: Define new EXCP type for v8M stack overflows
Define EXCP_STKOF, and arrange for it to cause us to take
a UsageFault with CFSR.STKOF set.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-3-peter.maydell@linaro.org
2018-10-08 14:55:04 +01:00
Peter Maydell
4730fb8503 target/arm: Define new TBFLAG for v8M stack checking
The Arm v8M architecture includes hardware stack limit checking.
When certain instructions update the stack pointer, if the new
value of SP is below the limit set in the associated limit register
then an exception is taken. Add a TB flag that tracks whether
the limit-checking code needs to be emitted.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20181002163556.10279-2-peter.maydell@linaro.org
2018-10-08 14:55:04 +01:00
Richard Henderson
500d04843b target/arm: Pass TCGMemOpIdx to sve memory helpers
There is quite a lot of code required to compute cpu_mmu_index,
or even put together the full TCGMemOpIdx.  This can easily be
done at translation time.
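
The translator already knows both the memop and the mmu index, so it
can pack them into one immediate for the helper; a hedged sketch
(make_memop_idx(), get_mmuidx() and get_mem_index() are existing
interfaces, the surrounding plumbing is omitted):

    /* At translate time: */
    TCGMemOpIdx oi = make_memop_idx(MO_TE | MO_UW, get_mem_index(s));

    /* In the SVE memory helper, at run time: */
    int mmu_idx = get_mmuidx(oi);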

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181005175350.30752-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-08 14:55:03 +01:00
Richard Henderson
116347ce20 target/arm: Rewrite vector gather first-fault loads
This implements the feature for softmmu, and moves the
main loop out of a macro and into a function.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181005175350.30752-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-08 14:55:03 +01:00
Richard Henderson
78cf1b886a target/arm: Rewrite vector gather stores
This fixes the endianness problem for softmmu, and moves
the main loop out of a macro and into an inlined function.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181005175350.30752-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-08 14:55:03 +01:00
Richard Henderson
d4f75f25b4 target/arm: Rewrite vector gather loads
This fixes the endianness problem for softmmu, and moves
the main loop out of a macro and into an inlined function.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181005175350.30752-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-08 14:55:03 +01:00
Richard Henderson
28d57f2dc5 target/arm: Split contiguous stores for endianness
We can choose the endianness at translation time, rather than
re-computing it at execution time.

Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181005175350.30752-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-08 14:55:03 +01:00