When building with GCC 10.2 configured with --extra-cflags=-Os, we get:
target/arm/m_helper.c: In function ‘arm_v7m_cpu_do_interrupt’:
target/arm/m_helper.c:1811:16: error: ‘restore_s16_s31’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
1811 | if (restore_s16_s31) {
| ^
target/arm/m_helper.c:1350:10: note: ‘restore_s16_s31’ was declared here
1350 | bool restore_s16_s31;
| ^~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
Initialize the 'restore_s16_s31' variable to silence the warning.
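The change is, in essence, a one-line initialization (a sketch of the
hunk, context trimmed):

    -    bool restore_s16_s31;
    +    bool restore_s16_s31 = false;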
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210119062739.589049-1-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Update all users of do_perm_pred3 for the new
predicate descriptor field definitions.
Cc: qemu-stable@nongnu.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210113062650.593824-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These two were odd, in that do_pfirst_pnext passed the
count of 64-bit words rather than bytes. Change to pass
the standard pred_full_reg_size to avoid confusion.
Cc: qemu-stable@nongnu.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210113062650.593824-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
SVE predicate operations cannot use the "usual" simd_desc
encoding, because the lengths are not a multiple of 8.
But we were abusing the SIMD_* fields to store values anyway.
This abuse broke when SIMD_OPRSZ_BITS was modified in e2e7168a21.
Introduce a new set of field definitions for exclusive use
of predicates, so that it is obvious what kind of predicate
we are manipulating. To be used in future patches.
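As a sketch of the shape of these definitions (field names and widths
here are illustrative assumptions, written with QEMU's
registerfields.h FIELD() macro, not necessarily the exact layout the
patch adds):

    FIELD(PREDDESC, OPRSZ, 0, 6)   /* predicate size in bytes */
    FIELD(PREDDESC, ESZ,   6, 2)   /* log2 of the element size */
    FIELD(PREDDESC, DATA,  8, 24)  /* operation-specific data */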
Cc: qemu-stable@nongnu.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210113062650.593824-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This adds handling for the SCR_EL3.EEL2 bit.
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Message-id: 20210112104511.36576-17-remi.denis.courmont@huawei.com
[PMM: Applied fixes for review issues noted by RTH:
- check for FEATURE_AARCH64 before checking sel2 isar feature
- correct the commit message subject line]
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
On ARMv8-A, accesses by 32-bit secure EL1 to monitor registers trap to
the upper (64-bit) EL. With Secure EL2 support, we can no longer assume
that that is always EL3, so make room for the value to be computed at
run-time.
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-16-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
stage_1_mmu_idx() already effectively keeps track of which
translation regimes have two stages. Don't hard-code another test.
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-13-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In the secure stage 2 translation regime, the VSTCR.SW and VTCR.NSW
bits can invert the secure flag for pagetable walks. This patch
allows S1_ptw_translate() to change the non-secure bit.
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-11-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The VTTBR write callback so far assumes that the underlying VM lies in
non-secure state. This patch handles the secure-state scenario as well.
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-10-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This adds the MMU indices for EL2 stage 1 in secure state.
To keep the code contained, as it is largely identical between secure
and non-secure modes, the MMU indices are reassigned. The new
assignments provide a systematic pattern with a non-secure bit.
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-8-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
With the ARMv8.4-SEL2 extension, EL2 is a legal exception level in
secure mode, though it can only be AArch64.
This patch adds the target EL for exceptions from 64-bit S-EL2.
It also fixes the target EL to EL2 when HCR.{A,F,I}MO are set in secure
mode. Those values were never used in practice as the effective value of
HCR was always 0 in secure mode.
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-7-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-6-remi.denis.courmont@huawei.com
[PMM: tweaked commit message to match reduced scope of patch
following rebase]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This adds a common helper to compute the effective value of MDCR_EL2.
That is the actual value if EL2 is enabled in the current security
context, or 0 otherwise.
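Roughly, the helper amounts to the following sketch (the actual name
and placement may differ; arm_is_el2_enabled() is the predicate
introduced earlier in this series):

    static inline uint64_t arm_mdcr_el2_eff(CPUARMState *env)
    {
        /* Effective MDCR_EL2: the real value when EL2 is enabled for
         * the current security state, 0 otherwise. */
        return arm_is_el2_enabled(env) ? env->cp15.mdcr_el2 : 0;
    }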
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-5-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This will simplify accessing HCR conditionally in secure state.
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-4-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Do not assume that EL2 is available in and only in non-secure context.
That equivalence is broken by ARMv8.4-SEL2.
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-3-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This checks whether EL2 is enabled (meaning EL2 registers take effect)
in the current security context.
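Once the whole series is applied (including the SCR_EL3.EEL2 handling
above), the check amounts to roughly the following; the helper name
and the SCR_EEL2 mask are assumptions here, not a quote of the patch:

    static inline bool arm_is_el2_enabled(CPUARMState *env)
    {
        if (!arm_feature(env, ARM_FEATURE_EL2)) {
            return false;               /* no EL2 at all */
        }
        if (arm_is_secure_below_el3(env)) {
            /* Secure EL2 only takes effect when SCR_EL3.EEL2 is set. */
            return (env->cp15.scr_el3 & SCR_EEL2) != 0;
        }
        return true;
    }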
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-2-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In this context, the HCR value is the effective value, and thus is
zero in secure mode. The tests for HCR.{F,I}MO are sufficient.
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-1-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The interface for object_property_add_bool is simpler,
making the code easier to understand.
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210111235740.462469-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The crypto overhead of emulating pauth can be significant for
some workloads. Add two boolean properties that allow the
feature to be turned off, on with the architected algorithm,
or on with an implementation-defined algorithm.
We need two intermediate booleans to control the state while
parsing properties lest we clobber ID_AA64ISAR1 into an invalid
intermediate state.
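A sketch of the wiring (the backing field names are assumptions; the
user-visible property names are "pauth" and "pauth-impdef"):

    static Property arm_cpu_pauth_property =
        DEFINE_PROP_BOOL("pauth", ARMCPU, prop_pauth, true);
    static Property arm_cpu_pauth_impdef_property =
        DEFINE_PROP_BOOL("pauth-impdef", ARMCPU, prop_pauth_impdef, false);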
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210111235740.462469-3-richard.henderson@linaro.org
[PMM: fixed docs typo, tweaked text to clarify that the impdef
algorithm is specific to QEMU]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Without hardware acceleration, a cryptographically strong
algorithm is too expensive for pauth_computepac.
Even with hardware acceleration, we do not currently expect
to link the linux-user binaries to any crypto libraries,
and doing so would generally make the --static build fail.
So choose XXH64 as a reasonably quick and decent hash.
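The resulting dispatch is, in outline (function and feature-test
names here are illustrative, not necessarily the exact ones in the
patch):

    static uint64_t pauth_computepac(CPUARMState *env, uint64_t data,
                                     uint64_t modifier, ARMPACKey key)
    {
        if (cpu_isar_feature(aa64_pauth_arch, env_archcpu(env))) {
            return pauth_computepac_architected(data, modifier, key);
        }
        /* QEMU implementation-defined algorithm, based on XXH64. */
        return pauth_computepac_impdef(data, modifier, key);
    }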
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210111235740.462469-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The public API is now defined in
hw/semihosting/common-semi.h. do_common_semihosting takes CPUState *
instead of CPUARMState *. All internal functions have been renamed with
a common_semi_ prefix instead of arm_semi_ or arm_. Aside from the API
change, there are no functional changes in this patch.
Signed-off-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20210107170717.2098982-3-keithp@keithp.com>
Message-Id: <20210108224256.2321-14-alex.bennee@linaro.org>
This commit renames two files which provide ARM semihosting support so
that they can be shared by other architectures:
1. target/arm/arm-semi.c -> hw/semihosting/common-semi.c
2. linux-user/arm/semihost.c -> linux-user/semihost.c
The build system was modified to use a new config variable,
CONFIG_ARM_COMPATIBLE_SEMIHOSTING, which has been added to the ARM
softmmu and linux-user default configs. The contents of the source
files have not been changed in this patch.
Signed-off-by: Keith Packard <keithp@keithp.com>
[AJB: rename arm-compat-semi, select SEMIHOSTING]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20210107170717.2098982-2-keithp@keithp.com>
Message-Id: <20210108224256.2321-13-alex.bennee@linaro.org>
While GDB can work with any XML description given to it, there is
special handling for SVE registers on the GDB side which makes the
user's life a little better. The changes aren't that major, and all
the registers save $vg report the same as before. All that changes is:
- report org.gnu.gdb.aarch64.sve
- use gdb nomenclature for names and types
- minor re-ordering of the types to match reference
- re-enable ieee_half (as we know gdb supports it now)
- $vg is now a 64 bit int
- check $vN and $zN aliasing in test
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Luis Machado <luis.machado@linaro.org>
Message-Id: <20210108224256.2321-11-alex.bennee@linaro.org>
gdb_exit() has never needed anything from env and I doubt we are going
to start now.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210108224256.2321-8-alex.bennee@linaro.org>
In commit cd8be50e58 we converted the A32 coprocessor
insns to decodetree. This accidentally broke XScale/iWMMXt insns,
because it moved the handling of "cp insns which are handled
by looking up the cp register in the hashtable" from after the
call to the legacy disas_xscale_insn() decode to before it,
with the result that all XScale/iWMMXt insns now UNDEF.
Update valid_cp() so that it knows that on XScale cp 0 and 1
are not standard coprocessor instructions; this will cause
the decodetree trans_ functions to ignore them, so that
execution will correctly get through to the legacy decode again.
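In outline, the fix looks like this (the real valid_cp() also encodes
the per-architecture restrictions on which coprocessor numbers are
decoded; only the XScale clause sketched here is new):

    static bool valid_cp(DisasContext *s, int cp)
    {
        /*
         * On XScale, cp0 and cp1 are the XScale/iWMMXt instruction
         * space, not generic coprocessor accesses, so decodetree must
         * not claim them; returning false lets them fall through to
         * the legacy disas_xscale_insn() decoder.
         */
        if (arm_dc_feature(s, ARM_FEATURE_XSCALE) && cp < 2) {
            return false;
        }
        return cp < 8 || cp >= 14;
    }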
Cc: qemu-stable@nongnu.org
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Message-id: 20210108195157.32067-1-peter.maydell@linaro.org
When FEAT_MTE is implemented, the AArch64 view of CTR_EL0 adds the
TminLine field in bits [37:32].
Extend the ctr field to be able to hold these bits.
Signed-off-by: Leif Lindholm <leif@nuviainc.com>
Reviewed-by: Hao Wu <wuhaotsh@google.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20210108185154.8108-4-leif@nuviainc.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The AArch64 view of CLIDR_EL1 extends the ICB field to also include
bit 32, as well as adding a Ttype<n> field when FEAT_MTE is implemented.
Extend the clidr field to be able to hold these bits.
Signed-off-by: Leif Lindholm <leif@nuviainc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20210108185154.8108-3-leif@nuviainc.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This adds support for the Small Translation tables extension in AArch64 state.
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The Arm CPU finalize function uses a sequence of timer_del(), timer_deinit(),
timer_free() to free the timer. The timer_deinit() step in this was always
unnecessary, and now the timer_del() is implied by timer_free(), so we can
collapse this down to simply calling timer_free().
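Concretely the change is of this shape (sketched using the PMU timer
as the example):

    -    timer_del(cpu->pmu_timer);
    -    timer_deinit(cpu->pmu_timer);
    -    timer_free(cpu->pmu_timer);
    +    timer_free(cpu->pmu_timer);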
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201215154107.3255-5-peter.maydell@linaro.org
Now that we have implemented all the features needed by the v8.1M
architecture, we can add the model of the Cortex-M55. This is the
configuration without MVE support; we'll add MVE later.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201210201433.26262-5-peter.maydell@linaro.org
Implement the v8.1M FPCXT_NS floating-point system register. This is
a little more complicated than FPCXT_S, because it has specific
handling for "current FP state is inactive", and it only wants to do
PreserveFPState(), not the full set of actions done by
ExecuteFPCheck() which vfp_access_check() implements.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201210201433.26262-4-peter.maydell@linaro.org
In commit 64f863baee we implemented the v8.1M FPCXT_S register,
but we got the write behaviour wrong. On read, this register reads
bits [27:0] of FPSCR plus the CONTROL.SFPA bit. On write, it doesn't
just write back those bits -- it writes a value to the whole FPSCR,
whose upper 4 bits are zeroes.
We also incorrectly implemented the write-to-FPSCR as a simple store
to vfp.xregs; this skips the "update the softfloat flags" part of
the vfp_set_fpscr helper so the value would read back correctly but
not actually take effect.
Fix both of these things by doing a complete write to the FPSCR
using the helper function.
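The intended write semantics, expressed as helper-level pseudologic
rather than the actual TCG code the patch generates:

    /* 'value' is what the guest stores to FPCXT_S. */
    env->v7m.control[M_REG_S] =
        deposit32(env->v7m.control[M_REG_S],
                  R_V7M_CONTROL_SFPA_SHIFT, 1, extract32(value, 31, 1));
    /* Whole-FPSCR write, with bits [31:28] zeroed, via the helper so
     * that the softfloat status flags are updated too. */
    vfp_set_fpscr(env, value & 0x0fffffff);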
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201210201433.26262-3-peter.maydell@linaro.org
In 50244cc76a we updated mte_check_fail to match the ARM
pseudocode, using the correct EL to select the TCF field.
But we failed to update MTE0_ACTIVE the same way, which led
to g_assert_not_reached().
Cc: qemu-stable@nongnu.org
Buglink: https://bugs.launchpad.net/bugs/1907137
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201221204426.88514-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There is nothing within the translators that ought to be
changing the TranslationBlock data, so make it const.
This does not actually use the read-only copy of the
data structure that exists within the rx region.
Reviewed-by: Joelle van Dyne <j@getutm.app>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
There is nothing within the translators that ought to be
changing the TranslationBlock data, so make it const.
This does not actually use the read-only copy of the
data structure that exists within the rx region.
Reviewed-by: Joelle van Dyne <j@getutm.app>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Commit 8118f0950f "migration: Append JSON description of migration
stream" needs a JSON writer. The existing qobject_to_json() wasn't a
good fit, because it requires building a QObject to convert. Instead,
migration got its very own JSON writer, in commit 190c882ce2 "QJSON:
Add JSON writer". It tacitly limits numbers to int64_t, and strings
contents to characters that don't need escaping, unlike
qobject_to_json().
The previous commit factored the JSON writer out of qobject_to_json().
Replace migration's JSON writer by it.
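For reference, a sketch of how the shared writer is driven (call names
follow my understanding of the json-writer API and should be treated
as assumptions, not a quote of the migration code):

    JsonWriter *w = json_writer_new(false);   /* false: compact output */
    json_writer_start_object(w, NULL);
    json_writer_str(w, "section", "example");
    json_writer_int64(w, "instance_id", 0);
    json_writer_end_object(w);
    /* ... hand the resulting string to the stream, then free w ... */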
Cc: Juan Quintela <quintela@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20201211171152.146877-17-armbru@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Anywhere we create a list of just one item, or build a list by
prepending items (typically because order doesn't matter), we can use
QAPI_LIST_PREPEND(). But places where we must keep the list in order
by appending remain open-coded until later patches.
Note that as a side effect, this also performs a cleanup of two minor
issues in qga/commands-posix.c: the old code was performing
new = g_malloc0(sizeof(*ret));
which 1) is confusing because you have to verify whether 'new' and
'ret' are variables with the same type, and 2) would conflict with C++
compilation (not an actual problem for this file, but makes
copy-and-paste harder).
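For illustration, the transformation is roughly the following (shown
with the generated strList type; QAPI_LIST_PREPEND() lives in
qapi/util.h):

    /* Before: open-coded prepend */
    strList *entry = g_malloc0(sizeof(*entry));
    entry->value = g_strdup("example");
    entry->next = list;
    list = entry;

    /* After */
    QAPI_LIST_PREPEND(list, g_strdup("example"));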
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20201113011340.463563-5-eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
[Straightforward conflicts due to commit a8aa94b5f8 "qga: update
schema for guest-get-disks 'dependents' field" and commit a10b453a52
"target/mips: Move mips_cpu_add_definition() from helper.c to cpu.c"
resolved. Commit message tweaked.]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Class properties make QOM introspection simpler and easier, as
they don't require an object to be instantiated.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20201111183823.283752-8-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>