Improve the comments for the instantiation functions to highlight what
each CPU class in the 68000 series contains in terms of
instructions/features.
Signed-off-by: Lucien Murray-Pitts <lucienmp.qemu@gmail.com>
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Message-Id: <2dfe32672ee6ddce4b54c6bcfce579d35abeaf51.1612137712.git.balaton@eik.bme.hu>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
When working with performance monitoring counters, we look at
MDCR_EL2.HPMN when checking whether a counter is enabled. This
check fails because MDCR_EL2.HPMN is reset to 0, meaning that no
counters are "enabled" below EL2.
That's in violation of the Arm specification, which states that
> On a Warm reset, this field [MDCR_EL2.HPMN] resets to the value in
> PMCR_EL0.N
That's also what a comment in the code acknowledges, but the necessary
adjustment seems to have been forgotten when support for more counters
was added.
This change fixes the issue by setting the reset value to PMCR.N, which
is four.
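As a rough sketch of what that means (simplified; the constant and function
names here are illustrative, not the actual QEMU code):
    #include <stdbool.h>
    #include <stdint.h>
    #define PMCR_NUM_COUNTERS 4u   /* PMCR_EL0.N for this CPU model */
    /* Counter n is usable below EL2 only when n < MDCR_EL2.HPMN
     * (bits [4:0]), so an HPMN reset value of 0 makes every counter
     * appear disabled at EL0/EL1. */
    static bool counter_enabled_below_el2(uint64_t mdcr_el2, unsigned n)
    {
        return n < (mdcr_el2 & 0x1f);   /* HPMN field */
    }
    /* The fix: MDCR_EL2.HPMN resets to PMCR_EL0.N (i.e. 4), not 0. */
    static uint64_t mdcr_el2_reset_value(void)
    {
        return PMCR_NUM_COUNTERS;
    }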
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
cpsr has been treated as being the same as spsr, but it isn't.
Since PSTATE_SS isn't in cpsr, remove it and move it into env->pstate.
This allows us to add support for CPSR_DIT, adding helper functions
to merge SPSR_ELx to and from CPSR.
Signed-off-by: Rebecca Cran <rebecca@nuviainc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210208065700.19454-3-rebecca@nuviainc.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add support for FEAT_DIT. DIT (Data Independent Timing) is a required
feature for ARMv8.4. Since virtual machine execution is largely
nondeterministic and TCG is outside of the security domain, it's
implemented as a NOP.
Signed-off-by: Rebecca Cran <rebecca@nuviainc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210208065700.19454-2-rebecca@nuviainc.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The FW and AW bits of SCR_EL3 are RES1 only in some contexts. Force them
to 1 only when there is no support for AArch32 at EL1 or above.
The reset value will be 0x30 only if the CPU is AArch64-only; if there
is support for AArch32 at EL1 or above, it will be reset to 0.
Also add a helper function, isar_feature_aa64_aa32_el1, to check whether
AArch32 is supported at EL1 or above.
Signed-off-by: Mike Nawrocki <michael.nawrocki@gtri.gatech.edu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210203165552.16306-2-michael.nawrocki@gtri.gatech.edu
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
As feature flags are added or removed, the meanings of bits in the
`features` field can change between QEMU versions, causing migration
failures. Additionally, migrating the field is not useful because it is
a constant function of the CPU being used.
Fixes: LP:1914696
Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Tested-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Per EREF 2.0 [1] chapter 3.11.2:
The following bits in L2CSR0 (present in the e500mc/e5500/e6500 cores):
- L2FI (L2 cache flash invalidate)
- L2FL (L2 cache flush)
- L2LFC (L2 cache lock flash clear)
initiate a cache operation in hardware when set; the bits are cleared
when the operation is complete.
Since we don't model the cache in QEMU, let's add a write helper to emulate
the cache operations completing instantly.
[1] https://www.nxp.com/files-static/32bit/doc/ref_manual/EREFRM.pdf
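A minimal sketch of such a helper (the bit positions below are placeholders,
not the EREF-defined ones):
    #include <stdint.h>
    #define L2CSR0_L2FI   (1u << 21)   /* flash invalidate (assumed position) */
    #define L2CSR0_L2FL   (1u << 11)   /* flush (assumed position)            */
    #define L2CSR0_L2LFC  (1u << 10)   /* lock flash clear (assumed position) */
    /* No L2 cache is modelled, so any requested operation "completes"
     * instantly: clear the operation bits before storing the value. */
    static uint32_t l2csr0_write(uint32_t val)
    {
        return val & ~(L2CSR0_L2FI | L2CSR0_L2FL | L2CSR0_L2LFC);
    }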
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Message-Id: <1612925152-20913-1-git-send-email-bmeng.cn@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Remove these confusing and unused definitions.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210127232401.3525126-1-f4bug@amsat.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Expose the VMX exit/entry load PKRS control bits in the
VMX_TRUE_EXIT_CTLS/VMX_TRUE_ENTRY_CTLS MSRs to the guest, which enables
PKS support in nested VMs.
Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Message-Id: <20210205083325.13880-3-chenyi.qiang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
PKS introduces the IA32_PKRS MSR (0x6e1) to manage supervisor protection
key rights. Page accesses and writes can be controlled via MSR updates,
without TLB flushes, when permissions change.
Add support for saving/loading the IA32_PKRS MSR for the guest.
Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Message-Id: <20210205083325.13880-2-chenyi.qiang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Protection Keys for Supervisor-mode pages is a simple extension of
the PKU feature that QEMU already implements. For supervisor-mode
pages, protection key restrictions come from a new MSR. The MSR
has no XSAVE state associated with it.
PKS is only respected in long mode. However, in principle it is
possible to set the MSR even outside long mode, and in fact
even the XSAVE state for PKRU could be set outside long mode
using XRSTOR. So do not limit the migration subsections for
PKRU and PKRS to long mode.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch fixes a translation bug for a subset of x86 BMI instructions
such as the following:
c4 e2 f9 f7 c0 shlxq %rax, %rax, %rax
Currently, these incorrectly generate an undefined instruction exception
when SSE is disabled via CR4, while instructions like "shrxq" work fine.
The problem appears to be related to BMI instructions encoded using VEX
and with a mandatory prefix of "0x66" (data). Instructions with this
data prefix (such as shlxq) are currently rejected. Instructions with
other mandatory prefixes (such as shrxq) translate as expected.
This patch removes the incorrect check in "gen_sse" that causes the
exception to be generated. For the non-BMI cases, the check is
redundant: prefixes are already checked at line 3696.
Buglink: https://bugs.launchpad.net/qemu/+bug/1748296
Signed-off-by: David Greenaway <dgreenaway@google.com>
Message-Id: <20210114063958.1508050-1-dgreenaway@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Newer AMD CPUs will add the CPUID_0x8000000A_EDX[28] bit, which indicates
that SVM instructions (VMRUN/VMSAVE/VMLOAD) will trigger #VMEXIT before
the CPU checks their EAX against reserved memory regions. This change will
allow the hypervisor to avoid intercepting #GP and emulating SVM
instructions. KVM turns on this CPUID bit for nested VMs. In order to
support it, let us populate this bit, along with other SVM feature bits,
in FEAT_SVM.
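For reference, the bit can be probed from a guest like this (the leaf and
bit number are as stated above; the function name is made up for
illustration):
    #include <cpuid.h>
    #include <stdbool.h>
    /* CPUID.8000000Ah:EDX[28] set => VMRUN/VMSAVE/VMLOAD #VMEXIT before
     * the CPU checks EAX against reserved memory regions. */
    static bool svm_vmexit_before_eax_check(void)
    {
        unsigned int eax, ebx, ecx, edx;
        if (!__get_cpuid(0x8000000A, &eax, &ebx, &ecx, &edx)) {
            return false;
        }
        return edx & (1u << 28);
    }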
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Message-Id: <20210126202456.589932-1-wei.huang2@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
32-bit targets by definition do not support long mode; therefore, the
bit must be masked in the features supported by the accelerator.
As a side effect, this avoids setting up the 0x80000008 CPUID leaf
for
qemu-system-i386 -cpu host
which since commit 5a140b255d ("x86/cpu: Use max host physical address
if -cpu max option is applied") would have printed this error:
qemu-system-i386: phys-bits should be between 32 and 36 (but is 48)
Reported-by: Nathan Chancellor <natechancellor@gmail.com>
Tested-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Some upcoming POWER machines have a system called PEF (Protected
Execution Facility) which uses a small ultravisor to allow guests to
run in a way that they can't be eavesdropped on by the hypervisor. The
effect is roughly similar to AMD SEV, although the mechanisms are
quite different.
Most of the work of this is done between the guest, KVM and the
ultravisor, with little need for involvement by QEMU. However, QEMU
does need to tell KVM to allow secure VMs.
Because the availability of secure mode is a guest visible difference
which depends on having the right hardware and firmware, we don't
enable this by default. In order to run a secure guest you need to
create a "pef-guest" object and set the confidential-guest-support
property to point to it.
Note that this just *allows* secure guests; the architecture of PEF is
such that the guest still needs to talk to the ultravisor to enter
secure mode. QEMU has no direct way of knowing if the guest is in
secure mode, and certainly can't know until well after machine
creation time.
To start a PEF-capable guest, use the command line options:
-object pef-guest,id=pef0 -machine confidential-guest-support=pef0
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
While we've abstracted some (potential) differences between mechanisms for
securing guest memory, the initialization is still specific to SEV. Given
that, move it into x86's kvm_arch_init() code, rather than the generic
kvm_init() code.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
The platform specific details of mechanisms for implementing
confidential guest support may require setup at various points during
initialization. Thus, it's not really feasible to have a single cgs
initialization hook, but instead each mechanism needs its own
initialization calls in arch or machine specific code.
However, to make it harder to have a bug where a mechanism isn't
properly initialized under some circumstances, we want to have a
common place, late in boot, where we verify that cgs has been
initialized if it was requested.
This patch introduces a ready flag to the ConfidentialGuestSupport
base type to accomplish this, which we verify in
qemu_machine_creation_done().
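The pattern is essentially the following (a sketch; the names follow the
description above rather than the actual QEMU code):
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>
    typedef struct ConfidentialGuestSupport {
        bool ready;   /* set by the arch/machine-specific init path */
    } ConfidentialGuestSupport;
    /* Late in machine creation: if cgs was requested but nothing ever
     * initialized it, refuse to continue rather than silently booting
     * an unprotected guest. */
    static void check_cgs_ready(ConfidentialGuestSupport *cgs)
    {
        if (cgs && !cgs->ready) {
            fprintf(stderr, "confidential guest support was requested "
                            "but not initialized\n");
            exit(1);
        }
    }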
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
This allows failures to be reported richly and idiomatically.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Currently the "memory-encryption" property is only looked at once we
get to kvm_init(). Although protection of guest memory from the
hypervisor isn't something that could really ever work with TCG, it's
not conceptually tied to the KVM accelerator.
In addition, the way the string property is resolved to an object is
almost identical to how a QOM link property is handled.
So, create a new "confidential-guest-support" link property which sets
this QOM interface link directly in the machine. For compatibility we
keep the "memory-encryption" property, but now implemented in terms of
the new property.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
When AMD's SEV memory encryption is in use, flash memory banks (which are
initialized by pc_system_flash_map()) need to be encrypted with the guest's
key, so that the guest can read them.
That's abstracted via the kvm_memcrypt_encrypt_data() callback in the KVM
state... except that it doesn't really abstract much at all.
For starters, the only call site is in code specific to the 'pc'
family of machine types, so it's obviously specific to those and to
x86 to begin with. But it makes a bunch of further assumptions that
need not be true about an arbitrary confidential guest system based on
memory encryption, let alone one based on other mechanisms:
* it assumes that the flash memory is defined to be encrypted with the
guest key, rather than being shared with hypervisor
* it assumes that the hypervisor has some mechanism to encrypt data into
the guest, even though it can't decrypt it out, since that's the whole
point
* the interface assumes that this encryption can be done in place, which
implies that the hypervisor can write into a confidential guest's
memory, even if what it writes isn't meaningful
So really, this "abstraction" is actually pretty specific to the way SEV
works. This patch therefore removes it and instead has the PC flash
initialization code call into a SEV-specific callback.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Several architectures have mechanisms which are designed to protect
guest memory from interference or eavesdropping by a compromised
hypervisor. AMD SEV does this with in-chip memory encryption and
Intel's TDX can do similar things. POWER's Protected Execution
Facility (PEF) accomplishes a similar goal using an ultravisor and
new memory protection features, instead of encryption.
To (partially) unify handling for these, this introduces a new
ConfidentialGuestSupport QOM base class. "Confidential" is kind of vague,
but "confidential computing" seems to be the buzzword about these schemes,
and "secure" or "protected" are often used in connection to unrelated
things (such as hypervisor-from-guest or guest-from-guest security).
The "support" in the name is significant because in at least some of the
cases it requires the guest to take specific actions in order to protect
itself from hypervisor eavesdropping.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This will allow us to centralize the registration of
the cpus.c module accelerator operations (in accel/accel-softmmu.c),
and trigger it automatically using object hierarchy lookup from the
new accel_init_interfaces() initialization step, depending just on
which accelerators are available in the code.
Rename all tcg-cpus.c, kvm-cpus.c, etc to tcg-accel-ops.c,
kvm-accel-ops.c, etc, matching the object type names.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Message-Id: <20210204163931.7358-18-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We cannot in principle make the TCG Operations field definitions
conditional on CONFIG_TCG in code that is included by both common_ss
and specific_ss modules.
Therefore, what we can do safely to restrict the TCG fields to TCG-only
builds is to move all TCG cpu operations into a separate header file,
which is only included by TCG target-specific code.
This leaves just a NULL pointer in cpu.h for the non-TCG builds.
This also tidies up the code in all targets a bit, having all TCG cpu
operations neatly contained by a dedicated data struct.
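Schematically, the split looks like this (a sketch; the real struct carries
more hooks than shown):
    /* tcg-cpu-ops.h: included only from TCG target-specific code */
    typedef struct TCGCpuOperations {
        void (*initialize)(void);
        /* ... the remaining TCG-only hooks ... */
    } TCGCpuOperations;
    /* cpu.h: visible to common_ss code, which only sees a pointer that
     * stays NULL in non-TCG builds */
    typedef struct CPUClass {
        /* ... */
        const struct TCGCpuOperations *tcg_ops;
    } CPUClass;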
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Message-Id: <20210204163931.7358-16-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
commit 568496c0c0 ("cpu: Add callback to check architectural") and
commit 3826121d92 ("target-arm: Implement checking of fired")
introduced an ARM-specific hack for cpu_check_watchpoint.
Make debug_check_watchpoint optional, and move it to tcg_ops.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20210204163931.7358-15-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
commit 4061200059 ("arm: Correctly handle watchpoints for BE32 CPUs")
introduced this ARM-specific, TCG-specific hack to adjust the address,
before checking it with cpu_check_watchpoint.
Make adjust_watchpoint_address optional and move it to tcg_ops.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20210204163931.7358-14-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
make it consistently SOFTMMU-only.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
[claudio: make the field presence in cpu.h unconditional, removing the ifdefs]
Message-Id: <20210204163931.7358-12-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Note: need to be careful with the use of CONFIG_USER_ONLY,
avoiding its use in headers used by common_ss code (should be poisoned).
[claudio: wrap target code around CONFIG_TCG and !CONFIG_USER_ONLY]
Message-Id: <20210204163931.7358-11-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
cc->do_interrupt is in theory a TCG callback used in accel/tcg only,
to prepare the emulated architecture to take an interrupt as defined
in the hardware specifications.
In reality, however, the _do_interrupt style of functions in targets is
also occasionally reused by KVM to prepare the architecture state in a
similar way, where userspace code has identified that it needs to
deliver an exception to the guest.
In the case of ARM, that includes:
1) the vcpu thread got a SIGBUS indicating a memory error,
and we need to deliver a Synchronous External Abort to the guest to
let it know about the error.
2) the kernel told us about a debug exception (breakpoint, watchpoint)
but it is not for one of QEMU's own gdbstub breakpoints/watchpoints
so it must be a breakpoint the guest itself has set up, therefore
we need to deliver it to the guest.
So in order to reuse code, the same arm_do_interrupt function is used.
This is all fine, but we need to avoid calling it using the callback
registered in CPUClass, since that one is now TCG-only.
Fortunately this is easily solved by replacing calls to
CPUClass::do_interrupt() with explicit calls to arm_do_interrupt().
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20210204163931.7358-9-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
For now only TCG is allowed as an accelerator for riscv,
so remove the CONFIG_TCG use.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20210204163931.7358-3-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The TCG-specific CPU methods will be moved to a separate struct,
to make it easier to move accel-specific code outside generic CPU
code in the future. Start by moving tcg_initialize().
The new CPUClass.tcg_ops field may eventually become a pointer,
but keep it an embedded struct for now, to make code conversion
easier.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
[claudio: move TCGCpuOperations inside include/hw/core/cpu.h]
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20210204163931.7358-2-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Only define the register if it exists for the cpu.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210120031656.737646-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This was defined at some point before ARMv8.4, and will
shortly be used by new processor descriptions.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210120204400.1056582-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
gcc (Debian 10.2.1-6) 10.2.1 20210110 aborts builds when sanitizers are enabled:
../../../target/rx/op_helper.c: In function ‘helper_scmpu’:
../../../target/rx/op_helper.c:213:24: error: ‘tmp1’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
213 | env->psw_c = (tmp0 >= tmp1);
| ~~~~~~^~~~~~~~
../../../target/rx/op_helper.c:213:24: error: ‘tmp0’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
../../../target/rx/op_helper.c: In function ‘helper_suntil’:
../../../target/rx/op_helper.c:299:23: error: ‘tmp’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
299 | env->psw_c = (tmp <= env->regs[2]);
| ~~~~~^~~~~~~~~~~~~~~~
../../../target/rx/op_helper.c: In function ‘helper_swhile’:
../../../target/rx/op_helper.c:318:23: error: ‘tmp’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
318 | env->psw_c = (tmp <= env->regs[2]);
| ~~~~~^~~~~~~~~~~~~~~~
Rewriting the code fixes those errors.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210128172127.46041-1-sw@weilnetz.de>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
The easiest spots to use QAPI_LIST_APPEND are where we already have an
obvious pointer to the tail of a list. While at it, consistently use
the variable name 'tail' for that purpose.
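The shape of a typical conversion (FooList and make_foo are placeholders
standing in for a QAPI-generated list type and its element constructor):
    /* before: open-coded tail tracking */
    FooList *list = NULL, **tail = &list;
    for (i = 0; i < n; i++) {
        *tail = g_new0(FooList, 1);
        (*tail)->value = make_foo(i);
        tail = &(*tail)->next;
    }
    /* after: the macro hides the boilerplate and 'tail' keeps its meaning */
    FooList *list = NULL, **tail = &list;
    for (i = 0; i < n; i++) {
        QAPI_LIST_APPEND(tail, make_foo(i));
    }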
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20210113221013.390592-5-eblake@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Using the cfg.use_non_secure bitfield and the MMU access type, we can determine
if the access should be secure or not.
Signed-off-by: Joe Komlodi <komlodi@xilinx.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-Id: <1611274735-303873-4-git-send-email-komlodi@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Using MMUAccessType makes it more clear what the variable's use is.
No functional change.
Signed-off-by: Joe Komlodi <komlodi@xilinx.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-Id: <1611274735-303873-3-git-send-email-komlodi@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
This property is used to control the security of the following interfaces
on MicroBlaze:
M_AXI_DP - data interface
M_AXI_IP - instruction interface
M_AXI_DC - dcache interface
M_AXI_IC - icache interface
It works by enabling or disabling the use of the non_secure[3:0] signals.
Interfaces and their corresponding values are taken from:
https://www.xilinx.com/support/documentation/sw_manuals/xilinx2020_2/ug984-vivado-microblaze-ref.pdf
page 153.
Signed-off-by: Joe Komlodi <komlodi@xilinx.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-Id: <1611274735-303873-2-git-send-email-komlodi@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
The QEMU option -cpu max (max_features) means "enable all features
supported by the accelerator in the current host". This holds for all
features except the guest's maximum physical address width, so add this
patch to enable it as well.
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20210113090430.26394-1-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When using strncpy with a length equal to the size of the target array,
GCC 11 reports the following warning:
warning: '__builtin_strncpy' specified bound 256 equals destination size [-Wstringop-truncation]
We can prevent this warning by using strpadcpy, which copies the string
up to the specified length, zeroes the target array after the copied
string, and does not warn when the length equals the target array
size (the ending '\0' is discarded in that case).
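The conversion is mechanical; assuming the usual
strpadcpy(buf, buf_size, str, pad) signature from QEMU's cutils, it looks
like:
    char dst[256];
    /* before: GCC 11 warns, and dst may be left unterminated */
    strncpy(dst, src, sizeof(dst));
    /* after: copies at most sizeof(dst) bytes and pads the remainder
     * with '\0'; if src fills the buffer exactly, the terminator is
     * simply dropped, without any warning */
    strpadcpy(dst, sizeof(dst), src, '\0');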
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <6f86915755219cf6a671788075da4809b57f7d7b.1610607906.git.mrezanin@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
In our EXECUTE fast path, we have to ignore the content of r0 if it is
specified by b1 or b2.
Fixes: d376f123c7 ("target/s390x: Re-implement a few EXECUTE target insns directly")
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Tested-by: Nick Desaulniers <ndesaulniers@google.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210111163845.18148-6-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Using get_address() with register identifiers coming from an "r" field
is wrong: if the "r" field designates "r0", we don't read the content
and instead assume 0 - which should only be applied when the register
was specified via "b" or "x".
PoP 5-11 "Operand-Address Generation":
"A zero in any of the B1, B2, X2, B3, or B4 fields indicates the absence
of the corresponding address component. For the absent component, a zero
is used in forming the intermediate sum, regardless of the contents of
general register 0. A displacement of zero has no special significance."
This BUG became visible for CSPG as generated by LLVM-12 in the upstream
Linux kernel (v5.11-rc2), used while creating the linear mapping in
vmem_map_init(): Trying to store to address 0 results in a Low Address
Protection exception.
Debugging this was more complicated than it could have been: The program
interrupt handler in the kernel will try to crash the kernel: doing so, it
will enable DAT. As the linear mapping is not created yet (asce=0), we run
into an addressing exception while tring to walk non-existant DAT tables,
resulting in a program exception loop.
This allows booting upstream Linux kernels compiled by clang-12. Most
of these cases seem to have been broken forever.
Reported-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Nick Desaulniers <ndesaulniers@google.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Cc: Guenter Roeck <linux@roeck-us.net>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210111163845.18148-4-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
RISBHG is broken and currently prevents clang-11 builds of upstream kernels
from booting: the kernel crashes early, while decompressing the image.
[...]
Kernel fault: interruption code 0005 ilc:2
Kernel random base: 0000000000000000
PSW : 0000200180000000 0000000000017a1e
R:0 T:0 IO:0 EX:0 Key:0 M:0 W:0 P:0 AS:0 CC:2 PM:0 RI:0 EA:3
GPRS: 0000000000000001 0000000c00000000 00000003fffffff4 00000000fffffff0
0000000000000000 00000000fffffff4 000000000000000c 00000000fffffff0
00000000fffffffc 0000000000000000 00000000fffffff8 00000000008e25a8
0000000000000009 0000000000000002 0000000000000008 000000000000bce0
One example of a buggy instruction is:
17dde: ec 1e 00 9f 20 5d risbhg %r1,%r14,0,159,32
With %r14 = 0x9 and %r1 = 0x7, this should result in %r1 = 0x900000007;
however, it results in %r1 = 0.
Let's interpret the values of i3/i4 as documented in the PoP, base the
computation of "mask" only on i3 and i4, and use "pmask" only at the
very end to make sure wrapping is applied only to the high/low doubleword.
With this patch, I can successfully boot a v5.11-rc2 kernel built with
clang-11, and gcc builds keep on working.
Fixes: 2d6a869833 ("target-s390: Implement RISBG")
Reported-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Nick Desaulniers <ndesaulniers@google.com>
Cc: Guenter Roeck <linux@roeck-us.net>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210111163845.18148-3-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Looks like something went wrong while touching that line. Instead of "r1"
we need a new temporary. Also, we have to pass MO_TEQ to indicate that
we are working with 64-bit values. Let's revert these changes.
Fixes: ff26d287bd ("target/s390x: Improve cc computation for ADD LOGICAL")
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Nick Desaulniers <ndesaulniers@google.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210111163845.18148-2-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
When building with GCC 10.2 configured with --extra-cflags=-Os, we get:
target/arm/m_helper.c: In function ‘arm_v7m_cpu_do_interrupt’:
target/arm/m_helper.c:1811:16: error: ‘restore_s16_s31’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
1811 | if (restore_s16_s31) {
| ^
target/arm/m_helper.c:1350:10: note: ‘restore_s16_s31’ was declared here
1350 | bool restore_s16_s31;
| ^~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
Initialize the 'restore_s16_s31' variable to silence the warning.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210119062739.589049-1-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Update all users of do_perm_pred3 for the new
predicate descriptor field definitions.
Cc: qemu-stable@nongnu.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210113062650.593824-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These two were odd, in that do_pfirst_pnext passed the
count of 64-bit words rather than bytes. Change to pass
the standard pred_full_reg_size to avoid confusion.
Cc: qemu-stable@nongnu.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210113062650.593824-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
SVE predicate operations cannot use the "usual" simd_desc
encoding, because the lengths are not a multiple of 8.
But we were abusing the SIMD_* fields to store values anyway.
This abuse broke when SIMD_OPRSZ_BITS was modified in e2e7168a21.
Introduce a new set of field definitions for exclusive use
of predicates, so that it is obvious what kind of predicate
we are manipulating. To be used in future patches.
Cc: qemu-stable@nongnu.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210113062650.593824-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This adds handling for the SCR_EL3.EEL2 bit.
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Message-id: 20210112104511.36576-17-remi.denis.courmont@huawei.com
[PMM: Applied fixes for review issues noted by RTH:
- check for FEATURE_AARCH64 before checking sel2 isar feature
- correct the commit message subject line]
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
On ARMv8-A, accesses by 32-bit secure EL1 to monitor registers trap to
the upper (64-bit) EL. With Secure EL2 support, we can no longer assume
that that is always EL3, so make room for the value to be computed at
run-time.
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-16-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The stage_1_mmu_idx() already effectively keeps track of which
translation regimes have two stages. Don't hard-code another test.
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-13-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In the secure stage 2 translation regime, the VSTCR.SW and VTCR.NSW
bits can invert the secure flag for pagetable walks. This patchset
allows S1_ptw_translate() to change the non-secure bit.
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-11-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The VTTBR write callback so far assumes that the underlying VM lies in
non-secure state. This handles the secure state scenario.
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-10-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This adds the MMU indices for EL2 stage 1 in secure state.
To keep the code contained (it is largely identical between secure and
non-secure modes), the MMU indices are reassigned. The new assignments
provide a systematic pattern with a non-secure bit.
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-8-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
With the ARMv8.4-SEL2 extension, EL2 is a legal exception level in
secure mode, though it can only be AArch64.
This patch adds the target EL for exceptions from 64-bit S-EL2.
It also fixes the target EL to EL2 when HCR.{A,F,I}MO are set in secure
mode. Those values were never used in practice as the effective value of
HCR was always 0 in secure mode.
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-7-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-6-remi.denis.courmont@huawei.com
[PMM: tweaked commit message to match reduced scope of patch
following rebase]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This adds a common helper to compute the effective value of MDCR_EL2.
That is the actual value if EL2 is enabled in the current security
context, or 0 otherwise.
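In shape, that is just the following (a sketch; the helper and predicate
names are illustrative, the EL2-enabled check being the one introduced
earlier in this series):
    /* Effective MDCR_EL2: the register value when EL2 is enabled in the
     * current security state, 0 otherwise. */
    static uint64_t mdcr_el2_effective(CPUARMState *env)
    {
        return el2_is_enabled(env) ? env->cp15.mdcr_el2 : 0;
    }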
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-5-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This will simplify accessing HCR conditionally in secure state.
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-4-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Do not assume that EL2 is available in and only in non-secure context.
That equivalence is broken by ARMv8.4-SEL2.
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-3-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This checks if EL2 is enabled (meaning EL2 registers take effect) in
the current security context.
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-2-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In this context, the HCR value is the effective value, and thus is
zero in secure mode. The tests for HCR.{F,I}MO are sufficient.
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-1-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The interface for object_property_add_bool is simpler,
making the code easier to understand.
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210111235740.462469-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The crypto overhead of emulating pauth can be significant for
some workloads. Add two boolean properties that allow the
feature to be turned off, turned on with the architected algorithm,
or turned on with an implementation-defined algorithm.
We need two intermediate booleans to control the state while
parsing properties lest we clobber ID_AA64ISAR1 into an invalid
intermediate state.
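On the command line this ends up looking like the following (assuming the
properties are named pauth and pauth-impdef and are exposed on the 'max'
CPU):
  # default: pauth on, architected algorithm
  -cpu max
  # keep pauth, but use QEMU's cheaper implementation-defined algorithm
  -cpu max,pauth-impdef=on
  # disable the feature entirely
  -cpu max,pauth=off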
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210111235740.462469-3-richard.henderson@linaro.org
[PMM: fixed docs typo, tweaked text to clarify that the impdef
algorithm is specific to QEMU]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Without hardware acceleration, a cryptographically strong
algorithm is too expensive for pauth_computepac.
Even with hardware accel, we are not currently expecting
to link the linux-user binaries to any crypto libraries,
and doing so would generally make the --static build fail.
So choose XXH64 as a reasonably quick and decent hash.
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210111235740.462469-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/stsquad/tags/pull-testing-and-misc-180121-2' into staging
Testing, gdbstub and semihosting patches:
- clean-ups to docker images
- drop duplicate jobs from shippable
- prettier tag generation (+gtags)
- generate browsable source tree
- more Travis->GitLab migrations
- fix checkpatch to deal with commits
- gate gdbstub tests on 8.3.1, expand tests
- support Xfer:auxv:read gdb packet
- better gdbstub cleanup
- use GDB's SVE register layout
- make arm-compat-semihosting common
- add riscv semihosting support
- add HEAPINFO, ELAPSED, TICKFREQ, TMPNAM and ISERROR to semihosting
# gpg: Signature made Mon 18 Jan 2021 10:09:11 GMT
# gpg: using RSA key 6685AE99E75167BCAFC8DF35FBD0DB095A9E2A44
# gpg: Good signature from "Alex Bennée (Master Work Key) <alex.bennee@linaro.org>" [full]
# Primary key fingerprint: 6685 AE99 E751 67BC AFC8 DF35 FBD0 DB09 5A9E 2A44
* remotes/stsquad/tags/pull-testing-and-misc-180121-2: (30 commits)
semihosting: Implement SYS_ISERROR
semihosting: Implement SYS_TMPNAM
semihosting: Implement SYS_ELAPSED and SYS_TICKFREQ
riscv: Add semihosting support for user mode
riscv: Add semihosting support
semihosting: Support SYS_HEAPINFO when env->boot_info is not set
semihosting: Change internal common-semi interfaces to use CPUState *
semihosting: Change common-semi API to be architecture-independent
semihosting: Move ARM semihosting code to shared directories
target/arm: use official org.gnu.gdb.aarch64.sve layout for registers
gdbstub: ensure we clean-up when terminated
gdbstub: drop gdbserver_cleanup in favour of gdb_exit
gdbstub: drop CPUEnv from gdb_exit()
gdbstub: add support to Xfer:auxv:read: packet
gdbstub: implement a softmmu based test
Revert "tests/tcg/multiarch/Makefile.target: Disable run-gdbstub-sha1 test"
configure: gate our use of GDB to 8.3.1 or above
test/guest-debug: echo QEMU command as well
scripts/checkpatch.pl: fix git-show invocation to include diffstat
gitlab: migrate the minimal tools and unit tests from Travis
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
# Conflicts:
# default-configs/targets/riscv32-linux-user.mak
# default-configs/targets/riscv64-linux-user.mak
Adapt the ARM semihosting support code for RISC-V. This implementation
is based on the standard for RISC-V semihosting version 0.2 as
documented in
https://github.com/riscv/riscv-semihosting-spec/releases/tag/0.2
Signed-off-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20210107170717.2098982-6-keithp@keithp.com>
Message-Id: <20210108224256.2321-17-alex.bennee@linaro.org>
The public API is now defined in
hw/semihosting/common-semi.h. do_common_semihosting takes CPUState *
instead of CPUARMState *. All internal functions have been renamed
common_semi_ instead of arm_semi_ or arm_. Aside from the API change,
there are no functional changes in this patch.
Signed-off-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20210107170717.2098982-3-keithp@keithp.com>
Message-Id: <20210108224256.2321-14-alex.bennee@linaro.org>
This commit renames two files which provide ARM semihosting support so
that they can be shared by other architectures:
1. target/arm/arm-semi.c -> hw/semihosting/common-semi.c
2. linux-user/arm/semihost.c -> linux-user/semihost.c
The build system was modified to use a new config variable,
CONFIG_ARM_COMPATIBLE_SEMIHOSTING, which has been added to the ARM
softmmu and linux-user default configs. The contents of the source
files have not been changed in this patch.
Signed-off-by: Keith Packard <keithp@keithp.com>
[AJB: rename arm-compat-semi, select SEMIHOSTING]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20210107170717.2098982-2-keithp@keithp.com>
Message-Id: <20210108224256.2321-13-alex.bennee@linaro.org>
While GDB can work with any XML description given to it, there is
special handling for SVE registers on the GDB side which makes the
user's life a little better. The changes aren't that major and all the
registers save $vg are reported the same. All that changes is:
- report org.gnu.gdb.aarch64.sve
- use gdb nomenclature for names and types
- minor re-ordering of the types to match reference
- re-enable ieee_half (as we know gdb supports it now)
- $vg is now a 64 bit int
- check $vN and $zN aliasing in test
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Luis Machado <luis.machado@linaro.org>
Message-Id: <20210108224256.2321-11-alex.bennee@linaro.org>
gdb_exit() has never needed anything from env and I doubt we are going
to start now.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210108224256.2321-8-alex.bennee@linaro.org>
At present QEMU RISC-V uses a hardcoded XML to report the feature
"org.gnu.gdb.riscv.csr" [1]. There are two major issues with the
approach being used currently:
- The XML does not specify the "regnum" field of a CSR entry, hence
consecutive numbers are used by the remote GDB client to access
CSRs. In QEMU we have to maintain a map table to convert the GDB
number to the hardware number, which is error-prone.
- The XML contains some CSRs that QEMU does not implement at all,
which causes an "E14" response to be sent to the remote GDB client.
Change to generate the CSR register list dynamically, based on the
availability presented in the CSR function table. This new approach
will reflect a correct list of CSRs that QEMU actually implements.
[1] https://sourceware.org/gdb/current/onlinedocs/gdb/RISC_002dV-Features.html#RISC_002dV-Features
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210116054123.5457-2-bmeng.cn@gmail.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
In preparation to generate the CSR register list for GDB stub
dynamically, let's add the CSR name in the CSR function table.
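Roughly, each csr_ops[] entry grows a name, along these lines (the field
layout is a sketch, not the exact struct):
    typedef struct {
        const char *name;   /* new: lets the GDB XML be built from the table */
        int (*predicate)(CPURISCVState *env, int csrno);
        int (*read)(CPURISCVState *env, int csrno, target_ulong *val);
        int (*write)(CPURISCVState *env, int csrno, target_ulong val);
    } riscv_csr_operations;
    /* e.g. [CSR_MSTATUS] = { "mstatus", any, read_mstatus, write_mstatus }, */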
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 1610427124-49887-3-git-send-email-bmeng.cn@gmail.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
In preparation to generate the CSR register list for GDB stub
dynamically, change csr_ops[] to non-static so that it can be
referenced externally.
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 1610427124-49887-2-git-send-email-bmeng.cn@gmail.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
As per the privilege specification, any access from S/U mode should fail
if no pmp region is configured.
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20201223192553.332508-1-atish.patra@wdc.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The target description is not currently implemented for the RISC-V
architecture, thus GDB won't set it properly when attaching.
The patch implements the target description response.
Signed-off-by: Sylvain Pelissier <sylvain.pelissier@gmail.com>
Reviewed-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210106204141.14027-1-sylvain.pelissier@gmail.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Vendor specific CPU definitions are not very useful. Use the
ISA definitions instead, which are more helpful when looking
at the various CPU definitions.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210112210152.2072996-4-f4bug@amsat.org>
nanoMIPS is not a CPU, but an ISA. The nanoMIPS ISA is already
defined as ISA_NANOMIPS32.
Remove this incorrect definition and update the single CPU
implementing it, the I7200.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210112210152.2072996-3-f4bug@amsat.org>
Commit 823f2897bd ("target/mips: Disable R5900 support")
removed the single CPU using the CPU_R5900 definition.
As it is unused, remove it.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210112210152.2072996-2-f4bug@amsat.org>
LL/SC opcodes have been removed in Release 6.
Add a single decodetree entry for the opcodes, triggering
Reserved Instruction if ever used.
Remove unreachable check_insn_opc_removed() calls.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201208203704.243704-14-f4bug@amsat.org>
LLD/SCD opcodes have been removed in Release 6.
Add a single decodetree entry for the opcodes, triggering
Reserved Instruction if ever used.
Remove unreachable check_insn_opc_removed() calls.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201208203704.243704-13-f4bug@amsat.org>
LDL/LDR/SDL/SDR opcodes have been removed in Release 6.
Add a single decodetree entry for the opcodes, triggering
Reserved Instruction if ever used.
Remove unreachable check_insn_opc_removed() calls.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201208203704.243704-12-f4bug@amsat.org>
LWLE/LWRE/SWLE/SWRE (EVA) opcodes have been removed in
Release 6. Add a single decodetree entry for the opcodes,
triggering Reserved Instruction if ever used.
Remove unreachable check_insn_opc_removed() calls.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201208203704.243704-11-f4bug@amsat.org>
LWL/LWR/SWL/SWR opcodes have been removed in Release 6.
Add a single decodetree entry for the opcodes, triggering
Reserved Instruction if ever used.
Remove unreachable check_insn_opc_removed() calls.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201208203704.243704-10-f4bug@amsat.org>
CACHE/PREF opcodes have been removed in Release 6.
Add a single decodetree entry for the opcodes, triggering
Reserved Instruction if ever used.
Remove unreachable check_insn_opc_removed() calls.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201208203704.243704-9-f4bug@amsat.org>
The COP1x opcode has been removed in Release 6.
Add a single decodetree entry for it, triggering
Reserved Instruction if ever used.
Remove unreachable check_insn_opc_removed() call.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201208203704.243704-8-f4bug@amsat.org>
Special2 opcodes have been removed in Release 6.
Add a single decodetree entry for the whole opcode class,
triggering Reserved Instruction if ever used.
Remove unreachable check_insn_opc_removed() call.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201208203704.243704-7-f4bug@amsat.org>
Since we switched to decodetree-generated processing,
we can remove this now unreachable code.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201208203704.243704-6-f4bug@amsat.org>
LSA and DLSA opcodes are also available with MIPS Release 6.
Introduce the decodetree config files and call the decode()
helpers in the main decode_opc() loop.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201215225757.764263-24-f4bug@amsat.org>
Add the LSA opcode to the MSA32 decodetree config, add DLSA
to a new config for the MSA64 ASE, and call decode_msa64()
in the main decode_opc() loop.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201215225757.764263-23-f4bug@amsat.org>
Extract gen_lsa() from translate.c and explode it as
gen_LSA() and gen_DLSA().
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201215225757.764263-22-f4bug@amsat.org>
Now that we can decode the MSA ASE with decode_ase_msa(),
use it and remove the previous code, now unreachable.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201215225757.764263-21-f4bug@amsat.org>
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Introduce the 'msa32' decodetree config for the 32-bit MSA ASE.
We start by decoding:
- the branch instructions,
- all instructions based on the MSA opcode.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201215225757.764263-20-f4bug@amsat.org>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Simplify gen_check_zero_element() by passing the TCGCond
argument along.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201215225757.764263-25-f4bug@amsat.org>
Extract 2200 lines from the huge translate.c to a new file,
'msa_translate.c'. As there are too many inter-dependencies
we don't compile it as another object yet, but keep including
it in the big translate.o. We gain in code maintainability.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201120210844.2625602-5-f4bug@amsat.org>
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Make gen_msa() and gen_msa_branch() public declarations
so we can keep calling them once extracted from the big
translate.c in the next commit.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201215225757.764263-18-f4bug@amsat.org>
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Keep all MSA-related code altogether.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20201120210844.2625602-4-f4bug@amsat.org>
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
We have ~400 lines of MSA helpers in the generic op_helper.c,
move them with the other helpers in 'msa_helper.c'.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201123204448.3260804-5-f4bug@amsat.org>
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
translate_init.c.inc mostly contains CPU definitions.
msa_reset() doesn't belong here; move it with the MSA
helpers.
One comment style is updated to avoid a checkpatch.pl warning.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201215225757.764263-15-f4bug@amsat.org>
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
In preparation for using the decodetree script, explode
gen_msa_branch() as follows:
- OPC_BZ_V -> BxZ_V(EQ)
- OPC_BNZ_V -> BxZ_V(NE)
- OPC_BZ_[BHWD] -> BxZ(false)
- OPC_BNZ_[BHWD] -> BxZ(true)
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Message-Id: <20201208003702.4088927-10-f4bug@amsat.org>
The gen_msa*() methods don't use the "CPUMIPSState *env"
argument. Remove it to simplify.
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Message-Id: <20201208003702.4088927-9-f4bug@amsat.org>
The msa_wr_d[] registers are only initialized/used by MSA.
They are declared static. We want to move them to the new
'msa_translate.c' unit in a few commits, without having to
declare them global (with extern).
First, extract the initialization logic of the MSA registers
from the generic initialization. We will later move this
function along with the MSA registers to the new C unit.
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Message-Id: <20201208003702.4088927-8-f4bug@amsat.org>
Commits 863f264d10 ("add msa_reset(), global msa register") and
cb269f273f ("fix multiple TCG registers covering same data")
removed the FPU scalar registers and replaced them with aliases to
the MSA vector registers.
It is confusing to have FPU registers displayed with MSA
register names, even when the MSA ASE is not present.
Instead of aliasing FPU registers to the MSA ones (even when MSA
is absent), we now alias the MSA ones to the FPU ones (only when
MSA is present).
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Message-Id: <20201208003702.4088927-7-f4bug@amsat.org>
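A hypothetical sketch of the new aliasing direction (plain ints stand
in for QEMU's TCG register handles):

    #include <stdbool.h>

    static int fpu_f64[32];    /* FPU doubles: always created */
    static int msa_wr_d[64];   /* MSA register halves: only when MSA exists */

    static void msa_translate_init(bool msa_present)
    {
        if (!msa_present) {
            return;            /* no MSA: no MSA register names at all */
        }
        for (int i = 0; i < 32; i++) {
            /* The lower 64 bits of each MSA register alias the FPU
             * register; the upper half gets its own storage (omitted). */
            msa_wr_d[i * 2] = fpu_f64[i];
        }
    }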
We don't use ASE_MSA anymore (it has been replaced by
ase_msa_available(), which checks the MSAP bit of CP0_Config3).
Remove it.
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Message-Id: <20201208003702.4088927-6-f4bug@amsat.org>
Only decode MSA opcodes if MSA is present (implemented).
Now that check_msa_access() will only be called if MSA is
present, the only way to have MIPS_HFLAG_MSA unset is if
MSA is disabled (bit CP0C5_MSAEn cleared, see previous
commit). Therefore we can remove the 'reserved instruction'
exception.
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Message-Id: <20201208003702.4088927-5-f4bug@amsat.org>
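A simplified sketch of the resulting control flow (hypothetical
function name, and plain bools standing in for the CPU state):

    #include <stdbool.h>

    /* Returns true when the opcode was handled as MSA. */
    static bool decode_ase_msa(bool msa_implemented, bool msa_enabled)
    {
        if (!msa_implemented) {
            /* MSA absent: do not decode, let the caller raise RI. */
            return false;
        }
        if (!msa_enabled) {
            /* MSA present but CP0C5_MSAEn clear: the access check
             * raises the "MSA disabled" exception instead of RI. */
            return true;
        }
        /* ... translate the MSA instruction ... */
        return true;
    }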
MSA presence is expressed by the MSAP bit of CP0_Config3.
We don't need to check anything else.
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Message-Id: <20201208003702.4088927-4-f4bug@amsat.org>
Call msa_reset() unconditionally, but only reset
the MSA registers if MSA is implemented.
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Message-Id: <20201208003702.4088927-3-f4bug@amsat.org>
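Roughly, the reset path then looks like this (sketch with a stand-in
for the vector register file):

    #include <stdbool.h>
    #include <string.h>

    static unsigned char msa_regs[32][16];   /* stand-in: 32 x 128-bit */

    static void msa_reset(bool msa_implemented)
    {
        if (!msa_implemented) {
            return;    /* called unconditionally, bails out early */
        }
        memset(msa_regs, 0, sizeof(msa_regs));
        /* ... the real code also initializes the MSA control/status here ... */
    }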
Instead of accessing CP0_Config3 directly and checking
the 'MSA Present' bit, introduce an explicit helper,
making the code easier to read.
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Message-Id: <20201208003702.4088927-2-f4bug@amsat.org>
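The helper boils down to a single bit test on CP0_Config3. A standalone
sketch (the real helper takes the CPU state; the MSAP bit position is an
assumption here):

    #include <stdbool.h>
    #include <stdint.h>

    #define CP0C3_MSAP 28   /* Config3.MSAP bit position (assumed) */

    static bool ase_msa_available(uint32_t cp0_config3)
    {
        return cp0_config3 & (1u << CP0C3_MSAP);
    }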
To allow compiling 64-bit-specific translation code more
generically (and removing some #ifdef'ry), allow compiling
check_mips_64() on 32-bit targets.
If it is ever called on a 32-bit target, we simply emit a
reserved instruction exception.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Message-Id: <20201215225757.764263-3-f4bug@amsat.org>
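Conceptually the function becomes something like the sketch below
(stand-in context and exception helper, not the real prototypes):

    #include <stdbool.h>

    struct DisasCtx { bool is_64bit_cpu; };                /* stand-in */

    static void gen_reserved_instruction(struct DisasCtx *ctx) { (void)ctx; }

    /* Builds on 32-bit targets too; there it can only raise RI. */
    static void check_mips_64(struct DisasCtx *ctx)
    {
        if (!ctx->is_64bit_cpu) {
            gen_reserved_instruction(ctx);
        }
    }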
As we will slowly move to decodetree-generated decoders,
extract the legacy decoding from decode_opc() into
decode_opc_legacy(), so new decoders can be added to
decode_opc() while old code is gradually removed from
decode_opc_legacy().
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201215225757.764263-2-f4bug@amsat.org>
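The dispatch pattern, sketched with placeholder decoders:

    #include <stdbool.h>
    #include <stdint.h>

    /* New decodetree-generated decoders get chained here over time. */
    static bool decode_isa_new(uint32_t insn) { (void)insn; return false; }

    /* The old hand-written decoder, shrinking as opcodes migrate out. */
    static void decode_opc_legacy(uint32_t insn) { (void)insn; }

    static void decode_opc(uint32_t insn)
    {
        if (decode_isa_new(insn)) {
            return;
        }
        decode_opc_legacy(insn);
    }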
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201206233949.3783184-20-f4bug@amsat.org>
Extract FPU-specific definitions that can be used by
ISA / ASE / extension code into the translate.h header.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201214183739.500368-16-f4bug@amsat.org>
Some FPU / coprocessor translation functions / registers can be
used by ISA / ASE / extension code outside of the big translate.c file.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201214183739.500368-15-f4bug@amsat.org>
gen_reserved_instruction() is easier to read than
generate_exception_end(ctx, EXCP_RI); use it instead.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201214183739.500368-12-f4bug@amsat.org>
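The new helper is essentially a thin, better-named wrapper; a sketch
with placeholder types and an illustrative EXCP_RI value:

    struct DisasCtx;                       /* opaque stand-in */

    #define EXCP_RI 20                     /* illustrative value only */

    static void generate_exception_end(struct DisasCtx *ctx, int excp)
    {
        (void)ctx;
        (void)excp;                        /* the real code emits TCG ops */
    }

    static void gen_reserved_instruction(struct DisasCtx *ctx)
    {
        generate_exception_end(ctx, EXCP_RI);
    }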
generate_exception_err() with err=0 is simply
generate_exception_end(); use the simpler form.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201214183739.500368-11-f4bug@amsat.org>
Some CPU translation functions / registers / macros and
definitions can be used by ISA / ASE / extension code outside
of the big translate.c file. Declare them in "translate.h".
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201207235539.4070364-3-f4bug@amsat.org>
Extract DisasContext to a new 'translate.h' header so
different translation files (ISA, ASE, extensions)
can use it.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201207235539.4070364-2-f4bug@amsat.org>
This file is not TCG-specific: it contains CPU definitions
and is consumed by cpu.c. Rename it accordingly.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201214183739.500368-10-f4bug@amsat.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201206233949.3783184-15-f4bug@amsat.org>
We are going to move this code; fix its style first.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201206233949.3783184-14-f4bug@amsat.org>
This file contains functions related to TLB management;
rename it to 'tlb_helper.c'.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201206233949.3783184-13-f4bug@amsat.org>
The rest of helper.c is TLB-related. Extract the non-TLB-specific
functions to cpu.c, so we can rename helper.c to tlb_helper.c
in the next commit.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201214183739.500368-6-f4bug@amsat.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201214183739.500368-5-f4bug@amsat.org>
To make the #ifdef'ry easier to follow, add a comment after
#endif (see the example below).
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201214183739.500368-4-f4bug@amsat.org>
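For example (guard name shown for illustration only):

    #ifndef TARGET_MIPS64
    /* ... 32-bit-only code ... */
    #endif /* !TARGET_MIPS64 */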
Extract FPU specific helpers from "internal.h" to "fpu_helper.h".
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20201120210844.2625602-2-f4bug@amsat.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201214183739.500368-2-f4bug@amsat.org>
The MIPS ISA release 6 is common to 32/64-bit CPUs.
To avoid holes in the insn_flags type, update the
definition with the next available bit.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210104221154.3127610-16-f4bug@amsat.org>
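The same pattern repeats for the releases in the following commits.
As a sketch of the idea, the shared ISA levels take consecutive bits
so the flags type keeps no unused gaps (bit assignments below are
illustrative, not the real values):

    #include <stdint.h>

    /* Illustrative flag layout only. */
    #define ISA_MIPS_R1  (1ULL << 0)
    #define ISA_MIPS_R2  (1ULL << 1)
    #define ISA_MIPS_R3  (1ULL << 2)
    #define ISA_MIPS_R5  (1ULL << 3)
    #define ISA_MIPS_R6  (1ULL << 4)   /* next available bit, no hole left */

    typedef uint64_t insn_flags;       /* sketch of the flags container */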
The MIPS ISA release 5 is common to 32/64-bit CPUs.
To avoid holes in the insn_flags type, update the
definition with the next available bit.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210104221154.3127610-15-f4bug@amsat.org>
The MIPS ISA release 3 is common to 32/64-bit CPUs.
To avoid holes in the insn_flags type, update the
definition with the next available bit.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210104221154.3127610-14-f4bug@amsat.org>
The MIPS ISA release 2 is common to 32/64-bit CPUs.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210104221154.3127610-13-f4bug@amsat.org>
The MIPS ISA release '1' is common to 32/64-bit CPUs.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210104221154.3127610-12-f4bug@amsat.org>
Use the single ISA_MIPS32R6 definition to check whether the
Release 6 ISA is supported, regardless of whether the CPU is
32-bit or 64-bit.
For now we keep '32' in the definition name; we will rename it
to ISA_MIPS_R6 in a few commits.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210104221154.3127610-11-f4bug@amsat.org>
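Usage then reduces to a single flag test, whatever the CPU's register
width; a sketch with a stand-in context and a hypothetical helper name:

    #include <stdbool.h>
    #include <stdint.h>

    #define ISA_MIPS32R6 (1ULL << 4)          /* illustrative bit, as above */

    struct DisasCtx { uint64_t insn_flags; }; /* stand-in context */

    static bool cpu_has_isa_r6(const struct DisasCtx *ctx)
    {
        return ctx->insn_flags & ISA_MIPS32R6;
    }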
Use the single ISA_MIPS32R5 definition to check whether the
Release 5 ISA is supported, regardless of whether the CPU is
32-bit or 64-bit.
For now we keep '32' in the definition name; we will rename it
to ISA_MIPS_R5 in a few commits.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210104221154.3127610-10-f4bug@amsat.org>