Implement the MVE VMULL (polynomial) insn. Unlike Neon, this comes
in two flavours: 8x8->16 and a 16x16->32. Also unlike Neon, the
inputs are in either the low or the high half of each double-width
element.
The assembler for this insn indicates the size with "P8" or "P16",
encoded into bit 28 as size = 0 or 1. We choose to follow the
same encoding as VQDMULL and decode this into a->size as MO_16
or MO_32 indicating the size of the result elements. This then
carries through to the helper function names where it then
matches up with the existing pmull_h() which does an 8x8->16
operation and a new pmull_w() which does the 16x16->32.
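For illustration only, a minimal self-contained sketch of the per-element
operation (a carry-less multiply over GF(2)); the function name is invented
for this example and is not QEMU's helper:

    #include <stdint.h>

    /* 8x8->16 carry-less (polynomial) multiply: the per-element operation
     * behind a pmull_h()-style helper. */
    static uint16_t clmul_8x8(uint8_t a, uint8_t b)
    {
        uint16_t result = 0;

        for (int i = 0; i < 8; i++) {
            if (b & (1 << i)) {
                result ^= (uint16_t)a << i;   /* XOR instead of add: GF(2) */
            }
        }
        return result;
    }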
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
For vector loads, predicated elements are zeroed, instead of
retaining their previous values (as happens for most data
processing operations). This means we need to distinguish
"beat not executed due to ECI" (don't touch destination
element) from "beat executed but predicated out" (zero
destination element).
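As a rough illustration only (not QEMU's helper; the mask names are invented
for this sketch), the two cases look like this for a byte vector load:

    #include <stdint.h>

    /* eci_mask: bytes whose beat executes at all;
     * pred_mask: bytes that pass predication. */
    static void vldr_sketch(uint8_t *dest, const uint8_t *mem,
                            uint16_t eci_mask, uint16_t pred_mask)
    {
        for (int b = 0; b < 16; b++) {
            if (!(eci_mask & (1u << b))) {
                continue;              /* beat not executed: keep old value */
            }
            /* beat executed: load the byte, or zero it if predicated out */
            dest[b] = (pred_mask & (1u << b)) ? mem[b] : 0;
        }
    }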
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
We were not paying attention to the ECI state when advancing the VPT
state. Architecturally, VPT state advance happens for every beat
(see the pseudocode VPTAdvance()), so on every beat the 4 bits of
VPR.P0 corresponding to the current beat are inverted if required,
and at the end of beats 1 and 3 the VPR MASK fields are updated.
This means that if the ECI state says we should not be executing all
4 beats then we need to skip some of the updating of the VPR that we
currently do in mve_advance_vpt().
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
In some situations we need a mask telling us which parts of the
vector correspond to beats that are not being executed because of
ECI, separately from the combined "which bytes are predicated away"
mask. Factor this mask calculation out of mve_element_mask() into
its own function.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
In mve_element_mask(), we calculate a mask for tail predication which
should have a number of 1 bits based on the value of LR. However,
our MAKE_64BIT_MASK() macro has undefined behaviour when passed a
zero length. Special case this to give the all-zeroes mask we
require.
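For reference, a mask macro of this general shape has the problem because a
right shift by 64 is undefined in C; the definition below is a typical one,
not necessarily QEMU's exact macro, and the wrapper shows the zero-length
special case:

    #include <stdint.h>

    #define MAKE_64BIT_MASK(shift, length) \
        (((~0ULL) >> (64 - (length))) << (shift))   /* UB when length == 0 */

    static uint64_t tail_mask(unsigned length)
    {
        /* Special-case length == 0 to get the all-zeroes mask safely. */
        return length == 0 ? 0 : MAKE_64BIT_MASK(0, length);
    }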
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
We got an edge case wrong in the 48-bit SQRSHRL implementation: if
the shift is to the right, then although it always makes the result
smaller than the input value, the result might not be within the
48-bit range it is supposed to be in, if the input had some bits in
[63..48] set and the shift didn't bring all of those within the
[47..0] range.
Handle this similarly to the way we already do for this case in
do_uqrshl48_d(): extend the calculated result from 48 bits,
and return that if not saturating or if it doesn't change the
result; otherwise fall through to return a saturated value.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
In do_sqrshl48_d() and do_uqrshl48_d() we got some of the edge
cases wrong and failed to saturate correctly:
(1) In do_sqrshl48_d() we used the same code that do_sqrshl_bhs()
does to obtain the saturated most-negative and most-positive 48-bit
signed values for the large-shift-left case. This gives (1 << 47)
for saturate-to-most-negative, but we weren't sign-extending this
value to the 64-bit output as the pseudocode requires.
(2) For left shifts by less than 48, we copied the "8/16 bit" code
from do_sqrshl_bhs() and do_uqrshl_bhs(). This doesn't do the right
thing because it assumes the C type we're working with is at least
twice the number of bits we're saturating to (so that a shift left by
bits-1 can't shift anything off the top of the value). This isn't
true for bits == 48, so we would incorrectly return 0 rather than the
most-positive value for situations like "shift (1 << 44) right by
20". Instead check for saturation by doing the shift and signextend
and then testing whether shifting back left again gives the original
value.
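A self-contained sketch of that check (names are illustrative, and a positive
shift is taken to mean shift left, as in the MVE long-shift helpers):

    #include <stdint.h>
    #include <stdbool.h>

    /* Sign-extend the low 'length' bits of 'value' (same idea as sextract64). */
    static int64_t sext_low(uint64_t value, int length)
    {
        return (int64_t)(value << (64 - length)) >> (64 - length);
    }

    static int64_t sat_shl48_sketch(int64_t src, int shift, bool *sat)
    {
        int64_t extval = sext_low((uint64_t)src << shift, 48);

        if (extval >> shift == src) {
            return extval;               /* nothing was lost: no saturation */
        }
        *sat = true;
        return src >= 0 ? (1LL << 47) - 1 : -(1LL << 47);
    }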
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
In the MVE helpers for the narrowing operations (DO_VSHRN and
DO_VSHRN_SAT) we were using the wrong bits of the predicate mask for
the 'top' versions of the insn. This is because the loop works over
the double-sized input elements and shifts the predicate mask by that
many bits each time, but when we write out the half-sized output we
must look at the mask bits for whichever half of the element we are
writing to.
Correct this by shifting the whole mask right by ESIZE bits for the
'top' insns. This allows us also to simplify the saturation bit
checking (where we had noticed that we needed to look at a different
mask bit for the 'top' insn).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
A cut-and-paste error meant we handled signed VADDV like
unsigned VADDV; fix the type used.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
In the MVE shift-and-insert insns, we special case VSLI by 0
and VSRI by <dt>. VSRI by <dt> means "don't update the destination",
which is what we've implemented. However VSLI by 0 is "set
destination to the input", so we don't want to use the same
special-casing that we do for VSRI by <dt>.
Since the generic logic gives the right answer for a shift
by 0, just use that.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Include the MVE VPR register value in the CPU dumps produced by
arm_cpu_dump_state() if we are printing FPU information. This
makes it easier to interpret debug logs when predication is
active.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Although the architecture doesn't define it as an alias, VMOVL
(vector move long) is encoded as a VSHLL with a zero shift.
Add a comment in the decode file noting that we handle VMOVL
as part of VSHLL.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
~0UL has 64 bits on Linux and 32 bits on Windows.
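A quick standalone demonstration of the difference (LP64 vs LLP64) and the
portable spelling:

    #include <stdio.h>

    int main(void)
    {
        /* unsigned long is 64-bit on LP64 (Linux) but 32-bit on LLP64
         * (Windows), so ~0UL is not a portable all-ones 64-bit mask;
         * ~0ULL (or UINT64_MAX) is. */
        printf("sizeof(unsigned long) = %zu\n", sizeof(unsigned long));
        printf("~0UL  = %#lx\n", ~0UL);
        printf("~0ULL = %#llx\n", ~0ULL);
        return 0;
    }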
Fixes: https://gitlab.com/qemu-project/qemu/-/issues/512
Reported-by: Volker Rümelin <vr_qemu@t-online.de>
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210812111056.26926-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Raised exceptions don't return, so mark the helper with noreturn.
Fixes: 032c76bc6f ("nios2: Add architecture emulation support")
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210729101315.2318714-1-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The shift constant was incorrect, causing int_prio to always be zero.
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
[Rewritten commit message since v1 had already been included. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
VMRUN exits with SVM_EXIT_ERR if either:
* The event injected has a reserved type.
* The event injected is of type 3 (exception) and the specified
vector does not correspond to an exception.
This does not fix the entire exc_inj test in kvm-unit-tests.
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210725090855.19713-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Coverity reports potential NULL pointer dereference in
get_supported_hv_cpuid_legacy() when 'cs->kvm_state' is NULL. While
'cs->kvm_state' can indeed be NULL in hv_cpuid_get_host(),
kvm_hyperv_expand_features() makes sure that it only happens when
KVM_CAP_SYS_HYPERV_CPUID is supported and KVM_CAP_SYS_HYPERV_CPUID
implies KVM_CAP_HYPERV_CPUID so get_supported_hv_cpuid_legacy() is
never really called. Add asserts to strengthen the protection against
broken KVM behavior.
Coverity: CID 1458243
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210716115852.418293-1-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In commit 8f0a4b6a9b, we started to require L=0 for ppc32 to match what
The Programming Environments Manual says:
"For 32-bit implementations, the L field must be cleared, otherwise
the instruction form is invalid."
The stricter behavior, however, broke AROS boot on sam460ex, which is a
regression from 6.0. This patch partially reverts the change, raising
the exception only for CPUs known to require L=0 (e500 and e500mc) and
logging a guest error for other cases.
Both behaviors are acceptable by the PowerISA, which allows "the system
illegal instruction error handler to be invoked or yield boundedly
undefined results."
Reported-by: BALATON Zoltan <balaton@eik.bme.hu>
Fixes: 8f0a4b6a9b ("target/ppc: Move cmp/cmpi/cmpl/cmpli to decodetree")
Tested-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20210720135507.2444635-1-matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Mirror the behaviour of /proc/sys/abi/sve_default_vector_length
under the real linux kernel. We have no way of passing along
a real default across exec like the kernel can, but this is a
decent way of adjusting the startup vector length of a process.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/482
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210723203344.968563-4-richard.henderson@linaro.org
[PMM: tweaked docs formatting, document -1 special-case,
added fixup patch from RTH mentioning QEMU's maximum veclen.]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename from sve_zcr_get_valid_len and make accessible
from outside of helper.c.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210723203344.968563-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Currently, our only caller is sve_zcr_len_for_el, which has
already masked the length extracted from ZCR_ELx, so the
masking done here is a nop. But we will shortly have uses
from other locations, where the length will be unmasked.
Saturate the length to ARM_MAX_VQ instead of truncating to
the low 4 bits.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210723203344.968563-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Missed in commit f3478392 "docs: Move deprecation, build
and license info out of system/"
Signed-off-by: Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210723065828.1336760-1-maozhongyi@cmss.chinamobile.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For M-profile, we weren't reporting alignment faults triggered by the
generic TCG code correctly to the guest. These get passed into
arm_v7m_cpu_do_interrupt() as an EXCP_DATA_ABORT with an A-profile
style exception.fsr value of 1. We didn't check for this, and so
they fell through into the default of "assume this is an MPU fault"
and were reported to the guest as a data access violation MPU fault.
Report these alignment faults as UsageFaults which set the UNALIGNED
bit in the UFSR.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-4-peter.maydell@linaro.org
In do_v7m_exception_exit(), we perform various checks as part of
performing the exception return. If one of these checks fails, the
architecture requires that we take an appropriate exception on the
existing stackframe. We implement this by calling
v7m_exception_taken() to set up to take the new exception, and then
immediately returning from do_v7m_exception_exit() without proceeding
any further with the unstack-and-exception-return process.
In a couple of checks that are new in v8.1M, we forgot the "return"
statement, with the effect that if bad code in the guest tripped over
these checks we would set up to take a UsageFault exception but then
blunder on trying to also unstack and return from the original
exception, with the probable result that the guest would crash.
Add the missing return statements.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-3-peter.maydell@linaro.org
For M-profile, unlike A-profile, the low 2 bits of SP are defined to be
RES0H, which is to say that they must be hardwired to zero so that
guest attempts to write non-zero values to them are ignored.
Implement this behaviour by masking out the low bits:
* for writes to r13 by the gdbstub
* for writes to any of the various flavours of SP via MSR
* for writes to r13 via store_reg() in generated code
Note that all the direct uses of cpu_R[] in translate.c are in places
where the register is definitely not r13 (usually because that has
been checked for as an UNDEFINED or UNPREDICTABLE case and handled as
UNDEF).
All the other writes to regs[13] in C code are either:
* A-profile only code
* writes of values we can guarantee to be aligned, such as
- writes of previous-SP-value plus or minus a 4-aligned constant
- writes of the value in an SP limit register (which we already
enforce to be aligned)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-2-peter.maydell@linaro.org
Merge remote-tracking branch 'remotes/quic/tags/pull-hex-20210725' into staging
The Hexagon target was silently failing the SIGSEGV test because
the signal handler was not called.
Patch 1/2 fixes the Hexagon target
Patch 2/2 drops include qemu.h from target/hexagon/op_helper.c
**** Changes in v2 ****
Drop changes to linux-test.c due to intermittent failures on riscv
# gpg: Signature made Sun 25 Jul 2021 22:39:38 BST
# gpg: using RSA key 7B0244FB12DE4422
# gpg: Good signature from "Taylor Simpson (Rock on) <tsimpson@quicinc.com>" [undefined]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 3635 C788 CE62 B91F D4C5 9AB4 7B02 44FB 12DE 4422
* remotes/quic/tags/pull-hex-20210725:
target/hexagon: Drop include of qemu.h
Hexagon (target/hexagon) remove put_user_*/get_user_*
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Some cpu properties have to be set only for cpu models in builtin_x86_defs,
registered with x86_register_cpu_model_type, and not for
cpu models "base", "max", and the subclass "host".
These properties are the ones set by the function x86_cpu_apply_props
(also including kvm_default_props and tcg_default_props),
and the "vendor" property for the KVM and HVF accelerators.
After recent refactoring of cpu, which also affected these properties,
they were instead set unconditionally for all x86 cpus.
This has been detected as a bug with nested virtualization on AMD with cpu "host",
as svm was not turned on by default, due to the wrongful setting of
kvm_default_props via x86_cpu_apply_props, which set svm to "off".
Rectify the bug introduced in commit "i386: split cpu accelerators"
and document the functions that are builtin_x86_defs-only.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Tested-by: Alexander Bulekov <alxndr@bu.edu>
Fixes: f5cc5a5c ("i386: split cpu accelerators from cpu.c,"...)
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/477
Message-Id: <20210723112921.12637-1-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
All MBZ bits in CR3 must be zero (APM2 15.5).
Added checks in both helper_vmrun and helper_write_crN.
When EFER.LMA is zero the upper 32 bits need to be zeroed.
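A hedged sketch of the intended checks; the mask value and names are
illustrative, not QEMU's definitions:

    #include <stdint.h>
    #include <stdbool.h>

    #define CR3_MBZ_BITS 0xfff0000000000000ULL   /* example: bits 63:52 */

    /* Returns false when the value must be rejected (e.g. exit with
     * SVM_EXIT_ERR at VMRUN, or raise #GP on a mov-to-CR3). */
    static bool cr3_write_ok(uint64_t *cr3, bool efer_lma)
    {
        if (!efer_lma) {
            *cr3 &= 0xffffffffULL;   /* outside long mode, keep only 32 bits */
        }
        return (*cr3 & CR3_MBZ_BITS) == 0;
    }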
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210723112740.45962-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
EFER.SVME has to be set, and EFER reserved bits must
be zero.
In addition the combinations
* EFER.LMA or EFER.LME is non-zero and the processor does not support LM
* non-zero EFER.LME and CR0.PG and zero CR4.PAE
* non-zero EFER.LME and CR0.PG and zero CR0.PE
* non-zero EFER.LME, CR0.PG, CR4.PAE, CS.L and CS.D
are all invalid.
(AMD64 Architecture Programmer's Manual, V2, 15.5)
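A sketch of those combinations as a single validity check; the bit positions
follow the architectural EFER layout, but the code is illustrative and is not
QEMU's implementation:

    #include <stdint.h>
    #include <stdbool.h>

    #define EFER_SVME (1ULL << 12)
    #define EFER_LME  (1ULL << 8)
    #define EFER_LMA  (1ULL << 10)

    /* The EFER reserved-bits check is omitted here for brevity. */
    static bool efer_state_valid(uint64_t efer, bool cpu_has_lm,
                                 bool cr0_pg, bool cr0_pe, bool cr4_pae,
                                 bool cs_l, bool cs_d)
    {
        if (!(efer & EFER_SVME)) {
            return false;
        }
        if ((efer & (EFER_LME | EFER_LMA)) && !cpu_has_lm) {
            return false;
        }
        if ((efer & EFER_LME) && cr0_pg && !cr4_pae) {
            return false;
        }
        if ((efer & EFER_LME) && cr0_pg && !cr0_pe) {
            return false;
        }
        if ((efer & EFER_LME) && cr0_pg && cr4_pae && cs_l && cs_d) {
            return false;
        }
        return true;
    }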
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210721152651.14683-3-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
All MBZ bits in CR4 must be zero. (APM2 15.5)
Added reserved bitmask and added checks in both
helper_vmrun and helper_write_crN.
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210721152651.14683-2-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The APM2 states that the processor takes a virtual INTR interrupt
if V_IRQ and V_INTR_PRIO indicate that there is a virtual interrupt pending
whose priority is greater than the value in V_TPR.
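As an illustration of the condition, with field positions following the APM's
VMCB int_ctl layout; treat the extraction code as a sketch, not QEMU's
implementation:

    #include <stdint.h>
    #include <stdbool.h>

    static bool virtual_intr_pending(uint64_t int_ctl)
    {
        unsigned v_tpr       = int_ctl & 0x0f;         /* bits 3:0   */
        bool     v_irq       = int_ctl & (1u << 8);    /* bit 8      */
        unsigned v_intr_prio = (int_ctl >> 16) & 0x0f; /* bits 19:16 */
        bool     v_ign_tpr   = int_ctl & (1u << 20);   /* bit 20     */

        return v_irq && (v_ign_tpr || v_intr_prio > v_tpr);
    }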
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210721152651.14683-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The qemu.h file is a CONFIG_USER_ONLY header; it doesn't appear on
the include path for softmmu builds. Currently we include it
unconditionally in target/hexagon/op_helper.c. We used to need it
for the put_user_*() and get_user_*() functions, but now that we have
removed the uses of those from op_helper.c, the only reason it's
still there is that we're implicitly relying on it pulling in some
other headers.
Explicitly include the headers we need for other functions, and drop
the include of qemu.h.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20210717103017.20491-1-peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Replace put_user_* with cpu_st*_data_ra
Replace get_user_* with cpu_ld*_data_ra
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <1626384156-6248-2-git-send-email-tsimpson@quicinc.com>
The hook is now unused, with breakpoints checked outside translation.
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Ensure at registration that all breakpoints are in
code space, not data space.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Return false for RF set, as we do in i386_tr_breakpoint_check.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reuse the code at the bottom of helper_check_breakpoints,
which is what we currently call from *_tr_breakpoint_check.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We are certain of a page crossing here, entering the
PALcode image, so the call to use_goto_tb that should
have been here will never succeed.
We are shortly going to add an assert to tcg_gen_goto_tb
that would trigger for this case.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Always provide the atomic interface using TCGMemOpIdx oi
and uintptr_t retaddr. Rename from helper_* to cpu_* so
as to (mostly) match the exec/cpu_ldst.h functions, and
to emphasize that they are not callable from TCG directly.
Tested-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The Neon and SVE decoders use private 'plus1' functions to implement
"add one" for the !function decoder syntax. We have a generic
"plus_1" function in translate.h, so use that instead.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210715095341.701-1-peter.maydell@linaro.org
The functions vmsa_ttbcr_write and vmsa_ttbcr_raw_write expect
the offset to be for the complete TCR structure, not the offset
to the low 32-bits of a uint64_t. Using offsetoflow32 in this
case breaks big-endian hosts.
For TTBCR2, we do want the high 32-bits of a uint64_t.
Use cp15.tcr_el[*].raw_tcr as the offsetofhigh32 argument to
clarify this.
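A standalone demonstration of why "the offset of the low 32 bits of a
uint64_t" depends on host endianness:

    #include <stdint.h>
    #include <stdio.h>

    union split {
        uint64_t q;
        uint32_t w[2];
    };

    int main(void)
    {
        union split s = { .q = 0x1122334455667788ULL };
        /* On little-endian hosts the low half is w[0]; on big-endian
         * hosts it is w[1], i.e. it sits 4 bytes further in. */
        unsigned low_index = (s.w[0] == 0x55667788u) ? 0 : 1;

        printf("low 32 bits live at byte offset %u\n", low_index * 4);
        return 0;
    }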
Buglink: https://gitlab.com/qemu-project/qemu/-/issues/187
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210709230621.938821-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The specification mandates for certain bits to be hardwired in the
hypervisor delegation registers. This was not being enforced.
Signed-off-by: Jose Martins <josemartins90@gmail.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210522155902.374439-1-josemartins90@gmail.com
[ Changes by AF:
- Improve indentation
- Convert delegable_excps to a #define to avoid failures with GCC 8
]
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The following check:
    if (!env->debugger && !riscv_cpu_fp_enabled(env)) {
        return -RISCV_EXCP_ILLEGAL_INST;
    }
is redundant in fflags/frm/fcsr read/write routines, as the check was
already done in fs().
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210627120604.11116-1-bmeng.cn@gmail.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
%s/CSP/CSR
%s/thie/the
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210627115716.3552-1-bmeng.cn@gmail.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Merge remote-tracking branch 'remotes/rth-gitlab/tags/pull-misc-20210713' into staging
Cleanup alpha, hppa, or1k wrt tcg_constant_tl.
Implement x86 fcs:fip, fds:fdp.
Trivial x86 watchpoint cleanup.
# gpg: Signature made Tue 13 Jul 2021 17:36:29 BST
# gpg: using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg: issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F
* remotes/rth-gitlab/tags/pull-misc-20210713:
target/hppa: Clean up DisasCond
target/hppa: Use tcg_constant_*
target/openrisc: Use dc->zero in gen_add, gen_addc
target/openrisc: Cache constant 0 in DisasContext
target/openrisc: Use tcg_constant_tl for dc->R0
target/openrisc: Use tcg_constant_*
target/alpha: Use tcg_constant_* elsewhere
target/alpha: Use tcg_constant_i64 for zero and lit
target/alpha: Use dest_sink for HW_RET temporary
target/alpha: Store set into rx flag
target/i386: Correct implementation for FCS, FIP, FDS and FDP
target/i386: Split out do_fninit
target/i386: Trivial code motion and code style fix
target/i386: Tidy hw_breakpoint_remove
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The a0_is_n flag is redundant with comparing a0 to cpu_psw_n.
The a1_is_0 flag can be removed by initializing a1 to $0,
which also means that cond_prep can be removed entirely.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Replace uses of tcg_const_* with the allocate and free close together.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We still need the t0 temporary for computing overflow,
but we do not need to initialize it to zero first.
Reviewed-by: Stafford Horne <shorne@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We are virtually certain to have fetched constant 0 once, at the
beginning of the TB, so we might as well use it elsewhere.
Reviewed-by: Stafford Horne <shorne@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The temp allocated for tcg_const_tl is auto-freed at branches,
but pure constants are not. So we can remove the extra hoop
jumping in trans_l_swa.
Reviewed-by: Stafford Horne <shorne@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Replace uses of tcg_const_* allocate and free close together
with tcg_constant_*.
Reviewed-by: Stafford Horne <shorne@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Replace the remaining uses of tcg_const_*. These uses are
all local, with the allocate and free close together.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
These constant temps do not need to be freed, and
therefore need less bookkeeping from tcg producers.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This temp is automatically freed, just like ctx->lit.
But we're about to remove ctx->lit, so use sink instead.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
A paste-o meant that we wrote back the existing value
of the RX flag rather than changing it to TMP.
Use tcg_constant_i64 while we're at it.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Update FCS:FIP and FDS:FDP according to the Intel Manual Vol.1 8.1.8.
Note that CPUID.(EAX=07H,ECX=0H):EBX[bit 13] is not implemented by
design in this patch and will be added along with TCG features flag
in a separate patch later.
Signed-off-by: Ziqiao Kong <ziqiaokong@gmail.com>
Message-Id: <20210530150112.74411-2-ziqiaokong@gmail.com>
[rth: Push FDS/FDP handling down into mod != 3 case; free last_addr.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Do not call helper_fninit directly from helper_xrstor.
Do call the new helper from do_fsave.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
A new pair of braces has to be added to declare variables in the case block.
The code style is also fixed to match translate.c itself during the
code motion.
Signed-off-by: Ziqiao Kong <ziqiaokong@gmail.com>
Message-Id: <20210530150112.74411-1-ziqiaokong@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Since cpu_breakpoint and cpu_watchpoint are in a union,
the code should access only one of them.
Signed-off-by: Dmitry Voronetskiy <davoronetskiy@gmail.com>
Message-Id: <20210613180838.21349-1-davoronetskiy@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
An AMD server typically has CPUID level 0x10 (tested on Rome/Milan); it
should not be changed to 0x1f in the multi-die case.
* to maintain compatibility with older machine types, only implement
this change when the CPU's "x-vendor-cpuid-only" property is false
Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: zhenwei pi <pizhenwei@bytedance.com>
Fixes: a94e142899 (target/i386: Add CPUID.1F generation support for multi-dies PCMachine)
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
Message-Id: <20210708170641.49410-1-michael.roth@amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Currently all built-in CPUs report cache information via CPUID leaves 2
and 4, but these have never been defined for AMD. In the case of
SEV-SNP this can cause issues with CPUID enforcement. Address this by
allowing CPU types to suppress these via a new "x-vendor-cpuid-only"
CPU property, which is true by default, but switched off for older
machine types to maintain compatibility.
Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: zhenwei pi <pizhenwei@bytedance.com>
Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
Message-Id: <20210708003623.18665-1-michael.roth@amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
When Hyper-V SynIC is enabled, we may need to allow Windows guests to make
hypercalls (POST_MESSAGES/SIGNAL_EVENTS). No issue is currently observed
because KVM is very permissive, allowing these hypercalls regardless of
guest-visible CPUID bits.
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210608120817.1325125-9-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
According to TLFS, Hyper-V guest is supposed to check
HV_HYPERCALL_AVAILABLE privilege bit before accessing
HV_X64_MSR_GUEST_OS_ID/HV_X64_MSR_HYPERCALL MSRs but at least some
Windows versions ignore that. As KVM is very permissive and allows
accessing these MSRs unconditionally, no issue is observed. We may,
however, want to tighten the checks eventually. Conforming to the
spec is probably also a good idea.
Enable HV_HYPERCALL_AVAILABLE bit unconditionally.
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210608120817.1325125-8-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
hv_cpuid_check_and_set() does too much:
- Checks if the feature is supported by KVM;
- Checks if all dependencies are enabled;
- Sets the feature bit in cpu->hyperv_features for 'passthrough' mode.
To reduce the complexity, move all the logic except for dependencies
check out of it. Also, in 'passthrough' mode we don't really need to
check dependencies because KVM is supposed to provide a consistent
set anyway.
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210608120817.1325125-7-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
To make Hyper-V features appear in e.g. QMP query-cpu-model-expansion we
need to expand and set the corresponding CPUID leaves early. Modify
x86_cpu_get_supported_feature_word() to call the newly introduced Hyper-V
specific kvm_hv_get_supported_cpuid() instead of
kvm_arch_get_supported_cpuid(). We can't use kvm_arch_get_supported_cpuid()
as Hyper-V specific CPUID leaves intersect with KVM's.
Note, early expansion will only happen when KVM supports system wide
KVM_GET_SUPPORTED_HV_CPUID ioctl (KVM_CAP_SYS_HYPERV_CPUID).
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210608120817.1325125-6-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Currently, the only eVMCS version, supported by KVM (and described in TLFS)
is '1'. When Enlightened VMCS feature is enabled, QEMU takes the supported
eVMCS version range (from KVM_CAP_HYPERV_ENLIGHTENED_VMCS enablement) and
puts it to guest visible CPUIDs. When (and if) eVMCS ver.2 appears a
problem on migration is expected: it doesn't seem to be possible to migrate
from a host supporting eVMCS ver.2 to a host which only supports eVMCS
ver.1.
Hardcode eVMCS ver.1 as the result of 'hv-evmcs' enablement for now. Newer
eVMCS versions will have to have their own enablement options (e.g.
'hv-evmcs=2').
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20210608120817.1325125-4-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Linking on Haiku OS fails:
/boot/system/develop/tools/bin/../lib/gcc/x86_64-unknown-haiku/8.3.0/../../../../x86_64-unknown-haiku/bin/ld:
error: libqemu-mips-softmmu.fa.p/target_mips_tcg_sysemu_mips-semi.c.o(.rodata) is too large (0xffff405a bytes)
/boot/system/develop/tools/bin/../lib/gcc/x86_64-unknown-haiku/8.3.0/../../../../x86_64-unknown-haiku/bin/ld:
final link failed: memory exhausted
collect2: error: ld returned 1 exit status
This is because the host_to_mips_errno[] uses errno as index,
for example:
static const uint16_t host_to_mips_errno[] = {
    [ENAMETOOLONG] = 91,
    ...
and Haiku defines [*] ENAMETOOLONG as:
12 /* Error baselines */
13 #define B_GENERAL_ERROR_BASE INT_MIN
..
22 #define B_STORAGE_ERROR_BASE (B_GENERAL_ERROR_BASE + 0x6000)
...
106 #define B_NAME_TOO_LONG (B_STORAGE_ERROR_BASE + 4)
...
211 #define ENAMETOOLONG B_TO_POSIX_ERROR(B_NAME_TOO_LONG)
so the array indeed ends up being too big.
Since POSIX errno values can't be used as indexes on Haiku,
rewrite errno_mips() using a switch statement.
[*] https://github.com/haiku/haiku/blob/r1beta3/headers/os/support/Errors.h#L130
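A hedged sketch of the switch-based shape of the rewrite; only the
ENAMETOOLONG value is taken from the table quoted above, the rest is
illustrative:

    #include <errno.h>
    #include <stdint.h>

    static uint16_t errno_mips_sketch(int host_errno)
    {
        switch (host_errno) {
        case ENAMETOOLONG:
            return 91;        /* value from the original table */
        /* ... one case per errno the semihosting code translates ... */
        default:
            return 0;         /* illustrative fallback */
        }
    }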
Reported-by: Richard Zak <richard.j.zak@gmail.com>
Suggested-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210706130723.1178961-1-f4bug@amsat.org>
Introduce the SQ opcode (Store Quadword).
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210214175912.732946-27-f4bug@amsat.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Introduce the LQ opcode (Load Quadword) and remove unreachable code.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210214175912.732946-26-f4bug@amsat.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Introduce the PPACW opcode (Parallel Pack to Word).
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210214175912.732946-22-f4bug@amsat.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Introduce the 'Parallel Compare for Greater Than' opcodes:
- PCGTB (Parallel Compare for Greater Than Byte)
- PCGTH (Parallel Compare for Greater Than Halfword)
- PCGTW (Parallel Compare for Greater Than Word)
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210309145653.743937-15-f4bug@amsat.org>
Introduce the PEXTUW opcode (Parallel Extend Upper from Word).
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210309145653.743937-12-f4bug@amsat.org>
The loop is performing a simple boolean test for the existence
of a BP_CPU breakpoint at EIP. Plus it gets the iteration wrong,
if we happen to have a BP_GDB breakpoint at the same address.
We have a function for this: cpu_breakpoint_test.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20210620062317.1399034-1-richard.henderson@linaro.org>
The errno numbers are very large on Haiku, so the linking currently
fails there with a "final link failed: memory exhausted" error
message. We should not use the errno number as array indexes here,
thus convert the code to a switch-case statement instead. A clever
compiler should be able to optimize this code in a similar way
anyway.
Reported-by: Richard Zak <richard.j.zak@gmail.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210706081822.1316551-1-thuth@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
The non-single-step case of gen_goto_tb may use
tcg_gen_lookup_and_goto_ptr to indirectly chain.
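Roughly, the resulting shape of a target's gen_goto_tb() looks like the sketch
below. It assumes QEMU's TCG translator API (translator_use_goto_tb,
tcg_gen_goto_tb, tcg_gen_exit_tb, tcg_gen_lookup_and_goto_ptr);
gen_update_pc() stands in for however the target writes its PC and is not a
real function here:

    static void gen_goto_tb(DisasContext *ctx, int n, target_ulong dest)
    {
        if (translator_use_goto_tb(&ctx->base, dest)) {
            tcg_gen_goto_tb(n);              /* direct chain to the next TB */
            gen_update_pc(ctx, dest);
            tcg_gen_exit_tb(ctx->base.tb, n);
        } else {
            gen_update_pc(ctx, dest);        /* indirect chain via TB lookup */
            tcg_gen_lookup_and_goto_ptr();
        }
    }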
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Just use translator_use_goto_tb directly at the one call site,
rather than maintaining a local wrapper.
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We have not needed to end a TB for I/O since ba3e792669
("icount: clean up cpu_can_io at the entry to the block").
In use_goto_tb, the check for singlestep_enabled is in the
generic translator_use_goto_tb. In s390x_tr_tb_stop, the
check for singlestep_enabled is in the preceding do_debug test.
Which leaves only FLAG_MASK_PER: fold that test alone into
the two callers of use_exit_tb.
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Just use translator_use_goto_tb directly at the one call site,
rather than maintaining a local wrapper.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Just use translator_use_goto_tb directly at the one call site,
rather than maintaining a local wrapper.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reorder the control statements to allow using the page boundary
check from translator_use_goto_tb().
Reviewed-by: Stafford Horne <shorne@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Just use translator_use_goto_tb directly at the one call site,
rather than maintaining a local wrapper.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Do not emit dead code for the singlestep_enabled case,
after having exited the TB with a debug exception.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Just use translator_use_goto_tb directly at the one call site,
rather than maintaining a local wrapper.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Just use translator_use_goto_tb directly at the one call site,
rather than maintaining a local wrapper.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Just use translator_use_goto_tb directly at the one call site,
rather than maintaining a local wrapper.
Acked-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Just use translator_use_goto_tb directly at the one call site,
rather than maintaining a local wrapper.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The test for singlestepping is done in translator_use_goto_tb,
so we may elide it from cris_tr_tb_stop.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
All of these helpers end with cpu_loop_exit.
Reviewed-by: Michael Rolnik <mrolnik@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Single stepping is not the only reason not to use goto_tb.
If goto_tb is disallowed, and single-stepping is not enabled,
then use tcg_gen_lookup_and_goto_ptr to indirectly chain.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Just use translator_use_goto_tb directly at the one call site,
rather than maintaining a local wrapper.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We have not needed to end a TB for I/O since ba3e792669
("icount: clean up cpu_can_io at the entry to the block"),
and gdbstub singlestep is handled by the generic function.
Drop the unused 'n' argument to use_goto_tb.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Using gen_goto_tb directly misses the single-step check.
Let the branch or debug exception be emitted by arm_tr_tb_stop.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The number of links across (normal) pages using this is low,
and it will shortly violate the contract for breakpoints.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We have not needed to end a TB for I/O since ba3e792669
("icount: clean up cpu_can_io at the entry to the block").
We do not need to use exit_tb for singlestep, which only
means generate one insn per TB.
Which leaves only singlestep_enabled, which means raise a
debug trap after every TB, which does not use exit_tb,
which would leave the function mis-named.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The root trace-events only declares a single TCG event:
$ git grep -w tcg trace-events
trace-events:115:# tcg/tcg-op.c
trace-events:137:vcpu tcg guest_mem_before(TCGv vaddr, uint16_t info) "info=%d", "vaddr=0x%016"PRIx64" info=%d"
and only tcg/tcg-op.c uses it:
$ git grep -l trace_guest_mem_before_tcg
tcg/tcg-op.c
therefore it is pointless to include "trace-tcg.h" in each target
(because it is not used). Remove it.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210629050935.2570721-1-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Add a target-specific Kconfig. We need the definitions in Kconfig so
the minikconf tool can verify they exist. However CONFIG_FOO is only
enabled for target foo via the meson.build rules.
Two architectures, ARM and MIPS, have a particularity. As their
translators have been split, you can potentially build a plain 32-bit
build along with a 64-bit version including the 32-bit subset.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210131111316.232778-6-f4bug@amsat.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210707131744.26027-2-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use raise_exception_ra (without error code) when raising the illegal
opcode operation; raise #GP when setting bits 63:32 of DR6 or DR7.
Move helper_get_dr to sysemu/ since it is a privileged instruction
that is not needed on user-mode emulators.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
DR6[63:32] and DR7[63:32] are reserved and need to be zero.
(AMD64 Architecture Programmer's Manual, V2, 15.5)
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210705081802.18960-3-laramglazier@gmail.com>
[Ignore for 32-bit builds. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The address of the last entry in the MSRPM and
in the IOPM must be smaller than the largest physical address.
(APM2 15.10-15.11)
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210705081802.18960-2-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Nick Hudson <hnick@vmware.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If KVM_CAP_RPT_INVALIDATE KVM capability is enabled, then
- indicate the availability of H_RPT_INVALIDATE hcall to the guest via
ibm,hypertas-functions property.
- Enable the hcall
Both the above are done only if the new sPAPR machine capability
cap-rpt-invalidate is set.
Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
Message-Id: <20210706112440.1449562-3-bharata@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The function ppc_tlb_invalidate_all is no longer compiled in a TCG-less
environment, and the call to that function has been disabled in this
situation.
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Message-Id: <20210708164957.28096-2-lucas.araujo@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Change the assert in ppc_store_sdr1() to allow vhyp to be set on CPUs
without HV bit. This allows using the vhyp interface for firmware
emulation on pegasos2.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Message-Id: <21c7745aabbb68fcc50bb2ffaf16b939ba21261c.1624811233.git.balaton@eik.bme.hu>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
MSR is a 32-bit register in BookE and there is no mtmsrd instruction.
Cc: Christian Zigotzky <chzigotzky@xenosoft.de>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20210706051321.609046-1-npiggin@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Changed hash32 address translation to use the supplied mmu_idx instead
of the one stored in the MSR, for parity purposes (radix64 already
uses that) and for conceptual correctness: all the relevant functions
should always use the supplied mmu_idx, as there are no guarantees
that the mmu_idx stored in the CPU variable will not desync.
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20210706150316.21005-3-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Introduce a header common to all BookS MMUs, which can hold code that is
common to hash32 and book3s-v3 MMUs.
Suggested-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Message-Id: <20210706150316.21005-2-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Changed hash64 address translation to use the supplied mmu_idx instead
of using the one stored in the msr, for parity purposes (other book3s
MMUs already use it).
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210628133610.1143-4-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This commit attempts to fix a technical hiccup first mentioned by Richard
Henderson in
https://lists.nongnu.org/archive/html/qemu-devel/2021-05/msg06247.html
To summarize the hiccup here, when radix-style mmus are translating an
address, they might need to call a second level of translation, with
hypervisor privileges. However, the way it was being done up until
this point meant that the second level translation had the same
privileges as the first level. It could lead to a bug in address
translation when running KVM inside a TCG guest, but this bug was never
experienced by users, so this isn't as much a bug fix as it is a
correctness cleanup.
This patch attempts that cleanup by making radix64_*_xlate functions
receive the mmu_idx, and passing one with the correct permission for the
second level translation.
The mmuidx macros added by this patch are only correct for non-bookE
mmus, because BookE style set the IS and DS bits inverted and there
might be other subtle differences. However, there doesn't seem to be
BookE cpus that have radix-style mmus, so we left a comment there to
document the issue, in case a machine does have that and was missed.
As part of this cleanup, we now need to send the correct mmu_idx
when calling get_phys_page_debug, otherwise we might not be able to see the
memory that the CPU could see.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bruno Larsen (billionai) <bruno.larsen@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20210628133610.1143-2-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This function is used by TCGCPUOps, and is thus TCG specific.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-10-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Create one common dispatch for all of the ppc_*_xlate functions.
Use ppc64_v3_radix to directly dispatch between ppc_radix64_xlate
and ppc_hash64_xlate.
Remove the separate *_handle_mmu_fault and *_get_phys_page_debug
functions, using common code for ppc_cpu_tlb_fill and
ppc_cpu_get_phys_page_debug.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-9-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Mirror the interface of ppc_radix64_xlate (mostly), putting all
of the logic for older mmu translation into a single entry point.
For booke, we need to add mmu_idx to the xlate-style interface.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-8-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Mirror the interface of ppc_radix64_xlate, putting all of
the logic for hash32 translation into a single entry point.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-7-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Mirror the interface of ppc_radix64_xlate, putting all of
the logic for hash64 translation into a single function.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-6-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Instead of returning non-zero for failure, return true for success.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-5-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This removes some incomplete duplication between
ppc_radix64_handle_mmu_fault and ppc_radix64_get_phys_page_debug.
The former was correct wrt SPR_HRMOR and the latter was not.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-4-bruno.larsen@eldorado.org.br>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
These changes were waiting until we didn't need to match
the function type of PowerPCCPUClass.handle_mmu_fault.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-3-bruno.larsen@eldorado.org.br>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Instead, use a switch on env->mmu_model. This avoids some
replicated information in cpu setup.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210621125115.67717-2-bruno.larsen@eldorado.org.br>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This isn't used anymore.
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <20210622140926.677618-3-groug@kaod.org>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
PowerPC CPUs use big endian by default but starting with POWER7,
server grade CPUs use the ILE bit of the LPCR special purpose
register to decide on the endianness to use when handling
interrupts. This gives a clue to QEMU on the endianness the
guest kernel is running, which is needed when generating an
ELF dump of the guest or when delivering an FWNMI machine
check interrupt.
Commit 382d2db62b ("target-ppc: Introduce callback for interrupt
endianness") added a class method to PowerPCCPUClass to modelize
this : default implementation returns a fixed "big endian" value,
while POWER7 and newer do the LPCR_ILE check. This is suboptimal
as it forces to implement the method for every new CPU family, and
it is very unlikely that this will ever be different than what we
have today.
We basically only have three cases to consider:
a) CPU doesn't have an LPCR => big endian
b) CPU has an LPCR but doesn't support the ILE bit => big endian
c) CPU has an LPCR and supports the ILE bit => little or big endian
Instead of class methods, introduce an inline helper that checks the
ILE bit in the LPCR_MASK to decide on the outcome. The new helper
is worded in terms of little endian instead of big endian. This allows
us to drop a ! operator in ppc_cpu_do_fwnmi_machine_check().
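A hedged sketch of what such a helper can look like; the field and macro
names are the ones commonly found in target/ppc, but treat this as an
illustration rather than the exact patch:

    static inline bool ppc_interrupts_little_endian(PowerPCCPU *cpu)
    {
        PowerPCCPUClass *pcc = POWERPC_CPU_GET_CLASS(cpu);

        /* Cases a) and b): no LPCR, or an LPCR without ILE, mean big endian;
         * case c): the guest's LPCR_ILE setting decides. */
        return (pcc->lpcr_mask & LPCR_ILE) &&
               (cpu->env.spr[SPR_LPCR] & LPCR_ILE);
    }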
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <20210622140926.677618-2-groug@kaod.org>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
split the sysemu part of cpu models;
also create a tiny _user.c with just the (at least for now)
empty implementation of apply_cpu_model.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Cho, Yu-Chen <acho@suse.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210707105324.23400-15-acho@suse.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
move kvm files into kvm/
After the reshuffling, update MAINTAINERS accordingly.
Make use of the new directory:
target/s390x/kvm/
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Cho, Yu-Chen <acho@suse.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210707105324.23400-14-acho@suse.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
all function calls are protected by kvm_enabled(),
so we do not need the stubs.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Cho, Yu-Chen <acho@suse.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210707105324.23400-13-acho@suse.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Now that we have moved cpu-dump functionality out of helper.c,
we can make the module sysemu-only.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Cho, Yu-Chen <acho@suse.com>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210707105324.23400-11-acho@suse.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
move everything related to translate, as well as HELPER code, into tcg/
mmu_helper.c stays put for now, as it contains both TCG and KVM code.
After the reshuffling, update MAINTAINERS accordingly.
Make use of the new directory:
target/s390x/tcg/
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Cho, Yu-Chen <acho@suse.com>
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210707105324.23400-8-acho@suse.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The internal.h file is renamed to s390x-internal.h, because of the
risk of collision with other files with the same name.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Cho, Yu-Chen <acho@suse.com>
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210707105324.23400-7-acho@suse.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
now that we protect all calls to the tcg-specific functions
with if (tcg_enabled()), we do not need the TCG stub anymore.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Cho, Yu-Chen <acho@suse.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210707105324.23400-6-acho@suse.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
the lack of target_user_arch makes it hard to fully leverage the
build system in order to separate user code from sysemu code.
Provide it, so that we can avoid the proliferation of #ifdef
in target code.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Cho, Yu-Chen <acho@suse.com>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210707105324.23400-2-acho@suse.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The element size is located in m5, not in m4. As there is no m4, qemu
currently crashes with an assertion failure when trying to look up
that field.
Reproduced and tested via Go, which ends up using VMSL once the
Vector enhancements facility is around for verifying certificates with
elliptic curves.
Reported-by: Jonathan Albrecht <jonathan.albrecht@linux.vnet.ibm.com>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/449
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Fixes: 8c18fa5b3e ("s390x/tcg: Implement VECTOR MULTIPLY SUM LOGICAL")
Message-Id: <20210705090341.58289-1-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The FP-to-integer conversion instructions need to set CC 3 whenever
a "special case" occurs; this is the case whenever the instruction
also signals the IEEE invalid exception. (See e.g. figure 19-18
in the Principles of Operation.)
However, qemu currently will set CC 3 only in the case where the
input was a NaN. This is indeed one of the special cases, but
there are others, most notably the case where the input is out
of range of the target data type.
This patch fixes the problem by switching these instructions to
the "static" CC method and computing the correct result directly
in the helper. (It cannot be re-computed later as the information
about the invalid exception is no longer available.)
This fixes a bug observed when running the wasmtime test suite
under the s390x-linux-user target.
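An illustrative sketch of the idea (cc_from_result() and ret are
hypothetical placeholders for the usual CC computation and the
conversion result):

    /* Decide the CC in the helper, while the softfloat exception
     * flags are still available, and store it as a "static" CC. */
    int flags = get_float_exception_flags(&env->fpu_status);
    env->cc_op = (flags & float_flag_invalid) ? 3 : cc_from_result(ret);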
Signed-off-by: Ulrich Weigand <ulrich.weigand@de.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210630105058.GA29130@oc3748833570.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
This defines 5 new facilities and the new 3931 and 3932 machines.
As before, the name is not yet known, so we use gen16a and gen16b.
The new features are part of the full model.
The default model is still empty (same as z15) and will be added
in a separate patch at a later point in time.
Also add the dependencies of the new facilities, and as a fix for z15
add a dependency from S390_FEAT_VECTOR_PACKED_DECIMAL_ENH to
S390_FEAT_VECTOR_PACKED_DECIMAL.
[merged <20210701084348.26556-1-borntraeger@de.ibm.com>]
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <20210622201923.150205-2-borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Given that TCG is now the only consumer of X86XSaveArea, move the
structure definition and associated offset declarations and checks to a
TCG specific header.
Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-9-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Rather than relying on the X86XSaveArea structure definition,
determine the offset of XSAVE state areas using CPUID leaf 0xd where
possible (KVM and HVF).
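Roughly, the offset and size of state component 'i' come from CPUID
leaf 0xd, sub-leaf i (EAX = size, EBX = offset). A hedged sketch,
where 'esa' stands in for the matching x86_ext_save_areas entry:

    uint32_t eax, ebx, ecx, edx;

    host_cpuid(0xd, i, &eax, &ebx, &ecx, &edx);
    esa->size = eax;     /* size in bytes of the component */
    esa->offset = ebx;   /* offset from the start of the XSAVE area */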
Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-8-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Rather than relying on the X86XSaveArea structure definition directly,
the routines that manipulate the XSAVE state area should observe the
offsets declared in the x86_ext_save_areas array.
Currently the offsets declared in the array are derived from the
structure definition, resulting in no functional change.
Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-7-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Provide visibility of the x86_ext_save_areas array and associated type
outside of cpu.c.
Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-6-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In preparation for removing assumptions about XSAVE area offsets, pass
a buffer pointer and buffer length to the XSAVE helper functions.
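The shape of the change is roughly the following (prototypes shown
for illustration only):

    /* Callers pass an explicit buffer and length instead of an
     * X86XSaveArea pointer. */
    void x86_cpu_xsave_all_areas(X86CPU *cpu, void *buf, uint32_t buflen);
    void x86_cpu_xrstor_all_areas(X86CPU *cpu, const void *buf,
                                  uint32_t buflen);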
Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-5-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Replace the hard-coded size of offsets or structure elements with
defined constants or sizeof().
Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-4-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Rather than having similar but different checks in cpu.h and kvm.c,
move them all to cpu.h.
Message-Id: <20210705104632.2902400-3-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Declare and use manifest constants for the XSAVE state component
offsets.
Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-2-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge remote-tracking branch 'remotes/philmd/tags/mips-20210702' into staging
MIPS patches queue
- Extract nanoMIPS, microMIPS, Code Compaction from translate.c
- Allow PCI config accesses smaller than 32-bit on Bonito64 device
- Fix migration of g364fb device on Jazz Magnum
- Fix dp8393x PROM checksum on Jazz Magnum and Quadra 800
- Map the UART devices unconditionally on Jazz Magnum
- Add functional test booting Linux on the Fuloong 2E
# gpg: Signature made Fri 02 Jul 2021 16:36:19 BST
# gpg: using RSA key FAABE75E12917221DCFD6BB2E3E32C2CDEADC0DE
# gpg: Good signature from "Philippe Mathieu-Daudé (F4BUG) <f4bug@amsat.org>" [full]
# Primary key fingerprint: FAAB E75E 1291 7221 DCFD 6BB2 E3E3 2C2C DEAD C0DE
* remotes/philmd/tags/mips-20210702:
hw/mips/jazz: Map the UART devices unconditionally
hw/mips/jazz: specify correct endian for dp8393x device
hw/m68k/q800: fix PROM checksum and MAC address storage
qemu/bitops.h: add bitrev8 implementation
dp8393x: remove onboard PROM containing MAC address and checksum
hw/m68k/q800: move PROM and checksum calculation from dp8393x device to board
hw/mips/jazz: move PROM and checksum calculation from dp8393x device to board
dp8393x: convert to trace-events
dp8393x: checkpatch fixes
g364fb: add VMStateDescription for G364SysBusState
g364fb: use RAM memory region for framebuffer
tests/acceptance: Test Linux on the Fuloong 2E machine
hw/pci-host/bonito: Allow PCI config accesses smaller than 32-bit
hw/pci-host/bonito: Trace PCI config accesses smaller than 32-bit
target/mips: Extract nanoMIPS ISA translation routines
target/mips: Extract the microMIPS ISA translation routines
target/mips: Extract Code Compaction ASE translation routines
target/mips: Add declarations for generic TCG helpers
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Implement the MVE shifts by register, which perform
shifts on a single general-purpose register.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-19-peter.maydell@linaro.org
Implement the MVE shifts by immediate, which perform shifts
on a single general-purpose register.
These patterns overlap with the long-shift-by-immediates,
so we have to rearrange the grouping a little here.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-18-peter.maydell@linaro.org
Implement the MVE long shifts by register, which perform shifts on a
pair of general-purpose registers treated as a 64-bit quantity, with
the shift count in another general-purpose register, which might be
either positive or negative.
Like the long-shifts-by-immediate, these encodings sit in the space
that was previously the UNPREDICTABLE MOVS/ORRS with Rm==13,15.
Because LSLL_rr and ASRL_rr overlap with both MOV_rxri/ORR_rrri and
also with CSEL (as one of the previously-UNPREDICTABLE Rm==13 cases),
we have to move the CSEL pattern into the same decodetree group.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-17-peter.maydell@linaro.org
The MVE extension to v8.1M includes some new shift instructions which
sit entirely within the non-coprocessor part of the encoding space
and which operate only on general-purpose registers. They take up
the space which was previously UNPREDICTABLE MOVS and ORRS encodings
with Rm == 13 or 15.
Implement the long shifts by immediate, which perform shifts on a
pair of general-purpose registers treated as a 64-bit quantity, with
an immediate shift count between 1 and 32.
Awkwardly, because the MOVS and ORRS trans functions do not UNDEF for
the Rm==13,15 case, we need to explicitly emit code to UNDEF for the
cases where v8.1M now requires that. (Trying to change MOVS and ORRS
is too difficult, because the functions that generate the code are
shared between a dozen different kinds of arithmetic or logical
instruction for all A32, T16 and T32 encodings, and for some insns
and some encodings Rm==13,15 are valid.)
We make the helper functions we need for UQSHLL and SQSHLL take
a 32-bit value which the helper casts to int8_t because we'll need
these helpers also for the shift-by-register insns, where the shift
count might be < 0 or > 32.
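A sketch of what such a helper might look like, assuming the existing
do_uqrshl_d() saturating-shift helper:

    /* The helper takes a 32-bit shift and narrows it itself, so the
     * later shift-by-register insns (count < 0 or > 32) can reuse it. */
    uint64_t HELPER(mve_uqshll)(CPUARMState *env, uint64_t n, uint32_t shift)
    {
        return do_uqrshl_d(n, (int8_t)shift, false, &env->QF);
    }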
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-16-peter.maydell@linaro.org
Implement the MVE VADDLV insn; this is similar to VADDV, except
that it accumulates 32-bit elements into a 64-bit accumulator
stored in a pair of general-purpose registers.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-15-peter.maydell@linaro.org
Implement the MVE VSHLC insn, which performs a shift left of the
entire vector with carry in bits provided from a general purpose
register and carry out bits written back to that register.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-14-peter.maydell@linaro.org
Implement the MVE saturating shift-right-and-narrow insns
VQSHRN, VQSHRUN, VQRSHRN and VQRSHRUN.
do_srshr() is borrowed from sve_helper.c.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-13-peter.maydell@linaro.org
Implement the MVE shift-right-and-narrow insn VSHRN and VRSHRN.
do_urshr() is borrowed from sve_helper.c.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-12-peter.maydell@linaro.org
Implement the MVE VSRI and VSLI insns, which perform a
shift-and-insert operation.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-11-peter.maydell@linaro.org
Implement the MVE VHLL (vector shift left long) insn. This has two
encodings: the T1 encoding is the usual shift-by-immediate format,
and the T2 encoding is a special case where the shift count is always
equal to the element size.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-10-peter.maydell@linaro.org
Implement the MVE vector shift right by immediate insns VSHRI and
VRSHRI. As with Neon, we implement these by using helper functions
which perform left shifts but allow negative shift counts to indicate
right shifts.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-9-peter.maydell@linaro.org
Implement the MVE shift-vector-left-by-immediate insns VSHL, VQSHL
and VQSHLU.
The size-and-immediate encoding here is the same as Neon, and we
handle it the same way neon-dp.decode does.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-8-peter.maydell@linaro.org
Implement the MVE logical-immediate insns (VMOV, VMVN,
VORR and VBIC). These have essentially the same encoding
as their Neon equivalents, and we implement the decode
in the same way.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-7-peter.maydell@linaro.org
Use dup_const() instead of bitfield_replicate() in
disas_simd_mod_imm().
(We can't replace the other use of bitfield_replicate() in this file,
in logic_imm_decode_wmask(), because that location needs to handle 2
and 4 bit elements, which dup_const() cannot.)
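For reference, dup_const() takes an MO_* element size and replicates
the value across 64 bits, e.g.:

    /* Replicate an 8-bit immediate across all byte lanes,
     * e.g. 0xab -> 0xababababababababULL. */
    uint64_t imm64 = dup_const(MO_8, imm8);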
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-6-peter.maydell@linaro.org
The A64 AdvSIMD modified-immediate grouping uses almost the same
constant encoding that A32 Neon does; reuse asimd_imm_const() (to
which we add the AArch64-specific case for cmode 15 op 1) instead of
reimplementing it all.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-5-peter.maydell@linaro.org
The function asimd_imm_const() in translate-neon.c is an
implementation of the pseudocode AdvSIMDExpandImm(), which we will
also want for MVE. Move the implementation to translate.c, with a
prototype in translate.h.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-4-peter.maydell@linaro.org
The initial implementation of the MVE VRMLALDAVH and VRMLSLDAVH
insns had some bugs:
* the 32x32 multiply of elements was being done as 32x32->32,
not 32x32->64
* we were incorrectly maintaining the accumulator in its full
72-bit form across all 4 beats of the insn; in the pseudocode
it is squashed back into the 64 bits of the RdaHi:RdaLo
registers after each beat
In particular, fixing the second of these allows us to recast
the implementation to avoid 128-bit arithmetic entirely.
Since the element size here is always 4, we can also drop the
parameterization of ESIZE to make the code a little more readable.
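Schematically, ignoring the rounding and the high-half extraction the
real insn also performs, each beat after the fix reduces to:

    /* Widen before multiplying (32x32->64) and keep only a 64-bit
     * accumulator between beats (illustrative sketch only). */
    int64_t mul = (int64_t)(int32_t)n[i] * (int32_t)m[i];
    rda += mul;   /* rda models the 64-bit RdaHi:RdaLo value */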
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-3-peter.maydell@linaro.org
In do_ldst(), the calculation of the offset needs to be based on the
size of the memory access, not the size of the elements in the
vector. This meant we were getting it wrong for the widening and
narrowing variants of the various VLDR and VSTR insns.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210628135835.6690-2-peter.maydell@linaro.org
If the CPU is running in default NaN mode (FPCR.DN == 1) and we execute
FRSQRTE, FRECPE, or FRECPX with a signaling NaN, parts_silence_nan_frac() will
assert due to fpst->default_nan_mode being set.
To avoid this, we check to see what NaN mode we're running in before we call
floatxx_silence_nan().
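The guard is essentially the following (shown for the single-precision
case, using the existing softfloat helpers):

    /* Only silence a signalling NaN when not in default-NaN mode;
     * otherwise the default NaN is returned anyway. */
    if (float32_is_signaling_nan(f, fpst) && !fpst->default_nan_mode) {
        f = float32_silence_nan(f, fpst);
    }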
Signed-off-by: Joe Komlodi <joe.komlodi@xilinx.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1624662174-175828-2-git-send-email-joe.komlodi@xilinx.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Extract 4900 lines from the huge translate.c to a new file,
'nanomips_translate.c.inc'. As there are too many inter-dependencies,
we don't compile it as a separate object but keep including it in the
big translate.o. This improves code maintainability.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201120210844.2625602-13-f4bug@amsat.org>
Extract 3200+ lines from the huge translate.c to a new file,
'micromips_translate.c.inc'. As there are too many inter-dependencies,
we don't compile it as a separate object but keep including it in the
big translate.o. This improves code maintainability.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201120210844.2625602-12-f4bug@amsat.org>
Extract 1100+ lines from the huge translate.c to a new file,
'mips16e_translate.c.inc'. As there are too many inter-dependencies,
we don't compile it as a separate object but keep including it in the
big translate.o. This improves code maintainability.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201120210844.2625602-10-f4bug@amsat.org>
We want to extract the microMIPS ISA and Code Compaction ASE to
new compilation units.
We will first extract this code as included source files (.c.inc),
then make them new compilation units afterward.
The following methods are going to be used externally:
micromips_translate.c.inc:1778: gen_ldxs(ctx, rs, rt, rd);
micromips_translate.c.inc:1806: gen_align(ctx, 32, rd, rs, ...
micromips_translate.c.inc:2859: gen_addiupc(ctx, reg, offset, ...
mips16e_translate.c.inc:444: gen_addiupc(ctx, ry, offset, ...
To avoid too much code churn, it is simpler to declare these
prototypes in "translate.h" now.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210617174907.2904067-2-f4bug@amsat.org>
There were two bugs here: (1) the required endianness was
not present in the MemOp, and (2) we were not providing a
zero-extended input to the bswap as semantics required.
The best fix is to fold the bswap into the memory operation,
producing the desired result directly.
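A hedged sketch of the idea in TCG terms (operand names, sizes and the
exact MemOp depend on the insn):

    /* Request the swapped endianness in the MemOp itself instead of
     * loading in target order and byte-swapping afterwards. */
    tcg_gen_qemu_ld_i32(dest, addr, mem_idx, MO_LEUL); /* was MO_TEUL + bswap */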
Acked-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Remove TCG_BSWAP_IZ and the preceding zero-extension.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Use a break instead of an ifdefed else.
There's no need to move the values through s->T0.
Remove TCG_BSWAP_IZ and the preceding zero-extension.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The new bswap flags can implement the semantics exactly.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We can eliminate the requirement for a zero-extended output,
because the following store will ignore any garbage high bits.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
For the sf version, we are performing two 32-bit bswaps
in either half of the register. This is equivalent to
performing one 64-bit bswap followed by a rotate.
For the non-sf version, we can remove TCG_BSWAP_IZ
and the preceding zero-extension.
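In TCG terms the sf case becomes roughly (register names illustrative):

    /* One 64-bit byte swap, then rotate by 32 so each 32-bit half
     * ends up individually byte-reversed. */
    tcg_gen_bswap64_i64(tcg_rd, tcg_rn);
    tcg_gen_rotri_i64(tcg_rd, tcg_rd, 32);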
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Implement the new semantics in the fallback expansion.
Change all callers to supply the flags that keep the
semantics unchanged locally.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We always know the exact value of X, that's all that matters.
This avoids splitting the TB e.g. between "ax" and "addq".
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Ever since 2a44f7f173, flagx_known is always true.
Fold away all of the tests against the flag.
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Use movcond instead of brcond to set env_pc.
Discard the btarget and btaken variables to improve
register allocation and avoid unnecessary writeback.
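Schematically (operand names follow the commit text and are only
illustrative):

    /* Pick the next PC without branching: if the branch was taken,
     * env_pc = btarget, otherwise the fall-through address. */
    tcg_gen_movcond_tl(TCG_COND_NE, env_pc, btaken, tcg_constant_tl(0),
                       btarget, next_pc);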
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We can use this in gen_goto_tb and for DISAS_JUMP
to indirectly chain to the next TB.
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Move delayed branch handling to tb_stop, where we can re-use other
end-of-tb code, e.g. the evaluation of flags. Honor single stepping.
Validate that we aren't losing state by overwriting is_jmp.
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Move this pc update into tb_stop.
We will be able to re-use this code shortly.
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
These insns set DISAS_UPDATE without cpustate_changed,
which isn't quite right.
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We really do this already by including them in the same test.
This just hoists the expression up a bit.
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Do not skip the page check for user-only -- mmap/mprotect can
still change page mappings. Only check dc->base.pc_first, not
dc->ppc -- the start page is the only one that's relevant.
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
After we've raised the exception, we have left the TB.
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The only semantic of DISAS_TB_JUMP is that we've done goto_tb,
which is the same as DISAS_NORETURN -- we've exited the tb.
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This value is unused.
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Migrate the is_jmp, tb and singlestep_enabled fields
from DisasContext into the base.
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Michael Rolnik <mrolnik@gmail.com>
Reviewed-by: Michael Rolnik <mrolnik@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Prepare for receiving it as a pointer input.
Tested-by: Michael Rolnik <mrolnik@gmail.com>
Reviewed-by: Michael Rolnik <mrolnik@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Migrate the bstate, tb and singlestep_enabled fields
from DisasContext into the base.
Tested-by: Michael Rolnik <mrolnik@gmail.com>
Reviewed-by: Michael Rolnik <mrolnik@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We have pre-computed the next instruction address into
dc->base.pc_next, so we might as well use it.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Move handle_instruction into nios2_tr_translate_insn
as the only caller.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Direct assignments to env during translation do not work.
As it happens, the only way we can get here is if env->pc
is already set to dc->pc. We will trap on the first insn
we execute anywhere on the page.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Migrate the is_jmp, tb and singlestep_enabled fields from
DisasContext into the base. Use pc_first instead of tb->pc.
Increment pc_next prior to decode, leaving the address of
the current insn in dc->pc.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We do not need to copy this into DisasContext.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We do not need to copy this into DisasContext.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The only semantic of DISAS_TB_JUMP is that we've done goto_tb,
which is the same as DISAS_NORETURN -- we've exited the tb.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <1622589584-22571-5-git-send-email-tsimpson@quicinc.com>
Previously the store-conditional code was writing to hex_pred[prednum].
Then, the fGEN_TCG override was reading from there to the destination
variable so that the packet commit logic would handle it properly.
The correct implementation is to write to the destination variable
and don't have the extra read in the override.
Remove the unused arguments from gen_store_conditional[48]
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <1622589584-22571-4-git-send-email-tsimpson@quicinc.com>
Y4_l2fetch == l2fetch(Rs32, Rt32)
Y5_l2fetch == l2fetch(Rs32, Rtt32)
The semantics for these instructions are present, but the encodings
are missing.
Note that these are treated as nops in qemu, so we add overrides.
Test case added to tests/tcg/hexagon/misc.c
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <1622589584-22571-3-git-send-email-tsimpson@quicinc.com>
Change fLSBNEW/fLSBNEW0/fLSBNEW1 from copy to "x & 1"
Remove gen_logical_not function
Clean up fLSBNEWNOT to use andi-1 followed by xori-1
Test cases added to tests/tcg/hexagon/misc.c
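For example, the fLSBNEWNOT sequence becomes (a sketch in TCG ops):

    /* Compute (x & 1) ^ 1 directly instead of going through a
     * separate logical-not helper. */
    tcg_gen_andi_tl(dest, src, 1);
    tcg_gen_xori_tl(dest, dest, 1);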
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <1622589584-22571-2-git-send-email-tsimpson@quicinc.com>
Merge remote-tracking branch 'remotes/rth-gitlab/tags/pull-axp-20210628' into staging
Fixes for NetBSD/alpha:
- Provide a proper PCI-ISA bridge
- Set PCI device IDs correctly
- Pass -nographic flag to PALcode
- Update PALcode to set up the Console Terminal Block
- Honor the Floating-point ENable bit during translate.
# gpg: Signature made Mon 28 Jun 2021 15:34:08 BST
# gpg: using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg: issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F
* remotes/rth-gitlab/tags/pull-axp-20210628:
target/alpha: Honor the FEN bit
pc-bios: Update the palcode-clipper image
hw/alpha: Provide a PCI-ISA bridge device node
hw/alpha: Provide console information to the PALcode at start-up
hw/alpha: Set minimum PCI device ID to 1 to match Clipper IRQ mappings
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This bit is used by NetBSD for lazy fpu migration.
Tested-by: Jason Thorpe <thorpej@me.com>
Reported-by: Jason Thorpe <thorpej@me.com>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/438
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Merge remote-tracking branch 'remotes/alistair/tags/pull-riscv-to-apply-20210624-2' into staging
Third RISC-V PR for 6.1 release
- Fix MISA in the DisasContext
- Fix GDB CSR XML generation
- QOMify the SiFive UART
- Add support for the OpenTitan timer
# gpg: Signature made Thu 24 Jun 2021 13:00:26 BST
# gpg: using RSA key F6C4AC46D4934868D3B8CE8F21E10D29DF977054
# gpg: Good signature from "Alistair Francis <alistair@alistair23.me>" [full]
# Primary key fingerprint: F6C4 AC46 D493 4868 D3B8 CE8F 21E1 0D29 DF97 7054
* remotes/alistair/tags/pull-riscv-to-apply-20210624-2:
hw/riscv: OpenTitan: Connect the mtime and mtimecmp timer
hw/timer: Initial commit of Ibex Timer
hw/char/ibex_uart: Make the register layout private
hw/char: QOMify sifive_uart
hw/char: Consistent function names for sifive_uart
target/riscv: gdbstub: Fix dynamic CSR XML generation
target/riscv: Use target_ulong for the DisasContext misa
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Linux 5.14 will add support for nested TSC scaling. Add the
corresponding feature in QEMU; to keep support for existing kernels,
do not add it to any processor yet.
The handling of the VMCS enumeration MSR is ugly; once we have more than
one case, we may want to add a table to check VMX features against.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We don't need to maintain 2 sets of decodetree definitions.
Merge them into a single file.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210617174636.2902654-3-f4bug@amsat.org>
Only trans_MSA() calls gen_msa(), inline it to simplify.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210617174636.2902654-2-f4bug@amsat.org>
Since all entries are no more than 3/4/6 bytes (including the nul
terminator), we can save space and PIE runtime relocations by
declaring regnames[] as an array of 3/4/6-byte const char entries
instead of an array of pointers.
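For instance (array sizes illustrative):

    /* Fixed-width string entries live entirely in .rodata and need
     * no per-entry pointer, hence no PIE relocation. */
    static const char regnames[32][4] = {
        "r0", "r1", "r2", "r3", /* ... */
    };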
Inspired-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210617174323.2900831-10-f4bug@amsat.org>
Keep host_to_mips_errno[] in .rodata by marking the array const.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210617174323.2900831-9-f4bug@amsat.org>
Per the "MIPS® Architecture Extension: nanoMIPS32 DSP Technical
Reference Manual — Revision 0.04" p. 88 "BPOSGE32C", offset argument (imm)
should be left-shifted first.
This change was tested against test_dsp_r1_bposge32.c DSP test.
Reported-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>
Signed-off-by: Filip Vidojevic <filip.vidojevic@syrmia.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <VI1PR0302MB34869449EE56F226FC3C21129C309@VI1PR0302MB3486.eurprd03.prod.outlook.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
These switch cases for the microMIPS BPOSGE32 / BPOSGE64 opcodes were
added in commit 3c824109da ("target-mips: microMIPS ASE support").
More than 11 years later it is safe to assume they won't be
implemented any time soon. The cases fall back to the default, which
generates a RESERVED INSTRUCTION, so it is safe to remove them.
Functionally speaking, the patch is a no-op.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210617174323.2900831-8-f4bug@amsat.org>
These placeholder comments for the SmartMIPS and MDMX extensions were
added in commit 3c824109da ("target-mips: microMIPS ASE support").
More than 11 years later it is safe to assume support won't be added
any time soon, so remove these unhelpful comments.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210617174323.2900831-7-f4bug@amsat.org>
Commit 043715d1e0 ("target/mips: Update ITU to utilize SAARI
and SAAR CP0 registers") declared itc_reconfigure() in public
namespace, while it is restricted to system emulation.
Similarly commit 5679479b9a ("target/mips: Move CP0 helpers
to sysemu/cp0.c") restricted cpu_mips_soft_irq() definition to
system emulation, but forgot to restrict its declaration.
To avoid polluting user-mode emulation with these declarations,
restrict them to sysemu. Also restrict the sysemu ITU/ITC/IRQ
fields from CPUMIPSState.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210617174323.2900831-6-f4bug@amsat.org>
We moved various TCG source files in commit a2b0a27d33
("target/mips: Move TCG source files under tcg/ sub directory")
but forgot to move the header declaring their prototypes.
Do it now, since all it declares is TCG specific.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210617174323.2900831-5-f4bug@amsat.org>
Commit a2b0a27d33 ("target/mips: Move TCG source files under
tcg/ sub directory") forgot to move the trace-event file.
As it only contains TCG events, move it for consistency.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210617174323.2900831-4-f4bug@amsat.org>
On real hardware an invalid instruction doesn't halt the world,
but usually triggers a RESERVED INSTRUCTION exception.
TCG guest code shouldn't abort QEMU anyway.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210617174323.2900831-2-f4bug@amsat.org>
Per the "MIPS® DSP Module for MIPS64 Architecture" manual, rev. 3.02,
Table 5.3 "SPECIAL3 Encoding of Function Field for DSP Module":
If the Module/ASE is not implemented, executing such an instruction
must cause a Reserved Instruction Exception.
The DINSV instruction lists the following exceptions:
- Reserved Instruction
- DSP Disabled
If the MIPS core doesn't support the DSP module, or the DSP is
disabled, do not handle the '$rt = $0' case as a no-op but raise
the proper exception instead.
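A sketch of the intended decode path, where check_dsp() stands in for
whatever guard raises the DSP Disabled / Reserved Instruction
exception:

    /* Validate DSP availability before the $rt == $0 fast path, so
     * the proper exception is raised when DSP is absent or disabled. */
    check_dsp(ctx);
    if (rt == 0) {
        return;   /* a true no-op only once DSP is known usable */
    }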
Cc: Jia Liu <proljc@gmail.com>
Fixes: 1cb6686cf9 ("target-mips: Add ASE DSP bit/manipulation instructions")
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210529165443.1114402-1-f4bug@amsat.org>
Fix multiple TCG temporary leaks in gen_pool32a5_nanomips_insn().
Fixes: 3285a3e444 ("target/mips: Add emulation of DSP ASE for nanoMIPS - part 1")
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210617174323.2900831-3-f4bug@amsat.org>
Fix a pair of TCG temporary leaks when translating the nanoMIPS SHILO
opcode.
Fixes: 3285a3e444 ("target/mips: Add emulation of DSP ASE for nanoMIPS")
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210530094538.1275329-1-f4bug@amsat.org>