The IOAPIC has an 'Extended Destination ID' field in its RTE, which maps
to bits 11-4 of the MSI address. Since those address bits fall within a
given 4KiB page, they were historically non-trivial to use on real hardware.
The Intel IOMMU uses the lowest bit to indicate a remappable format MSI,
and then the remaining 7 bits are part of the index.
Where the remappable format bit isn't set, we can actually use the other
seven to allow external (IOAPIC and MSI) interrupts to reach up to 32768
CPUs instead of just the 255 permitted on bare metal.
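As a rough sketch (not the actual QEMU code; the function name is hypothetical and
the field layout follows the description above), a 15-bit destination can be packed
into an MSI address like this:

    #include <stdint.h>

    /* Compose an MSI address carrying a 15-bit destination ID when the
     * remappable-format bit (bit 4) is left clear. */
    static uint32_t msi_addr_for_dest(uint16_t dest_id)      /* dest_id: 0..32767 */
    {
        uint32_t addr = 0xfee00000u;                    /* MSI address window base      */
        addr |= (uint32_t)(dest_id & 0xff) << 12;       /* classic 8-bit destination ID */
        addr |= (uint32_t)((dest_id >> 8) & 0x7f) << 5; /* 7-bit extended destination   */
        /* bit 4 stays 0: not a remappable-format (IOMMU-indexed) message */
        return addr;
    }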
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Message-Id: <78097f9218300e63e751e077a0a5ca029b56ba46.camel@infradead.org>
[Fix UBSAN warning. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
For PDEP and PEXT, the mask is provided in the memory (mod+r/m)
operand, and therefore is loaded in s->T0 by gen_ldst_modrm.
The source is provided in the second source operand (VEX.vvvv)
and therefore is loaded in s->T1. Fix the order in which
they are passed to the helpers.
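For reference, a standalone sketch of what PDEP computes (the usual bit-scatter
definition, not the helper itself) shows why the mask and source operands are not
interchangeable:

    #include <stdint.h>

    /* Scatter the low-order bits of src into the bit positions set in mask. */
    static uint64_t pdep64(uint64_t src, uint64_t mask)
    {
        uint64_t dst = 0;
        for (uint64_t bit = 1; mask != 0; bit <<= 1) {
            if (src & bit) {
                dst |= mask & -mask;   /* deposit into the lowest remaining mask bit */
            }
            mask &= mask - 1;          /* consume that mask bit */
        }
        return dst;
    }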
Reported-by: Lenard Szolnoki <blog@lenardszolnoki.com>
Analyzed-by: Lenard Szolnoki <blog@lenardszolnoki.com>
Fixes: https://bugs.launchpad.net/qemu/+bug/1605123
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
For v8.1M the architecture mandates that CPUs must provide at
least the "minimal RAS implementation" from the Reliability,
Availability and Serviceability extension. This consists of:
* an ESB instruction which is a NOP
-- since it is in the HINT space we need only add a comment
* an RFSR register which will RAZ/WI
* a RAZ/WI AIRCR.IESB bit
-- the code which handles writes to AIRCR does not allow setting
of RES0 bits, so we already treat this as RAZ/WI; add a comment
noting that this is deliberate
* minimal implementation of the RAS register block at 0xe0005000
-- this will be in a subsequent commit (a RAZ/WI handler sketch follows this list)
* setting the ID_PFR0.RAS field to 0b0010
-- we will do this when we add the Cortex-M55 CPU model
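As a loose illustration of what RAZ/WI means for a memory-mapped register block
(modelled on the shape of QEMU's MemoryRegionOps callbacks but with standalone
types; the names are hypothetical):

    #include <stdint.h>

    /* Reads-As-Zero: every offset in the block returns 0. */
    static uint64_t ras_block_read(void *opaque, uint64_t offset, unsigned size)
    {
        return 0;
    }

    /* Writes-Ignored: stores to the block have no effect. */
    static void ras_block_write(void *opaque, uint64_t offset,
                                uint64_t value, unsigned size)
    {
    }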
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-26-peter.maydell@linaro.org
v8.1M introduces a new TRD flag in the CCR register, which enables
checking for stack frame integrity signatures on SG instructions.
Add the code in the SG insn implementation for the new behaviour.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-24-peter.maydell@linaro.org
v8.1M introduces a new TRD flag in the CCR register, which enables
checking for stack frame integrity signatures on SG instructions.
This bit is not banked, and is always RAZ/WI to Non-secure code.
Adjust the code for handling CCR reads and writes to handle this.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-23-peter.maydell@linaro.org
v8.1M adds new encodings of VLLDM and VLSTM (where bit 7 is set).
The only differences are:
* the old T1 encodings UNDEF if the implementation implements 32
Dregs (this is currently architecturally impossible for M-profile)
* the new T2 encodings have the implementation-defined option to
read from memory (discarding the data) or write UNKNOWN values to
memory for the stack slots that would be D16-D31
We choose not to make those accesses, so for us the two
instructions behave identically assuming they don't UNDEF.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-21-peter.maydell@linaro.org
In v8.1M a new exception return check is added which may cause a NOCP
UsageFault (see rule R_XLTP): before we clear s0..s15 and the FPSCR
we must check whether access to CP10 from the Security state of the
returning exception is disabled; if it is then we must take a fault.
(Note that for our implementation CPPWR is always RAZ/WI and so can
never cause CP10 accesses to fail.)
The other v8.1M change to this register-clearing code is that if MVE
is implemented VPR must also be cleared, so add a TODO comment to
that effect.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-20-peter.maydell@linaro.org
In v8.1M, vector table fetch failures don't set HFSR.FORCED (see rule
R_LLRP). (In previous versions of the architecture this was either
required or IMPDEF.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-18-peter.maydell@linaro.org
In v8.0M, on exception entry the registers R0-R3, R12, APSR and EPSR
are zeroed for an exception taken to Non-secure state; for an
exception taken to Secure state they become UNKNOWN, and we chose to
leave them at their previous values.
In v8.1M the behaviour is specified more tightly and these registers
are always zeroed regardless of the security state that the exception
targets (see rule R_KPZV). Implement this.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-17-peter.maydell@linaro.org
The FPDSCR register has a similar layout to the FPSCR. In v8.1M it
gains new fields FZ16 (if half-precision floating point is supported)
and LTPSIZE (always reads as 4). Update the reset value and the code
that handles writes to this register accordingly.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-16-peter.maydell@linaro.org
Implement the new-in-v8.1M FPCXT_S floating point system register.
This is for saving and restoring the secure floating point context,
and it reads and writes bits [27:0] from the FPSCR and the
CONTROL.SFPA bit in bit [31].
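A sketch of that composition (plain C over already-read FPSCR and Secure CONTROL
values; the SFPA bit position is an assumption for illustration):

    #include <stdint.h>

    /* FPCXT_S read value: FPSCR bits [27:0] plus CONTROL.SFPA in bit [31].
     * SFPA is assumed to be bit 3 of the Secure CONTROL value here. */
    static uint32_t fpcxt_s_read(uint32_t fpscr, uint32_t control_s)
    {
        uint32_t sfpa = (control_s >> 3) & 1;
        return (fpscr & 0x0fffffffu) | (sfpa << 31);
    }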
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-14-peter.maydell@linaro.org
Factor out the code which handles M-profile lazy FP state preservation
from full_vfp_access_check(); accesses to the FPCXT_NS register are
a special case which needs to do just this part (corresponding in the
pseudocode to the PreserveFPState() function), and not the full
set of actions matching the pseudocode ExecuteFPCheck() which
normal FP instructions need to do.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20201119215617.29887-13-peter.maydell@linaro.org
We defined a constant name for the mask of NZCV bits in the FPCR/FPSCR
in the previous commit; use it in a couple of places in existing code,
where we're masking out everything except NZCV for the "load to Rt=15
sets CPSR.NZCV" special case.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-12-peter.maydell@linaro.org
v8.1M defines a new FP system register FPSCR_nzcvqc; this behaves
like the existing FPSCR, except that it reads and writes only bits
[31:27] of the FPSCR (the N, Z, C, V and QC flag bits). (Unlike the
FPSCR, the special case for Rt=15 of writing the CPSR.NZCV is not
permitted.)
Implement the register. Since we don't yet implement MVE, we handle
the QC bit as RES0, with TODO comments for where we will need to add
support later.
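For illustration, the read/write behaviour amounts to masking with bits [31:27]
(a standalone sketch, not the translator code):

    #include <stdint.h>

    #define FPSCR_NZCVQC_MASK 0xf8000000u   /* N, Z, C, V, QC: bits [31:27] */

    static uint32_t fpscr_nzcvqc_read(uint32_t fpscr)
    {
        return fpscr & FPSCR_NZCVQC_MASK;   /* all other FPSCR bits read as zero */
    }

    static uint32_t fpscr_nzcvqc_write(uint32_t fpscr, uint32_t val)
    {
        /* only bits [31:27] of the written value take effect */
        return (fpscr & ~FPSCR_NZCVQC_MASK) | (val & FPSCR_NZCVQC_MASK);
    }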
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-11-peter.maydell@linaro.org
Implement the new-in-v8.1M VLDR/VSTR variants which directly
read or write FP system registers to memory.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-10-peter.maydell@linaro.org
The constant-expander functions like negate, plus_2, etc, are
generally useful; move them up in translate.c so we can use them in
the VFP/Neon decoders as well as in the A32/T32/T16 decoders.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-9-peter.maydell@linaro.org
Currently M-profile borrows the A-profile code for VMSR and VMRS
(access to the FP system registers), because all it needs to support
is the FPSCR. In v8.1M things become significantly more complicated
in two ways:
* there are several new FP system registers; some have side effects
on read, and one (FPCXT_NS) needs to avoid the usual
vfp_access_check() and the "only if FPU implemented" check
* all sysregs are now accessible both by VMRS/VMSR (which
reads/writes a general purpose register) and also by VLDR/VSTR
(which reads/writes them directly to memory)
Refactor the structure of how we handle VMSR/VMRS to cope with this:
* keep the M-profile code entirely separate from the A-profile code
* abstract out the "read or write the general purpose register" part
of the code into a loadfn or storefn function pointer, so we can
reuse it for VLDR/VSTR.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-8-peter.maydell@linaro.org
For M-profile before v8.1M, the only valid register for VMSR/VMRS is
the FPSCR. We have a comment that states this, but the actual logic
to forbid accesses for any other register value is missing, so we
would end up with A-profile style behaviour. Add the missing check.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-7-peter.maydell@linaro.org
In v8.1M the new CLRM instruction allows zeroing an arbitrary set of
the general-purpose registers and APSR. Implement this.
The encoding is a subset of the LDMIA T2 encoding, using what would
be Rn=0b1111 (which UNDEFs for LDMIA).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-6-peter.maydell@linaro.org
Implement the v8.1M VSCCLRM insn, which zeros floating point
registers if there is an active floating point context.
This requires support in write_neon_element32() for the MO_32
element size, so add it.
Because we want to use arm_gen_condlabel(), we need to move
the definition of that function up in translate.c so it is
before the #include of translate-vfp.c.inc.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-5-peter.maydell@linaro.org
In arm_cpu_realizefn() we check whether the board code disabled EL3
via the has_el3 CPU object property, which we create if the CPU
starts with the ARM_FEATURE_EL3 feature bit. If it is disabled, then
we turn off ARM_FEATURE_EL3 and also zero out the relevant fields in
the ID_PFR1 and ID_AA64PFR0 registers.
This codepath was incorrectly being taken for M-profile CPUs, which
do not have an EL3 and don't set ARM_FEATURE_EL3, but which may have
the M-profile Security extension and so should have non-zero values
in the ID_PFR1.Security field.
Restrict the handling of the feature flag to A/R-profile cores.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-4-peter.maydell@linaro.org
In v8.1M the PXN architecture extension adds a new PXN bit to the
MPU_RLAR registers, which forbids execution of code in the region
from a privileged mode.
This is another feature which is just in the generic "in v8.1M" set
and has no ID register field indicating its presence.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-3-peter.maydell@linaro.org
Implement the "Load VSX Vector Word & Splat Indexed" opcode, introduced
in Power ISA v3.0.
Buglink: https://bugs.launchpad.net/qemu/+bug/1793608
Signed-off-by: Giuseppe Musacchio <thatlemon@gmail.com>
Message-Id: <d7d533e18c2bc10d924ee3e09907ff2b41fddb3a.1604912739.git.thatlemon@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The semihosting SYS_HEAPINFO call is supposed to return an array
of four guest addresses:
* base of heap memory
* limit of heap memory
* base of stack memory
* limit of stack memory
Some semihosting programs (including those compiled to use the
'newlib' embedded C library) use this call to work out where they
should initialize themselves to.
QEMU's implementation when in system emulation mode is very
simplistic: we say that the heap starts halfway into RAM and
continues to the end of RAM, and the stack starts at the top of RAM
and works down to the bottom. Unfortunately the code assumes that
the base address of RAM is at address 0, so on boards like 'virt'
where this is not true the addresses returned will all be wrong and
the guest application will usually crash.
Conveniently since all Arm boards call arm_load_kernel() we have the
base address of the main RAM block in the arm_boot_info struct which
is accessible via the CPU object. Use this to return sensible values
from SYS_HEAPINFO.
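A sketch of the values SYS_HEAPINFO now reports, given the RAM block's base and
size (the numbers and variable names are illustrative):

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t ram_base = 0x40000000;   /* e.g. the 'virt' board's RAM base */
        uint64_t ram_size = 0x08000000;   /* 128 MiB, purely illustrative     */

        uint64_t heap_base   = ram_base + ram_size / 2; /* heap starts halfway into RAM  */
        uint64_t heap_limit  = ram_base + ram_size;     /* ...and runs to the end of RAM */
        uint64_t stack_base  = ram_base + ram_size;     /* stack starts at the top...    */
        uint64_t stack_limit = ram_base;                /* ...and grows down to the base */

        printf("heap 0x%" PRIx64 "-0x%" PRIx64 " stack 0x%" PRIx64 "-0x%" PRIx64 "\n",
               heap_base, heap_limit, stack_base, stack_limit);
        return 0;
    }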
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20201119092346.32356-1-peter.maydell@linaro.org
Using a target unsigned long would limit the Input Address to an LPAE
page-walk to 32 bits on AArch32 and 64 bits on AArch64. This is okay
for stage 1 or on AArch64, but it is insufficient for stage 2 on
AArch32. In the latter case, the Input Address can have up to 40 bits.
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201118150414.18360-1-remi@remlab.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The Control Program Name Code (CPNC) portion of the diag318
info must be set within the SIE block of each VCPU in the
configuration. The handler will iterate through each VCPU
and dirty the diag318_info reg to be synced with KVM on a
subsequent sync_regs call.
Additionally, the diag318 info resets must be handled via
userspace. As such, QEMU will reset this value for each
VCPU during a modified clear, load normal, and load clear
reset event.
Fixes: fabdada935 ("s390: guest support for diagnose 0x318")
Signed-off-by: Collin Walling <walling@linux.ibm.com>
Message-Id: <20201113221022.257054-1-walling@linux.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Janosch Frank <frankja@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The Requested Privilege Level field is 2 bits, the Table Indicator field
is 1 bit and the Index field is the remaining 13 bits, with TI=0 meaning
GDT and TI=1 meaning LDT.
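Decoding a selector along those lines (standalone sketch):

    #include <stdint.h>

    /* Break a 16-bit segment selector into its fields. */
    static void decode_selector(uint16_t sel, unsigned *rpl, unsigned *ti,
                                unsigned *index)
    {
        *rpl   = sel & 0x3;          /* Requested Privilege Level, bits [1:0] */
        *ti    = (sel >> 2) & 0x1;   /* Table Indicator: 0 = GDT, 1 = LDT     */
        *index = sel >> 3;           /* descriptor index, bits [15:3]         */
    }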
Signed-off-by: Jessica Clarke <jrtc27@jrtc27.com>
Message-Id: <20201116200414.28286-1-jrtc27@jrtc27.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* Update NetBSD VM to version 9.1
* Misc fixes (e.g. categorize some devices)
Merge remote-tracking branch 'remotes/huth-gitlab/tags/pull-request-2020-11-17' into staging
* Fixes for compiling on Haiku, and add Haiku VM for compile-testing
* Update NetBSD VM to version 9.1
* Misc fixes (e.g. categorize some devices)
# gpg: Signature made Tue 17 Nov 2020 09:20:31 GMT
# gpg: using RSA key 27B88847EEE0250118F3EAB92ED9D774FE702DB5
# gpg: issuer "thuth@redhat.com"
# gpg: Good signature from "Thomas Huth <th.huth@gmx.de>" [full]
# gpg: aka "Thomas Huth <thuth@redhat.com>" [full]
# gpg: aka "Thomas Huth <huth@tuxfamily.org>" [full]
# gpg: aka "Thomas Huth <th.huth@posteo.de>" [unknown]
# Primary key fingerprint: 27B8 8847 EEE0 2501 18F3 EAB9 2ED9 D774 FE70 2DB5
* remotes/huth-gitlab/tags/pull-request-2020-11-17:
max111x: put it into the 'misc' category
nand: put it into the 'storage' category
ads7846: put it into the 'input' category
ssd0323: put it into the 'display' category
gitlab-ci: Use $CI_REGISTRY instead of hard-coding registry.gitlab.com
target/microblaze: Fix possible array out of bounds in mmu_write()
tests/vm: update NetBSD to 9.1
tests/vm: Add Haiku test based on their vagrant images
configure: Add a proper check for sys/ioccom.h and use it in tpm_ioctl.h
configure: Do not build pc-bios/optionrom on Haiku
configure: Fix the _BSD_SOURCE define for the Haiku build
qemu/bswap: Remove unused qemu_bswap_len()
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In the mtspr helper we attempt to check for "is the timer disabled"
with "if (env->ttmr & TIMER_NONE)". This is wrong because TIMER_NONE
is zero and the condition is always false (Coverity complains about
the dead code.)
The correct check would be to test whether the TTMR_M field in the
register is equal to TIMER_NONE instead. However, the
cpu_openrisc_timer_update() function checks whether the timer is
enabled (it looks at cpu->env.is_counting, which is set to 0 via
cpu_openrisc_count_stop() when the TTMR_M field is set to
TIMER_NONE), so there's no need to check for "timer disabled" in the
target/openrisc code. Instead, simply remove the dead code.
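Roughly, the situation looks like this (standalone sketch; TIMER_NONE being zero
is stated above, while the TTMR mode-field position is an assumption):

    #include <stdint.h>

    #define TIMER_NONE 0             /* the "no timer" mode value: zero             */
    #define TTMR_M     (3u << 30)    /* assumed position of the TTMR mode field (M) */

    static int old_check(uint32_t ttmr)
    {
        return ttmr & TIMER_NONE;    /* always 0, so the branch is dead code */
    }

    static int correct_check(uint32_t ttmr)
    {
        return (ttmr & TTMR_M) == TIMER_NONE;   /* what a real "disabled?" test would be */
    }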
Fixes: Coverity CID 1005812
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Stafford Horne <shorne@gmail.com>
Message-id: 20201103114654.18540-1-peter.maydell@linaro.org
The size of env->mmu.regs is 3, but the range of 'rn' is [0, 5].
To avoid out-of-bounds accesses, print env->mmu.regs[rn] only when
'rn' is less than 3; otherwise print env->mmu.regs[MMU_R_TLBX].
Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Alex Chen <alex.chen@huawei.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-Id: <5FA10ABA.1080109@huawei.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
g_strdup_printf is used twice to write to the same variable, which
can theoretically cause a leak. In practice, it is extremely
unlikely that a guest is seeing a recursive MCE and has disabled
CR4.MCE between the first and the second error, but we can fix it
and we can also make a slight improvement on the logic: CR4.MCE=0
causes a triple fault even for a non-recursive machine check, so
let's place its test first.
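A sketch of the reordered logic (standalone; the message strings and helper name
are illustrative, not the exact QEMU code):

    #include <glib.h>
    #include <stdbool.h>

    /* Decide which triple-fault message applies; CR4.MCE is tested first because
     * clearing it makes any machine check fatal, recursive or not. */
    static gchar *mce_fatal_message(bool cr4_mce, bool recursive, int cpu_index)
    {
        if (!cr4_mce) {
            return g_strdup_printf("CPU %d: MCE capability is not enabled, "
                                   "raising triple fault", cpu_index);
        }
        if (recursive) {
            return g_strdup_printf("CPU %d: previous MCE still in progress, "
                                   "raising triple fault", cpu_index);
        }
        return NULL;   /* not fatal: deliver the machine check normally */
    }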
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Currently, the nested state format is hardcoded to VMX. This will result
in kvm_put_nested_state() returning an error because the KVM SVM support
checks for the nested state to be KVM_STATE_NESTED_FORMAT_SVM. As a
result, kvm_arch_put_registers() errors out early.
Update the setting of the format based on the virtualization feature:
VMX - KVM_STATE_NESTED_FORMAT_VMX
SVM - KVM_STATE_NESTED_FORMAT_SVM
Also, fix the code formatting while at it.
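In outline (a standalone sketch; the constant values follow the KVM UAPI, and how
'has_svm' is derived, e.g. from the guest's CPUID SVM bit, is left out):

    #include <stdbool.h>

    #define KVM_STATE_NESTED_FORMAT_VMX 0   /* values as in the KVM UAPI */
    #define KVM_STATE_NESTED_FORMAT_SVM 1

    /* Pick the nested-state format to hand to KVM based on whether the vCPU
     * exposes SVM (AMD) or VMX (Intel) virtualization to the guest. */
    static int nested_state_format(bool has_svm)
    {
        return has_svm ? KVM_STATE_NESTED_FORMAT_SVM : KVM_STATE_NESTED_FORMAT_VMX;
    }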
Fixes: b16c0e20c7 ("KVM: add support for AMD nested live migration")
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Message-Id: <fe53d00fe0d884e812960781284cd48ae9206acc.1605546140.git.thomas.lendacky@amd.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
There is no "version 2" of the "Lesser" General Public License.
It is either "GPL version 2.0" or "Lesser GPL version 2.1".
This patch replaces all occurrences of "Lesser GPL version 2" with
"Lesser GPL version 2.1" in the comment sections.
This patch contains all the files, whose maintainer I could not get
from ‘get_maintainer.pl’ script.
Signed-off-by: Chetan Pant <chetan4windows@gmail.com>
Message-Id: <20201023124424.20177-1-chetan4windows@gmail.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
[thuth: Adapted exec.c and qdev-monitor.c to new location]
Signed-off-by: Thomas Huth <thuth@redhat.com>
There is no "version 2" of the "Lesser" General Public License.
It is either "GPL version 2.0" or "Lesser GPL version 2.1".
This patch replaces all occurrences of "Lesser GPL version 2" with
"Lesser GPL version 2.1" in the comment sections.
Signed-off-by: Chetan Pant <chetan4windows@gmail.com>
Message-Id: <20201023124235.20130-1-chetan4windows@gmail.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
There is no "version 2" of the "Lesser" General Public License.
It is either "GPL version 2.0" or "Lesser GPL version 2.1".
This patch replaces all occurrences of "Lesser GPL version 2" with
"Lesser GPL version 2.1" in the comment sections.
Signed-off-by: Chetan Pant <chetan4windows@gmail.com>
Message-Id: <20201023124012.20035-1-chetan4windows@gmail.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
There is no "version 2" of the "Lesser" General Public License.
It is either "GPL version 2.0" or "Lesser GPL version 2.1".
This patch replaces all occurrences of "Lesser GPL version 2" with
"Lesser GPL version 2.1" in the comment sections.
Signed-off-by: Chetan Pant <chetan4windows@gmail.com>
Message-Id: <20201023123353.19796-1-chetan4windows@gmail.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
There is no "version 2" of the "Lesser" General Public License.
It is either "GPL version 2.0" or "Lesser GPL version 2.1".
This patch replaces all occurrences of "Lesser GPL version 2" with
"Lesser GPL version 2.1" in the comment sections.
Signed-off-by: Chetan Pant <chetan4windows@gmail.com>
Message-Id: <20201023122913.19561-1-chetan4windows@gmail.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
There is no "version 2" of the "Lesser" General Public License.
It is either "GPL version 2.0" or "Lesser GPL version 2.1".
This patch replaces all occurrences of "Lesser GPL version 2" with
"Lesser GPL version 2.1" in the comment sections.
Signed-off-by: Chetan Pant <chetan4windows@gmail.com>
Message-Id: <20201023122801.19514-1-chetan4windows@gmail.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
There is no "version 2" of the "Lesser" General Public License.
It is either "GPL version 2.0" or "Lesser GPL version 2.1".
This patch replaces all occurrences of "Lesser GPL version 2" with
"Lesser GPL version 2.1" in the comment sections.
Signed-off-by: Chetan Pant <chetan4windows@gmail.com>
Message-Id: <20201023122157.19321-1-chetan4windows@gmail.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Acked-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Thomas Huth <thuth@redhat.com>
There is no "version 2" of the "Lesser" General Public License.
It is either "GPL version 2.0" or "Lesser GPL version 2.1".
This patch replaces all occurrences of "Lesser GPL version 2" with
"Lesser GPL version 2.1" in the comment sections.
Signed-off-by: Chetan Pant <chetan4windows@gmail.com>
Message-Id: <20201023122051.19274-1-chetan4windows@gmail.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
There is no "version 2" of the "Lesser" General Public License.
It is either "GPL version 2.0" or "Lesser GPL version 2.1".
This patch replaces all occurrences of "Lesser GPL version 2" with
"Lesser GPL version 2.1" in the comment sections.
Signed-off-by: Chetan Pant <chetan4windows@gmail.com>
Message-Id: <20201023121821.19179-1-chetan4windows@gmail.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
There is no "version 2" of the "Lesser" General Public License.
It is either "GPL version 2.0" or "Lesser GPL version 2.1".
This patch replaces all occurrences of "Lesser GPL version 2" with
"Lesser GPL version 2.1" in the comment sections.
Signed-off-by: Chetan Pant <chetan4windows@gmail.com>
Message-Id: <20201023121649.19123-1-chetan4windows@gmail.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
There is no "version 2" of the "Lesser" General Public License.
It is either "GPL version 2.0" or "Lesser GPL version 2.1".
This patch replaces all occurrences of "Lesser GPL version 2" with
"Lesser GPL version 2.1" in the comment sections.
Signed-off-by: Chetan Pant <chetan4windows@gmail.com>
Message-Id: <20201019061126.3102-1-chetan4windows@gmail.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
mon_get_cpu_env() is indirectly called from monitor_parse_arguments(),
where the current monitor isn't set yet. Instead of using monitor_cur_env(),
explicitly pass the Monitor pointer to the function.
Without this fix, an HMP command like "x $pc" crashes like this:
#0 0x0000555555caa01f in mon_get_cpu_sync (mon=0x0, synchronize=true) at ../monitor/misc.c:270
#1 0x0000555555caa141 in mon_get_cpu (mon=0x0) at ../monitor/misc.c:294
#2 0x0000555555caa158 in mon_get_cpu_env () at ../monitor/misc.c:299
#3 0x0000555555b19739 in monitor_get_pc (mon=0x555556ad2de0, md=0x5555565d2d40 <monitor_defs+1152>, val=0) at ../target/i386/monitor.c:607
#4 0x0000555555cadbec in get_monitor_def (mon=0x555556ad2de0, pval=0x7fffffffc208, name=0x7fffffffc220 "pc") at ../monitor/misc.c:1681
#5 0x000055555582ec4f in expr_unary (mon=0x555556ad2de0) at ../monitor/hmp.c:387
#6 0x000055555582edbb in expr_prod (mon=0x555556ad2de0) at ../monitor/hmp.c:421
#7 0x000055555582ee79 in expr_logic (mon=0x555556ad2de0) at ../monitor/hmp.c:455
#8 0x000055555582eefe in expr_sum (mon=0x555556ad2de0) at ../monitor/hmp.c:484
#9 0x000055555582efe8 in get_expr (mon=0x555556ad2de0, pval=0x7fffffffc418, pp=0x7fffffffc408) at ../monitor/hmp.c:511
#10 0x000055555582fcd4 in monitor_parse_arguments (mon=0x555556ad2de0, endp=0x7fffffffc890, cmd=0x555556675b50 <hmp_cmds+7920>) at ../monitor/hmp.c:876
#11 0x00005555558306a8 in handle_hmp_command (mon=0x555556ad2de0, cmdline=0x555556ada452 "$pc") at ../monitor/hmp.c:1087
#12 0x000055555582df14 in monitor_command_cb (opaque=0x555556ad2de0, cmdline=0x555556ada450 "x $pc", readline_opaque=0x0) at ../monitor/hmp.c:47
After this fix, nothing is left in monitor_parse_arguments() that can
indirectly call monitor_cur(), so the fix is complete.
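With the fix, a callback like the monitor_get_pc() frame seen in the backtrace
ends up shaped roughly like this (a sketch that assumes QEMU's monitor and i386
CPU types; not a verbatim excerpt):

    /* The Monitor is handed down explicitly instead of being looked up via
     * the implicit "current monitor". */
    static target_long monitor_get_pc(Monitor *mon, const struct MonitorDef *md,
                                      int val)
    {
        CPUArchState *env = mon_get_cpu_env(mon);
        return env->eip + env->segs[R_CS].base;
    }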
Fixes: ff04108a0e
Reported-by: lichun <lichun@ruijie.com.cn>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20201113114326.97663-4-kwolf@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
All of these callbacks use mon_get_cpu_env(). Pass the Monitor
pointer to them in preparation for adding a monitor argument to
mon_get_cpu_env().
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20201113114326.97663-3-kwolf@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
mon_get_cpu() is indirectly called from monitor_parse_arguments(),
where the current monitor isn't set yet. Instead of using monitor_cur(),
explicitly pass the Monitor pointer to the function.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20201113114326.97663-2-kwolf@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
* Oss-fuzz updates
* Publish the docs built during gitlab CI to the user's gitlab.io page
* Update the OpenBSD VM test to v6.8
* Fix the device-crash-test script to run with the meson build system
* Some small s390x fixes
Merge remote-tracking branch 'remotes/huth-gitlab/tags/pull-request-2020-11-10' into staging
* Some small qtest fixes
* Oss-fuzz updates
* Publish the docs built during gitlab CI to the user's gitlab.io page
* Update the OpenBSD VM test to v6.8
* Fix the device-crash-test script to run with the meson build system
* Some small s390x fixes
# gpg: Signature made Tue 10 Nov 2020 11:05:06 GMT
# gpg: using RSA key 27B88847EEE0250118F3EAB92ED9D774FE702DB5
# gpg: issuer "thuth@redhat.com"
# gpg: Good signature from "Thomas Huth <th.huth@gmx.de>" [full]
# gpg: aka "Thomas Huth <thuth@redhat.com>" [full]
# gpg: aka "Thomas Huth <huth@tuxfamily.org>" [full]
# gpg: aka "Thomas Huth <th.huth@posteo.de>" [unknown]
# Primary key fingerprint: 27B8 8847 EEE0 2501 18F3 EAB9 2ED9 D774 FE70 2DB5
* remotes/huth-gitlab/tags/pull-request-2020-11-10:
s390x: Avoid variable size warning in ipl.h
s390x: fix clang 11 warnings in cpu_models.c
qtest: Update references to parse_escape() in comments
fuzz: add virtio-blk fuzz target
docs: add "page source" link to sphinx documentation
gitlab: force enable docs build in Fedora, Ubuntu, Debian
gitlab: publish the docs built during CI
configure: surface deprecated targets in the help output
fuzz: Make fork_fuzz.ld compatible with LLVM's LLD
scripts/oss-fuzz: give all fuzzers -target names
docs/fuzz: update fuzzing documentation post-meson
docs/fuzz: rST-ify the fuzzing documentation
MAINTAINERS: Add gitlab-pipeline-status script to GitLab CI section
gitlab-ci: Drop generic cache rule
tests/qtest/tpm: Remove redundant check in the tpm_test_swtpm_test()
qtest: Fix bad printf format specifiers
device-crash-test: Check if path is actually an executable file
tests/vm: update openbsd to release 6.8
meson: always include contrib/libvhost-user
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Checks for UNDEF cases should go before the "is VFP enabled?" access
check, except in special cases. Move a stray UNDEF check in the VTBL
trans function up above the access check.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201109145324.2859-1-peter.maydell@linaro.org
The helper function did not get updated when we reorganized
the vector register file for SVE. Since then, the neon dregs
are non-sequential and cannot be simply indexed.
At the same time, make the helper function operate on 64-bit
quantities so that we do not have to call it twice.
Fixes: c39c2b9043
Reported-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
[PMM: use aa32_vfp_dreg() rather than opencoding]
Message-id: 20201105171126.88014-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Fix code style. Space required before the open parenthesis '('.
Signed-off-by: Xinhao Zhang <zhangxinhao1@huawei.com>
Signed-off-by: Kai Deng <dengkai1@huawei.com>
Message-id: 20201103114529.638233-3-zhangxinhao1@huawei.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Fix code style. Don't use '#' flag of printf format ('%#') in
format strings, use '0x' prefix instead
Signed-off-by: Xinhao Zhang <zhangxinhao1@huawei.com>
Signed-off-by: Kai Deng <dengkai1@huawei.com>
Message-id: 20201103114529.638233-2-zhangxinhao1@huawei.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Ibex PLIC, the other fixes the Hypervisor access functions.
Merge remote-tracking branch 'remotes/alistair/tags/pull-riscv-to-apply-20201109' into staging
This fixes two bugs in the RISC-V port. One is a bug in the
Ibex PLIC, the other fixes the Hypervisor access functions.
# gpg: Signature made Tue 10 Nov 2020 03:53:49 GMT
# gpg: using RSA key F6C4AC46D4934868D3B8CE8F21E10D29DF977054
# gpg: Good signature from "Alistair Francis <alistair@alistair23.me>" [full]
# Primary key fingerprint: F6C4 AC46 D493 4868 D3B8 CE8F 21E1 0D29 DF97 7054
* remotes/alistair/tags/pull-riscv-to-apply-20201109:
hw/intc/ibex_plic: Clear the claim register when read
target/riscv: Split the Hypervisor execute load helpers
target/riscv: Remove the hyp load and store functions
target/riscv: Remove the HS_TWO_STAGE flag
target/riscv: Set the virtualised MMU mode when doing hyp accesses
target/riscv: Add a virtualised MMU Mode
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There are void * pointers that get cast to enums in cpu_models.c.
Such casts can result in a smaller integer type and are caught as
warnings by clang, starting with version 11.
Clang 11 finds a bunch of spots in the code that trigger this new warning:
../qemu-base/target/s390x/cpu_models.c:985:21: error: cast to smaller integer type 'S390Feat' from 'void *' [-Werror,-Wvoid-pointer-to-enum-cast]
S390Feat feat = (S390Feat) opaque;
^~~~~~~~~~~~~~~~~
../qemu-base/target/s390x/cpu_models.c:1002:21: error: cast to smaller integer type 'S390Feat' from 'void *' [-Werror,-Wvoid-pointer-to-enum-cast]
S390Feat feat = (S390Feat) opaque;
^~~~~~~~~~~~~~~~~
../qemu-base/target/s390x/cpu_models.c:1036:27: error: cast to smaller integer type 'S390FeatGroup' from 'void *' [-Werror,-Wvoid-pointer-to-enum-cast]
S390FeatGroup group = (S390FeatGroup) opaque;
^~~~~~~~~~~~~~~~~~~~~~
../qemu-base/target/s390x/cpu_models.c:1057:27: error: cast to smaller integer type 'S390FeatGroup' from 'void *' [-Werror,-Wvoid-pointer-to-enum-cast]
S390FeatGroup group = (S390FeatGroup) opaque;
^~~~~~~~~~~~~~~~~~~~~~
4 errors generated.
Avoid this warning by casting the pointer to uintptr_t first.
Signed-off-by: Daniele Buono <dbuono@linux.vnet.ibm.com>
Message-Id: <20201105221905.1350-3-dbuono@linux.vnet.ibm.com>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Split the hypervisor execute load functions into two separate functions.
This avoids us having to pass the memop to the C helper functions.
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 5b1550f0faa3c435cc77f3c1ae811dea98ab9e36.1604464950.git.alistair.francis@wdc.com
Remove the special virtualisation load and store functions and just use
the standard tcg_gen_qemu_ld_tl() and tcg_gen_qemu_st_tl() functions
instead.
As part of this change we ensure we still run an access check to make
sure we can perform the operations.
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 189ac3e53ef2854824d18aad7074c6649f17de2c.1604464950.git.alistair.francis@wdc.com
The HS_TWO_STAGE flag is no longer required as the MMU index contains
the information if we are performing a two stage access.
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: f514b128b1ff0fb41c85f914cee18f905007a922.1604464950.git.alistair.francis@wdc.com
When performing the hypervisor load/store operations set the MMU mode to
indicate that we are virtualised.
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: e411c61a1452cad16853f13cac2fb86dc91ebee8.1604464950.git.alistair.francis@wdc.com
Our current code assumed the target page size is always 4k
when handling PageMask and VPN2; however, variable page size
support was just added to the mips target, so that's no longer true.
Fixes: ee3863b9d4 ("target/mips: Support variable page size")
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Message-Id: <1604636510-8347-2-git-send-email-chenhc@lemote.com>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
[PMD: Replaced find_first_zero_bit() by cto32()]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
This patch adds some gen_io_start() calls to allow execution
of s390x targets in icount mode with -smp 1.
It enables deterministic timers and record/replay features.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Pavel Dovgalyuk <pavel.dovgalyuk@ispras.ru>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Hildenbrand <david@redhat.com>
Message-Id: <160455551747.32240.17074484658979970129.stgit@pasha-ThinkPad-X280>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
When using -Wimplicit-fallthrough in our CFLAGS, the compiler shows this warning:
../target/ppc/excp_helper.c: In function ‘powerpc_excp’:
../target/ppc/excp_helper.c:529:13: warning: this statement may fall through [-Wimplicit-fallthrough=]
529 | msr |= env->error_code;
| ~~~~^~~~~~~~~~~~~~~~~~
../target/ppc/excp_helper.c:530:5: note: here
530 | case POWERPC_EXCP_HDECR: /* Hypervisor decrementer exception */
| ^~~~
Add the corresponding "fall through" comment to fix it.
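The idiom in question, reduced to a standalone example (the case labels stand in
for the real POWERPC_EXCP_* values):

    #include <stdio.h>

    static void classify(int excp, int error_code)
    {
        int msr = 0;
        switch (excp) {
        case 1:                /* think POWERPC_EXCP_DECR */
            msr |= error_code;
            /* fall through */ /* the comment -Wimplicit-fallthrough looks for */
        case 2:                /* think POWERPC_EXCP_HDECR */
            printf("msr=%d\n", msr);
            break;
        }
    }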
Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Chen Qun <kuhn.chenqun@huawei.com>
Message-Id: <20201028055107.2170401-1-kuhn.chenqun@huawei.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
MIPSR6 (not only MIPS32R6) processors support unaligned access in
hardware, so set MO_UNALN in their default_tcg_memop_mask. By the way,
new Loongson-3 (such as Loongson-3A4000) also supports unaligned access;
since both old and new Loongson-3 use the same binaries, we can simply
set MO_UNALN for all Loongson-3 processors.
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1604053541-27822-3-git-send-email-chenhc@lemote.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
There is no "version 2" of the "Lesser" General Public License.
It is either "GPL version 2.0" or "Lesser GPL version 2.1".
This patch replaces all occurrences of "Lesser GPL version 2" with
"Lesser GPL version 2.1" in the comment sections.
Signed-off-by: Chetan Pant <chetan4windows@gmail.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20201016143509.26692-1-chetan4windows@gmail.com>
[PMD: Split hw/ vs target/]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Fix code style. Space required before the open parenthesis '('.
Signed-off-by: Xinhao Zhang <zhangxinhao1@huawei.com>
Signed-off-by: Kai Deng <dengkai1@huawei.com>
Reported-by: Euler Robot <euler.robot@huawei.com>
Reviewed-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20201030004815.4172849-1-zhangxinhao1@huawei.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
In the case of supporting the V extension, add the V extension description
to vmstate_riscv_cpu.
Signed-off-by: Yifei Jiang <jiangyifei@huawei.com>
Signed-off-by: Yipeng Yin <yinyipeng1@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20201026115530.304-6-jiangyifei@huawei.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
In the case of supporting the H extension, add the H extension description
to vmstate_riscv_cpu.
Signed-off-by: Yifei Jiang <jiangyifei@huawei.com>
Signed-off-by: Yipeng Yin <yinyipeng1@huawei.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20201026115530.304-5-jiangyifei@huawei.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
In the case of supporting the PMP feature, add the PMP state description
to vmstate_riscv_cpu.
'vmstate_pmp_addr' and 'num_rules' could be regenerated by
pmp_update_rule(), but that would update num_rules repeatedly.
So extract pmp_update_rule_addr() and pmp_update_rule_nums() to update
'vmstate_pmp_addr' and 'num_rules' respectively.
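A plausible shape of that split (sketch only; assumes QEMU's CPURISCVState and the
two new helpers named above):

    /* pmp_update_rule() becomes a thin wrapper, so migration code can call the
     * two halves directly without bumping num_rules repeatedly. */
    static void pmp_update_rule(CPURISCVState *env, uint32_t pmp_index)
    {
        pmp_update_rule_addr(env, pmp_index);  /* recompute one entry's address range  */
        pmp_update_rule_nums(env);             /* recount num_rules across all entries */
    }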
Signed-off-by: Yifei Jiang <jiangyifei@huawei.com>
Signed-off-by: Yipeng Yin <yinyipeng1@huawei.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20201026115530.304-4-jiangyifei@huawei.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Add basic CPU state description to the newly created machine.c
Signed-off-by: Yifei Jiang <jiangyifei@huawei.com>
Signed-off-by: Yipeng Yin <yinyipeng1@huawei.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20201026115530.304-3-jiangyifei@huawei.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
mstatus/mstatush and vsstatus/vsstatush are two halves on RISCV32.
This patch expands mstatus and vsstatus to uint64_t instead of
target_ulong so that each can be saved as one unit, reducing some
ifdefs in the code.
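For illustration, with mstatus held as a single uint64_t the RV32 CSR views are
just the two 32-bit halves (standalone sketch):

    #include <stdint.h>

    static uint32_t read_mstatus_rv32(uint64_t mstatus)
    {
        return (uint32_t)mstatus;            /* mstatus: low half   */
    }

    static uint32_t read_mstatush_rv32(uint64_t mstatus)
    {
        return (uint32_t)(mstatus >> 32);    /* mstatush: high half */
    }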
Signed-off-by: Yifei Jiang <jiangyifei@huawei.com>
Signed-off-by: Yipeng Yin <yinyipeng1@huawei.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20201026115530.304-2-jiangyifei@huawei.com
In arm_v7m_mmu_idx_for_secstate() we get the 'priv' level to pass to
armv7m_mmu_idx_for_secstate_and_priv() by calling arm_current_el().
This is incorrect when the security state being queried is not the
current one, because arm_current_el() uses the current security state
to determine which of the banked CONTROL.nPRIV bits to look at.
The effect was that if (for instance) Secure state was in privileged
mode but Non-Secure was not then we would return the wrong MMU index.
The only places where we are using this function in a way that could
trigger this bug are for the stack loads during a v8M function-return
and for the instruction fetch of a v8M SG insn.
Fix the bug by expanding out the M-profile version of the
arm_current_el() logic inline so it can use the passed in secstate
rather than env->v7m.secure.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201022164408.13214-1-peter.maydell@linaro.org
Secure mode is not exempted from checking SCR_EL3.TLOR, and in the
future HCR_EL2.TLOR when S-EL2 is enabled.
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
HCR should be applied when NS is set, not when it is cleared.
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The helper functions for performing the udot/sdot operations against
a scalar were not using an address-swizzling macro when converting
the index of the scalar element into a pointer into the vm array.
This had no effect on little-endian hosts but meant we generated
incorrect results on big-endian hosts.
For these insns, the index is indexing over groups of 4 8-bit values,
so 32 bits per indexed entity, and H4() is therefore what we want.
(For Neon the only possible input indexes are 0 and 1.)
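To see why the swizzle matters, here is a conceptual standalone sketch (the
constant is an assumption about the element layout, not QEMU's actual H4()
definition):

    /* If 32-bit elements are stored inside 64-bit host words, element i of the
     * guest-visible order lives in the other half of each word on a big-endian
     * host, so the index must be adjusted before use. */
    static inline int swizzle_idx32(int i)
    {
    #if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
        return i ^ 1;
    #else
        return i;
    #endif
    }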
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20201028191712.4910-3-peter.maydell@linaro.org
In the neon_padd/pmax/pmin helpers for float16, a cut-and-paste error
meant we were using the H4() address swizzler macro rather than the
H2() which is required for 2-byte data. This had no effect on
little-endian hosts but meant we put the result data into the
destination Dreg in the wrong order on big-endian hosts.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20201028191712.4910-2-peter.maydell@linaro.org
We can use proper widening loads to extend 32-bit inputs,
and skip the "widenfn" step.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-12-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In both cases, we can sink the write-back and perform
the accumulate into the normal destination temps.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-11-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The only uses of this function are for loading VFP
double-precision values, and nothing to do with NEON.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Replace all uses of neon_load/store_reg64 within translate-neon.c.inc.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-9-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The only uses of this function are for loading VFP
single-precision values, and nothing to do with NEON.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We can then use this to improve VMOV (scalar to gp) and
VMOV (gp to scalar) so that we simply perform the memory
operation that we wanted, rather than inserting or
extracting from a 32-bit quantity.
These were the last uses of neon_load/store_reg, so remove them.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Model these off the aa64 read/write_vec_element functions.
Use it within translate-neon.c.inc. The new functions do
not allocate or free temps, so this rearranges the calling
code a bit.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This seems a bit more readable than using offsetof CPU_DoubleU.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These are the only users of neon_reg_offset, so remove that.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This will shortly have users outside of translate-neon.c.inc.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This function makes it clear that we're talking about the whole
register, and not the 32-bit piece at index 0. This fixes a bug
when running on a big-endian host.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Here's the next pull request for ppc and spapr related patches, which
should be the last things for soft freeze. Includes:
* Numerous error handling cleanups from Greg Kurz
* Cleanups to cpu realization and hotplug handling from Greg Kurz
* A handful of other small fixes and cleanups
This does include a change to pc_dimm_plug() that isn't in my normal
areas of concern. That's there as a prerequisite for ppc specific
changes, and has an ack from Igor.
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-5.2-20201028' into staging
ppc patch queue 2020-10-28
Here's the next pull request for ppc and spapr related patches, which
should be the last things for soft freeze. Includes:
* Numerous error handling cleanups from Greg Kurz
* Cleanups to cpu realization and hotplug handling from Greg Kurz
* A handful of other small fixes and cleanups
This does include a change to pc_dimm_plug() that isn't in my normal
areas of concern. That's there as a prerequisite for ppc specific
changes, and has an ack from Igor.
# gpg: Signature made Tue 27 Oct 2020 14:13:21 GMT
# gpg: using RSA key 75F46586AE61A66CC44E87DC6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>" [full]
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>" [full]
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>" [full]
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>" [unknown]
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-5.2-20201028:
ppc/: fix some comment spelling errors
spapr: Improve spapr_reallocate_hpt() error reporting
target/ppc: Fix kvmppc_load_htab_chunk() error reporting
spapr: Use error_append_hint() in spapr_reallocate_hpt()
spapr: Simplify error handling in spapr_memory_plug()
spapr: Pass &error_abort when getting some PC DIMM properties
spapr: Use appropriate getter for PC_DIMM_SLOT_PROP
spapr: Use appropriate getter for PC_DIMM_ADDR_PROP
pc-dimm: Drop @errp argument of pc_dimm_plug()
spapr: Simplify spapr_cpu_core_realize() and spapr_cpu_core_unrealize()
spapr: Make spapr_cpu_core_unrealize() idempotent
spapr: Drop spapr_delete_vcpu() unused argument
spapr: Unrealize vCPUs with qdev_unrealize()
spapr: Fix leak of CPU machine specific data
spapr: Move spapr_create_nvdimm_dr_connectors() to core machine code
hw/net: move allocation to the heap due to very large stack frame
ppc/spapr: re-assert IRQs during event-scan if there are pending
spapr: Clarify why DR connectors aren't user creatable
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
I found that there are many spelling errors in the comments of qemu/target/ppc.
I used spellcheck to check for spelling errors and fixed the ones found in this folder.
Signed-off-by: zhaolichang <zhaolichang@huawei.com>
Reviewed-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20201009064449.2336-3-zhaolichang@huawei.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
If kvmppc_load_htab_chunk() fails, its return value is propagated up
to vmstate_load(). It should thus be a negative errno, not -1 (which
maps to EPERM and would lure the user into thinking that the problem
is necessarily related to a lack of privilege).
Return the error reported by KVM or ENOSPC in case of short write.
While here, propagate the error message through an @errp argument
and have the caller to print it with error_report_err() instead
of relying on fprintf().
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <160371604713.305923.5264900354159029580.stgit@bahia.lan>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Since we introduced CPU hot-unplug in sPAPR, we don't unrealize the
vCPU objects explicitly. Instead, we let QOM handle that for us under
object_property_del_all() when the CPU core object is finalized. The
only thing we do is calling cpu_remove_sync() to tear the vCPU thread
down.
This happens to work but it is ugly because:
- we call qdev_realize() but the corresponding qdev_unrealize() is
buried deep in the QOM code
- we call cpu_remove_sync() to undo qemu_init_vcpu() called by
ppc_cpu_realize() in target/ppc/translate_init.c.inc
- the CPU init and teardown paths aren't really symmetrical
The latter didn't bite us so far but a future patch that greatly
simplifies the CPU core realize path needs it to avoid a crash
in QOM.
For all these reasons, have ppc_cpu_unrealize() to undo the changes
of ppc_cpu_realize() by calling cpu_remove_sync() at the right place,
and have the sPAPR CPU core code to call qdev_unrealize().
This requires to add a missing stub because translate_init.c.inc is
also compiled for user mode.
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <160279671236.1808373.14732005038172874990.stgit@bahia.lan>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Transform the prot bit to a qemu internal page bit, and save
it in the page tables.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201021173749.111103-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There is no "version 2" of the "Lesser" General Public License.
It is either "GPL version 2.0" or "Lesser GPL version 2.1".
This patch replaces all occurrences of "Lesser GPL version 2" with
"Lesser GPL version 2.1" in the comment sections.
Signed-off-by: Chetan Pant <chetan4windows@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20201023123840.19988-1-chetan4windows@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
There are many spelling errors in the comments of target/rx.
Use spellcheck to check the spelling errors, then fix them.
Signed-off-by: zhaolichang <zhaolichang@huawei.com>
Reviewed-by: David Edmondson <david.edmondson@oracle.com>
Reviewed-by: Philippe Mathieu-Daude<f4bug@amsat.org>
Message-Id: <20201009064449.2336-5-zhaolichang@huawei.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
There are many spelling errors in the comments of target/sh4.
Use spellcheck to check the spelling errors, then fix them.
Signed-off-by: zhaolichang <zhaolichang@huawei.com>
Reviewed-by: David Edmondson <david.edmondson@oracle.com>
Reviewed-by: Philippe Mathieu-Daude<f4bug@amsat.org>
Message-Id: <20201009064449.2336-10-zhaolichang@huawei.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Linux userspace always sees coprocessors as enabled. CPENABLE register
and coprocessor exceptions are used internally by the kernel to manage
lazy coprocessor context switch. None of it is needed for linux-user.
Always enable all coprocessors for user emulation.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200829104758.22337-1-jcmvbkbc@gmail.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
VS-stage translation in get_physical_address needs to translate the
pte address through G-stage translation, but a G-stage translation
error cannot be distinguished from a VS-stage translation error in
riscv_cpu_tlb_fill. On migration, the destination needs to rebuild the
pte, and such a G-stage translation error must be handled by HS-mode.
So introduce TRANSLATE_STAGE2_FAIL so that riscv_cpu_tlb_fill can
distinguish the two and raise the fault to HS-mode.
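A minimal sketch of the idea (the new enumerator name is from this
patch; the existing names and the exact handling shown are
assumptions):

    /* Sketch: tell a G-stage fault hit while walking the VS-stage page
     * table apart from an ordinary VS-stage fault. */
    typedef enum {
        TRANSLATE_SUCCESS,
        TRANSLATE_FAIL,        /* ordinary (VS-stage or single-stage) fault */
        TRANSLATE_PMP_FAIL,
        TRANSLATE_STAGE2_FAIL, /* G-stage fault on the VS-stage pte address */
    } TranslateResult;

    /* In riscv_cpu_tlb_fill() (sketch): on TRANSLATE_STAGE2_FAIL, raise a
     * guest-page fault so the trap is delivered to HS-mode. */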
Signed-off-by: Yifei Jiang <jiangyifei@huawei.com>
Signed-off-by: Yipeng Yin <yinyipeng1@huawei.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20201014101728.848-1-jiangyifei@huawei.com
[ Change by AF:
- Clarify the fault_pte_addr shift
]
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The HLVX.WU instruction is supposed to read a machine word,
but prior to this change it read a byte instead.
Fixes: 8c5362acb5 ("target/riscv: Allow generating hlv/hlvx/hsv instructions")
Signed-off-by: Georg Kotheimer <georg.kotheimer@kernkonzept.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20201013172223.443645-1-georg.kotheimer@kernkonzept.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The hstatus.GVA bit was not set if the faulting guest virtual address
was zero.
Signed-off-by: Georg Kotheimer <georg.kotheimer@kernkonzept.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20201013173054.451135-1-georg.kotheimer@kernkonzept.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
When trapping from virt into HS mode, hstatus.SPVP was set to
the value of sstatus.SPP, as according to the specification both
flags should be set to the same value.
However, the assignment of SPVP takes place before SPP itself is
updated, which results in SPVP having an outdated value.
Signed-off-by: Georg Kotheimer <georg.kotheimer@kernkonzept.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20201013151054.396481-1-georg.kotheimer@kernkonzept.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Currently we log interrupts and exceptions using the trace backend in
riscv_cpu_do_interrupt(). We also log exceptions using the interrupt log
mask (-d int) in riscv_raise_exception().
This patch converts riscv_cpu_do_interrupt() to log both interrupts and
exceptions with the interrupt log mask, so that both are printed when a
user runs QEMU with -d int.
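Roughly (the logging helper and mask exist in QEMU; the message format
here is only illustrative):

    /* Sketch: riscv_cpu_do_interrupt() logs under -d int as well */
    qemu_log_mask(CPU_LOG_INT, "%s: hart:%d, async:%d, cause:0x%" PRIx64 "\n",
                  __func__, cs->cpu_index, async, (uint64_t)cause);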
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 29a8c766c7c4748d0f2711c3a0abb81208138c5e.1601652179.git.alistair.francis@wdc.com
Diag318 fencing needs to be determined based on the current VM PV
state and not on the state the VM has when we create the CPU model.
Fixes: fabdada935 ("s390: guest support for diagnose 0x318")
Reported-by: Marc Hartmayer <mhartmay@linux.ibm.com>
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Tested-by: Marc Hartmayer <mhartmay@linux.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Collin Walling <walling@linux.ibm.com>
Acked-by: David Hildenbrand <david@redhat.com>
Message-Id: <20201022103135.126033-3-frankja@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
If the M-profile low-overhead-branch extension is implemented, FPSCR
bits [18:16] are a new field LTPSIZE. If MVE is not implemented
(currently always true for us) then this field always reads as 4 and
ignores writes.
These bits used to be the vector-length field for the old
short-vector extension, so we need to take care that they are not
misinterpreted as setting vec_len. We do this with a rearrangement
of the vfp_set_fpscr() code that deals with vec_len, vec_stride
and also the QC bit; this obviates the need for the M-profile
only masking step that we used to have at the start of the function.
We provide a new field in CPUState for LTPSIZE, even though this
will always be 4, in preparation for MVE, so we don't have to
come back later and split it out of the vfp.xregs[FPSCR] value.
(This state struct field will be saved and restored as part of
the FPSCR value via the vmstate_fpscr in machine.c.)
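For reference, a standalone sketch of where the field sits (bit
positions as above; the helper names are just for illustration):

    #include <stdint.h>
    #include <stdio.h>

    /* FPSCR[18:16] is LTPSIZE; with no MVE it reads as 4, writes ignored */
    static uint32_t fpscr_get_ltpsize(uint32_t fpscr)
    {
        return (fpscr >> 16) & 0x7;
    }

    static uint32_t fpscr_set_ltpsize(uint32_t fpscr, uint32_t ltpsize)
    {
        return (fpscr & ~(0x7u << 16)) | ((ltpsize & 0x7) << 16);
    }

    int main(void)
    {
        printf("LTPSIZE = %u\n", fpscr_get_ltpsize(fpscr_set_ltpsize(0, 4)));
        return 0;
    }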
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201019151301.2046-11-peter.maydell@linaro.org
M-profile CPUs with half-precision floating point support should
be able to write to FPSCR.FZ16, but an M-profile specific masking
of the value at the top of vfp_set_fpscr() currently prevents that.
This is not yet an active bug because we have no M-profile
FP16 CPUs, but needs to be fixed before we can add any.
The bits that the masking is effectively preventing from being
set are the A-profile only short-vector Len and Stride fields,
plus the Neon QC bit. Rearrange the order of the function so
that those fields are handled earlier and only under a suitable
guard; this allows us to drop the M-profile specific masking,
making FZ16 writeable.
This change also makes the QC bit correctly RAZ/WI for older
no-Neon A-profile cores.
This refactoring also paves the way for the low-overhead-branch
LTPSIZE field, which uses some of the bits that are used for
A-profile Stride and Len.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201019151301.2046-10-peter.maydell@linaro.org
In arm_cpu_realizefn(), if the CPU has VFP or Neon disabled then we
squash the ID register fields so that we don't advertise it to the
guest. This code was written for A-profile and needs some tweaks to
work correctly on M-profile:
* A-profile only fields should not be zeroed on M-profile:
- MVFR0.FPSHVEC,FPTRAP
- MVFR1.SIMDLS,SIMDINT,SIMDSP,SIMDHP
- MVFR2.SIMDMISC
* M-profile only fields should be zeroed on M-profile:
- MVFR1.FP16
In particular, because MVFR1.SIMDHP on A-profile is the same field as
MVFR1.FP16 on M-profile this code was incorrectly disabling FP16
support on an M-profile CPU (where has_neon is always false). This
isn't a visible bug yet because we don't have any M-profile CPUs with
FP16 support, but the change is necessary before we introduce any.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20201019151301.2046-9-peter.maydell@linaro.org
v8.1M's "low-overhead-loop" extension has three instructions
for looping:
* DLS (start of a do-loop)
* WLS (start of a while-loop)
* LE (end of a loop)
The loop-start instructions are both simple operations to start a
loop whose iteration count (if any) is in LR. The loop-end
instruction handles "decrement iteration count and jump back to loop
start"; it also caches the information about the branch back to the
start of the loop to improve performance of the branch on subsequent
iterations.
As with the branch-future instructions, the architecture permits an
implementation to discard the LO_BRANCH_INFO cache at any time, and
QEMU takes the IMPDEF option to never set it in the first place
(equivalent to discarding it immediately), because for us a "real"
implementation would be unnecessary complexity.
(This implementation only provides the simple looping constructs; the
vector extension MVE (Helium) adds some extra variants to handle
looping across vectors. We'll add those later when we implement
MVE.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201019151301.2046-8-peter.maydell@linaro.org
v8.1M implements a new 'branch future' feature, which is a
set of instructions that request the CPU to perform a branch
"in the future", when it reaches a particular execution address.
In hardware, the expected implementation is that the information
about the branch location and destination is cached and then
acted upon when execution reaches the specified address.
However the architecture permits an implementation to discard
this cached information at any point, and so guest code must
always include a normal branch insn at the branch point as
a fallback. In particular, an implementation is specifically
permitted to treat all BF insns as NOPs (which is equivalent
to discarding the cached information immediately).
For QEMU, implementing this caching of branch information
would be complicated and would not improve the speed of
execution at all, so we make the IMPDEF choice to implement
all BF insns as NOPs.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20201019151301.2046-7-peter.maydell@linaro.org
The BLX immediate insn in the Thumb encoding always performs
a switch from Thumb to Arm state. This would be totally useless
in M-profile which has no Arm decoder, and so the instruction
does not exist at all there. Make the encoding UNDEF for M-profile.
(This part of the encoding space is used for the branch-future
and low-overhead-loop insns in v8.1M.)
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20201019151301.2046-6-peter.maydell@linaro.org
The t32 decode has a group which represents a set of insns
which overlap with B_cond_thumb because they have [25:23]=111
(which is an invalid condition code field for the branch insn).
This group is currently defined using the {} overlap-OK syntax,
but it is almost entirely non-overlapping patterns. Switch
it over to use a non-overlapping group.
For this to be valid syntactically, CPS must move into the same
overlapping-group as the hint insns (CPS vs hints was the
only actual use of the overlap facility for the group).
The non-overlapping subgroup for CLREX/DSB/DMB/ISB/SB is no longer
necessary and so we can remove it (promoting those insns to
be members of the parent group).
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20201019151301.2046-5-peter.maydell@linaro.org
From v8.1M, disabled-coprocessor handling changes slightly:
* coprocessors 8, 9, 14 and 15 are also governed by the
cp10 enable bit, like cp11
* an extra range of instruction patterns is considered
to be inside the coprocessor space
We previously marked these up with TODO comments; implement the
correct behaviour.
Unfortunately there is no ID register field which indicates this
behaviour. We could in theory test an unrelated ID register which
indicates guaranteed-to-be-in-v8.1M behaviour like ID_ISAR0.CmpBranch
>= 3 (low-overhead-loops), but it seems better to simply define a new
ARM_FEATURE_V8_1M feature flag and use it for this and other
new-in-v8.1M behaviour that isn't identifiable from the ID registers.
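A small sketch of the gating this flag enables (the feature name is
from this patch; the decode-time check below is illustrative only):

    /* Sketch: in the M-profile coprocessor-access check */
    if (arm_dc_feature(s, ARM_FEATURE_V8_1M) &&
        (cpnum == 8 || cpnum == 9 || cpnum == 14 || cpnum == 15)) {
        /* v8.1M: these coprocessors are governed by the cp10 enable bit */
        cpnum = 10;
    }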
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201019151301.2046-3-peter.maydell@linaro.org
Unlike many other bits in HCR_EL2, the description for this
bit does not contain the phrase "if ... this field behaves
as 0 for all purposes other than", so do not squash the bit
in arm_hcr_el2_eff.
Instead, replicate the E2H+TGE test in the two places that
require it.
Reported-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Message-id: 20201008162155.161886-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The reporting in AArch64.TagCheckFail only depends on PSTATE.EL,
and not the AccType of the operation. There are two guest
visible problems that affect LDTR and STTR because of this:
(1) Selecting TCF0 vs TCF1 to decide on reporting,
(2) Report "data abort same el" not "data abort lower el".
Reported-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Message-id: 20201008162155.161886-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We already have the full ARMMMUIdx as computed from the
function parameter.
For the purpose of regime_has_2_ranges, we can ignore any
difference between AccType_Normal and AccType_Unpriv, which
would be the only difference between the passed mmu_idx
and arm_mmu_idx_el.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Message-id: 20201008162155.161886-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When TBI is enabled in a given regime, 56 bits of the address
are significant and we need to clear out any other matching
virtual addresses with differing tags.
The other uses of tlb_flush_page (without mmuidx) in this file
are only used by aarch32 mode.
Fixes: 38d931687f
Reported-by: Jordan Frank <jordanfrank@fb.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20201016210754.818257-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For AArch32, unlike the VCVT of integer to float, which honours the
rounding mode specified by the FPSCR, VCVT of fixed-point to float is
always round-to-nearest. (AArch64 fixed-point-to-float conversions
always honour the FPCR rounding mode.)
Implement this by providing _round_to_nearest versions of the
relevant helpers which set the rounding mode temporarily when making
the call to the underlying softfloat function.
We only need to change the VFP VCVT instructions, because the
standard-FPSCR value used by the Neon VCVT is always set to
round-to-nearest, so we don't need to do the extra work of saving
and restoring the rounding mode.
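The save/force/restore shape of those wrappers, shown with standard C
<fenv.h> purely for illustration (QEMU's softfloat has its own
rounding-mode accessors; nothing below is the actual helper code):

    #include <fenv.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Force round-to-nearest around a conversion, then put the caller's
     * rounding mode back. */
    static float fixed_to_float_round_to_nearest(int32_t val, int frac_bits)
    {
        int saved = fegetround();
        fesetround(FE_TONEAREST);
        float r = (float)val / (float)(1 << frac_bits);
        fesetround(saved);
        return r;
    }

    int main(void)
    {
        fesetround(FE_UPWARD);  /* pretend FPSCR asked for a different mode */
        printf("%f\n", fixed_to_float_round_to_nearest(3, 1)); /* 1.500000 */
        return 0;
    }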
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201013103532.13391-1-peter.maydell@linaro.org
The SMLAD instruction is supposed to:
* signed multiply Rn[15:0] * Rm[15:0]
* signed multiply Rn[31:16] * Rm[31:16]
* perform a signed addition of the products and Ra
* set Rd to the low 32 bits of the theoretical
infinite-precision result
* set the Q flag if the sign-extension of Rd
would differ from the infinite-precision result
(ie on overflow)
Our current implementation doesn't quite do this, though: it performs
an addition of the products setting Q on overflow, and then it adds
Ra, again possibly setting Q. This sometimes incorrectly sets Q when
the architecturally mandated only-check-for-overflow-once algorithm
does not. For instance:
r1 = 0x80008000; r2 = 0x80008000; r3 = 0xffffffff
smlad r0, r1, r2, r3
This is (-32768 * -32768) + (-32768 * -32768) - 1
The products are both 0x4000_0000, so when added together as 32-bit
signed numbers they overflow (and QEMU sets Q), but because the
addition of Ra == -1 brings the total back down to 0x7fff_ffff
there is no overflow for the complete operation and setting Q is
incorrect.
Fix this edge case by resorting to 64-bit arithmetic for the
case where we need to add three values together.
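A standalone sketch of that fix (names and shapes are illustrative,
not the QEMU helper): do the whole sum at 64 bits, then compare
against the truncated 32-bit result to decide Q exactly once.

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t smlad_model(uint32_t rn, uint32_t rm, uint32_t ra, int *q)
    {
        int64_t p1 = (int64_t)(int16_t)rn * (int16_t)rm;
        int64_t p2 = (int64_t)(int16_t)(rn >> 16) * (int16_t)(rm >> 16);
        int64_t sum = p1 + p2 + (int64_t)(int32_t)ra;
        uint32_t rd = (uint32_t)sum;
        if ((int64_t)(int32_t)rd != sum) {
            *q = 1;   /* overflow checked once, on the final result */
        }
        return rd;
    }

    int main(void)
    {
        int q = 0;
        uint32_t rd = smlad_model(0x80008000, 0x80008000, 0xffffffff, &q);
        printf("rd=0x%08x q=%d\n", rd, q);  /* rd=0x7fffffff q=0 */
        return 0;
    }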
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201009144712.11187-1-peter.maydell@linaro.org
Merge remote-tracking branch 'remotes/philmd-gitlab/tags/mips-next-20201017' into staging
MIPS patches queue
. Fix some comment spelling errors
. Demacro some TCG helpers
. Add loongson-ext lswc2/lsdc2 group of instructions
. Log unimplemented cache opcode
. Increase number of TLB entries on the 34Kf core
. Allow the CPU to use dynamic frequencies
. Calculate the CP0 timer period using the CPU frequency
. Set CPU frequency for each machine
. Fix Malta FPGA I/O region size
. Allow running qtests when ROM is missing
. Add record/replay acceptance tests
. Update MIPS CPU documentation
. MAINTAINERS updates
CI jobs results:
https://gitlab.com/philmd/qemu/-/pipelines/203931842
https://travis-ci.org/github/philmd/qemu/builds/736491461
https://cirrus-ci.com/build/6272264062631936
https://app.shippable.com/github/philmd/qemu/runs/886/summary/console
# gpg: Signature made Sat 17 Oct 2020 14:59:53 BST
# gpg: using RSA key FAABE75E12917221DCFD6BB2E3E32C2CDEADC0DE
# gpg: Good signature from "Philippe Mathieu-Daudé (F4BUG) <f4bug@amsat.org>" [full]
# Primary key fingerprint: FAAB E75E 1291 7221 DCFD 6BB2 E3E3 2C2C DEAD C0DE
* remotes/philmd-gitlab/tags/mips-next-20201017: (44 commits)
target/mips: Increase number of TLB entries on the 34Kf core (16 -> 64)
MAINTAINERS: Remove duplicated Malta test entries
MAINTAINERS: Downgrade MIPS Boston to 'Odd Fixes', fix Paul Burton mail
MAINTAINERS: Put myself forward for MIPS target
MAINTAINERS: Remove myself
docs/system: Update MIPS CPU documentation
tests/acceptance: Add MIPS record/replay tests
hw/mips: Remove exit(1) in case of missing ROM
hw/mips: Rename TYPE_MIPS_BOSTON to TYPE_BOSTON
hw/mips: Simplify code using ROUND_UP(INITRD_PAGE_SIZE)
hw/mips: Simplify loading 64-bit ELF kernels
hw/mips/malta: Use clearer qdev style
hw/mips/malta: Move gt64120 related code together
hw/mips/malta: Fix FPGA I/O region size
target/mips/cpu: Display warning when CPU is used without input clock
hw/mips/cps: Do not allow use without input clock
hw/mips/malta: Set CPU frequency to 320 MHz
hw/mips/boston: Set CPU frequency to 1 GHz
hw/mips/cps: Expose input clock and connect it to CPU cores
hw/mips/jazz: Correct CPU frequencies
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
During my split of cpus.c, the code line
"current_cpu = cpu"
was removed by mistake, causing HAX to break.
This commit fixes the situation by restoring it.
Reported-by: Volker Rümelin <vr_qemu@t-online.de>
Fixes: e92558e4bf
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Message-Id: <20201016080032.13914-1-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Per "MIPS32 34K Processor Core Family Software User's Manual,
Revision 01.13" page 8 in "Joint TLB (JTLB)" section:
"The JTLB is a fully associative TLB cache containing 16, 32,
or 64-dual-entries mapping up to 128 virtual pages to their
corresponding physical addresses."
There is no particular reason to restrict the 34Kf core model to
16 TLB entries, so raise its config to 64.
This is helpful for other projects, in particular the Yocto Project:
it uses the qemu-system-mips 34Kf CPU model to run its 32-bit MIPS CI
loop. It was observed that CI test execution time was almost twice as
long as for the 64-bit MIPS variant that runs under the
MIPS64R2-generic model. Investigation concluded that the difference in
the number of TLB entries (16 for 34Kf vs 64 for MIPS64R2-generic) is
responsible for most of the difference in CI wall-clock time: with
only 16 TLB entries, Linux user-land thrashes the TLB more and has to
execute more instructions in TLB refill handler calls, so it runs much
longer.
(https://lists.gnu.org/archive/html/qemu-devel/2020-10/msg03428.html)
Buglink: https://bugzilla.yoctoproject.org/show_bug.cgi?id=13992
Reported-by: Victor Kamensky <kamensky@cisco.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201016133317.553068-1-f4bug@amsat.org>
All our QOM users provide an input clock. In order to avoid future
machines being added without a clock, display a warning.
User-mode emulation uses the CP0 timer with the RDHWR instruction
(see commit cdfcad7883), so keep using the fixed 200 MHz clock there
without displaying any warning. Only display the warning in
system-mode emulation.
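Sketch of the check (clock_has_source() and warn_report() are existing
APIs; the field name and message wording are assumptions):

    /* Sketch: in the CPU realize path, system-mode only */
    #ifndef CONFIG_USER_ONLY
        if (!clock_has_source(cpu->clock)) {
            warn_report("CPU input clock is not connected to any output clock, "
                        "using default frequency");
        }
    #endif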
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20201012095804.3335117-22-f4bug@amsat.org>
Introduce a helper to create a MIPS CPU and connect it to
a reference clock. This helper is not MIPS specific, but so
far only MIPS CPUs need it.
Suggested-by: Huacai Chen <zltjiangshi@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20201012095804.3335117-13-f4bug@amsat.org>
Use the Clock API and let the CPU object have an input clock.
If no clock is connected, keep using the default frequency of
200 MHz used since the introduction of the 'r4k' machine in
commit 6af0bf9c7c.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20201012095804.3335117-12-f4bug@amsat.org>
Since not all CPU cores use a CP0 timer running at half the frequency
of the CPU, make this variable a property.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20201012095804.3335117-11-f4bug@amsat.org>
The CP0 timer period is a function of the CPU frequency.
Start using the default values, which will be replaced by
properties in the next commits.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Message-Id: <20201012095804.3335117-10-f4bug@amsat.org>
Currently the CP0 timer period is fixed at 10 ns, corresponding
to a fixed CPU frequency of 200 MHz (using half the speed of the
CPU).
In a few commits we will be able to use a different CPU frequency.
In preparation, move the cp0_count_ns variable to CPUMIPSState
so we can modify it.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Message-Id: <20201012095804.3335117-9-f4bug@amsat.org>
The TIMER_PERIOD value of '10 ns' can be explained by looking at
commit 6af0bf9c7c, where the CPU frequency is 200 MHz and the default
CP0 count rate is half the frequency of the CPU. Document that.
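Spelled out (standalone sketch of the arithmetic):

    #include <stdio.h>

    int main(void)
    {
        const double cpu_freq_hz   = 200e6;           /* 200 MHz CPU clock */
        const double count_rate_hz = cpu_freq_hz / 2; /* CP0 Count at half rate */
        printf("CP0 timer period = %.0f ns\n", 1e9 / count_rate_hz); /* 10 ns */
        return 0;
    }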
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20201012095804.3335117-8-f4bug@amsat.org>
Name variables holding nanoseconds with the '_ns' suffix.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
Message-Id: <20201012095804.3335117-7-f4bug@amsat.org>
The get_random() helper uses the CP0_Wired register, which is
unrelated to the CP0_Count register used as timer.
Commit e16fe40c87 ("Move the MIPS CPU timer in a separate file")
incorrectly moved this get_random() helper with timer specific
code. Move it back to generic CP0 helpers.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
Reviewed-by: Luc Michel <luc@lmichel.fr>
Message-Id: <20201012095804.3335117-6-f4bug@amsat.org>
If the guest uses a cache opcode we are not expecting, log it to
give us a chance to notice it, in case we should actually do
something about it.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Message-Id: <20200813181527.22551-4-f4bug@amsat.org>
QEMU does not model caches, so there is not much to do with the
Invalidate/Writeback opcodes. Make that explicit by adding a comment.
Suggested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Message-Id: <20200813181527.22551-3-f4bug@amsat.org>
The cache operation is encoded in bits [20:18] of the instruction.
The 'op' argument of helper_cache() contains the bits [20:16].
Extract the 3 bits and parse them using a switch case. This allows
us to handle multiple cache types (the cache type is encoded in
bits [17:16]).
Previously the if() block was only checking the D-Cache (Primary
Data or Unified Primary). Now we also handle the I-Cache (Primary
Instruction), S-Cache (Secondary) and T-Cache (Tertiary).
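A standalone sketch of that decode (bit positions as described above;
the names are illustrative):

    #include <stdint.h>
    #include <stdio.h>

    /* 'op' carries instruction bits [20:16]: bits [17:16] select the cache,
     * bits [20:18] select the operation. */
    static void decode_cache_op(uint32_t op)
    {
        unsigned cache = op & 0x3;        /* which cache: I, D, S or T */
        unsigned type  = (op >> 2) & 0x7; /* operation, switch-cased in QEMU */
        printf("cache=%u operation=%u\n", cache, type);
    }

    int main(void)
    {
        decode_cache_op(0x14);  /* prints cache=0 operation=5 */
        return 0;
    }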
Reported-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Message-Id: <20200813181527.22551-2-f4bug@amsat.org>
LDC2/SDC2 opcodes have been rewritten as "load & store with offset"
group of instructions by loongson-ext ASE.
This patch adds implementations of these instructions:
gslbx: load 1 byte to GPR
gslhx: load 2 bytes to GPR
gslwx: load 4 bytes to GPR
gsldx: load 8 bytes to GPR
gslwxc1: load 4 bytes to FPR
gsldxc1: load 8 bytes to FPR
gssbx: store 1 byte from GPR
gsshx: store 2 bytes from GPR
gsswx: store 4 bytes from GPR
gssdx: store 8 bytes from GPR
gsswxc1: store 4 bytes from FPR
gssdxc1: store 8 bytes from FPR
Details of Loongson-EXT are here:
https://github.com/FlyGoat/loongson-insn/blob/master/loongson-ext.md
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1602831120-3377-5-git-send-email-chenhc@lemote.com>
LWC2 & SWC2 have been rewritten by Loongson EXT vendor ASE
as "load/store quad word" and "shifted load/store" groups of
instructions.
This patch adds implementations of these instructions:
gslwlc1: similar to lwl but RT is FPR instead of GPR
gslwrc1: similar to lwr but RT is FPR instead of GPR
gsldlc1: similar to ldl but RT is FPR instead of GPR
gsldrc1: similar to ldr but RT is FPR instead of GPR
gsswlc1: similar to swl but RT is FPR instead of GPR
gsswrc1: similar to swr but RT is FPR instead of GPR
gssdlc1: similar to sdl but RT is FPR instead of GPR
gssdrc1: similar to sdr but RT is FPR instead of GPR
Details of Loongson-EXT are here:
https://github.com/FlyGoat/loongson-insn/blob/master/loongson-ext.md
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Message-Id: <1602831120-3377-4-git-send-email-chenhc@lemote.com>
[PMD: Reuse t1 on MIPS32, reintroduce t2/fp0]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
LWC2 & SWC2 have been rewritten by Loongson EXT vendor ASE
as "load/store quad word" and "shifted load/store" groups of
instructions.
This patch adds implementations of these instructions:
gslq: load 16 bytes to GPR
gssq: store 16 bytes from GPR
gslqc1: load 16 bytes to FPR
gssqc1: store 16 bytes from FPR
Details of Loongson-EXT are here:
https://github.com/FlyGoat/loongson-insn/blob/master/loongson-ext.md
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Message-Id: <1602831120-3377-3-git-send-email-chenhc@lemote.com>
[PMD: Restrict t1 variable to TARGET_MIPS64, remove unused t2/fp0]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Remove function definitions via macros to achieve better code clarity.
Signed-off-by: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1602103041-32017-4-git-send-email-aleksandar.qemu.devel@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Remove function definitions via macros to achieve better code clarity.
Signed-off-by: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1602103041-32017-3-git-send-email-aleksandar.qemu.devel@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Remove function definitions via macros to achieve better code clarity.
Signed-off-by: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1602103041-32017-2-git-send-email-aleksandar.qemu.devel@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
There are many spelling errors in the comments in target/mips/.
Use spellcheck to find the spelling errors, then fix them.
Signed-off-by: zhaolichang <zhaolichang@huawei.com>
Reviewed-by: David Edmondson <david.edmondson@oracle.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20201009064449.2336-7-zhaolichang@huawei.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>