The M-profile FPCCR.S bit indicates the security status of
the floating point context. In the pseudocode ExecuteFPCheck()
function it is unconditionally set to match the current
security state whenever a floating point instruction is
executed.
Implement this by adding a new TB flag which tracks whether
FPCCR.S is different from the current security state, so
that we only need to emit the code to update it in the
less-common case when it is not already set correctly.
Note that we will add the handling for the other work done
by ExecuteFPCheck() in later commits.
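As a rough sketch of the translate-time pattern this enables (the flag
and field names here are illustrative, not necessarily the exact QEMU
definitions):

    /* TB flag: set when FPCCR.S does not match the current
     * security state */
    FIELD(TBFLAG_A32, FPCCR_S_WRONG, 20, 1)

    /* In the translator, emit the fixup only in the uncommon case */
    if (s->v8m_fpccr_s_wrong) {
        TCGv_i32 tmp = load_cpu_field(v7m.fpccr[M_REG_S]);
        if (s->v8m_secure) {
            tcg_gen_ori_i32(tmp, tmp, R_V7M_FPCCR_S_MASK);
        } else {
            tcg_gen_andi_i32(tmp, tmp, ~R_V7M_FPCCR_S_MASK);
        }
        store_cpu_field(tmp, v7m.fpccr[M_REG_S]);
        /* the flag is recomputed when the next TB is looked up */
    }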
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-19-peter.maydell@linaro.org
We are close to running out of TB flags for AArch32; we could
start using the cs_base word, but before we do that we can
economise on our usage by sharing the same bits for the VFP
VECSTRIDE field and the XScale XSCALE_CPAR field. This
works because no XScale CPU ever had VFP.
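A sketch of what the shared allocation might look like (bit positions
illustrative):

    /* These two bits hold VECSTRIDE on VFP CPUs and XSCALE_CPAR on
     * XScale CPUs; no CPU has both, so the fields may overlap.
     */
    FIELD(TBFLAG_A32, VECSTRIDE, 12, 2)    /* VFP only */
    FIELD(TBFLAG_A32, XSCALE_CPAR, 12, 2)  /* XScale only */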
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-18-peter.maydell@linaro.org
Move the NS TBFLAG down from bit 19 to bit 6, which has not
been used since commit c1e3781090 in 2015, when we
started passing the entire MMU index in the TB flags rather
than just a 'privilege level' bit.
This rearrangement is not strictly necessary, but means that
we can put M-profile-only bits next to each other rather
than scattered across the flag word.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-17-peter.maydell@linaro.org
Handle floating point registers in exception return.
This corresponds to pseudocode functions ValidateExceptionReturn(),
ExceptionReturn(), PopStack() and ConsumeExcStackFrame().
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-16-peter.maydell@linaro.org
The magic value pushed onto the callee stack as an integrity
check is different if floating point is present.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-15-peter.maydell@linaro.org
The TailChain() pseudocode specifies that a tail chaining
exception should sanitize the excReturn all-ones bits and
(if there is no FPU) the excReturn FType bits; we weren't
doing this.
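Sketched roughly, the sanitizing step amounts to:

    /* EXC_RETURN bits [31:24] are RES1; FType (bit 4) is RES1
     * when there is no FPU */
    excret = deposit32(excret, 24, 8, 0xff);
    if (!arm_feature(env, ARM_FEATURE_VFP)) {
        excret |= 1 << 4;
    }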
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-14-peter.maydell@linaro.org
For v8M floating point support, transitions from Secure
to Non-secure state via BXNS and BLXNS must clear the
CONTROL.SFPA bit. (This corresponds to the pseudocode
BranchToNS() function.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-13-peter.maydell@linaro.org
Implement the code which updates the FPCCR register on an
exception entry where we are going to use lazy FP stacking.
We have to defer to the NVIC to determine whether the
various exceptions are currently ready or not.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190416125744.27770-12-peter.maydell@linaro.org
Handle floating point registers in exception entry.
This corresponds to the FP-specific parts of the pseudocode
functions ActivateException() and PushStack().
We defer the code corresponding to UpdateFPCCR() to a later patch.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-11-peter.maydell@linaro.org
Currently the code in v7m_push_stack() which detects a violation
of the v8M stack limit simply returns early if it does so. This
is OK for the current integer-only code, but won't work for the
floating point handling we're about to add. We need to continue
executing the rest of the function so that we check for other
exceptions like not having permission to use the FPU and so
that we correctly set the FPCCR state if we are doing lazy
stacking. Refactor to avoid the early return.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-10-peter.maydell@linaro.org
The M-profile CONTROL register has two bits -- SFPA and FPCA --
which relate to floating-point support, and should be RES0 otherwise.
Handle them correctly in the MSR/MRS register access code.
Neither is banked between security states, so they are stored
in v7m.control[M_REG_S] regardless of current security state.
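As a loose sketch of the read side (not the exact QEMU code):

    /* MRS CONTROL: merge the banked bits with the unbanked FP bits */
    uint32_t value = env->v7m.control[env->v7m.secure]
        & ~(R_V7M_CONTROL_FPCA_MASK | R_V7M_CONTROL_SFPA_MASK);
    if (arm_feature(env, ARM_FEATURE_VFP)) {
        value |= env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK;
        if (env->v7m.secure) {
            /* SFPA is not visible to Non-secure code */
            value |= env->v7m.control[M_REG_S]
                & R_V7M_CONTROL_SFPA_MASK;
        }
    }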
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-9-peter.maydell@linaro.org
If the floating point extension is present, then the SG instruction
must clear the CONTROL_S.SFPA bit. Implement this.
(On a no-FPU system the bit will always be zero, so we don't need
to make the clearing of the bit conditional on ARM_FEATURE_VFP.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-8-peter.maydell@linaro.org
Correct the decode of the M-profile "coprocessor and
floating-point instructions" space:
* op0 == 0b11 is always unallocated
* if the CPU has an FPU then all insns with op1 == 0b101
are floating point and go to disas_vfp_insn()
For the moment we leave VLLDM and VLSTM as NOPs; in
a later commit we will fill in the proper implementation
for the case where an FPU is present.
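A simplified sketch of the corrected decode, assuming op0 and op1 have
already been extracted from the instruction:

    if (op0 == 3) {
        goto illegal_op;            /* op0 == 0b11: unallocated */
    }
    if (arm_dc_feature(s, ARM_FEATURE_VFP) && op1 == 5) {
        /* op1 == 0b101: all such insns are floating point */
        if (disas_vfp_insn(s, insn)) {
            goto illegal_op;
        }
        return;
    }
    /* otherwise: NOCP / VLLDM / VLSTM handling as before */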
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-7-peter.maydell@linaro.org
Like AArch64, M-profile floating point has no FPEXC enable
bit to gate floating point; so always set the VFPEN TB flag.
M-profile also has CPACR and NSACR similar to A-profile;
they behave slightly differently:
* the CPACR is banked between Secure and Non-Secure
* if the NSACR forces a trap then this is taken to
the Secure state, not the Non-Secure state
Honour the CPACR and NSACR settings. The NSACR handling
requires us to borrow the exception.target_el field
(usually meaningless for M profile) to distinguish the
NOCP UsageFault taken to Secure state from the more
usual fault taken to the current security state.
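The resulting check is, in rough outline (bit numbers follow the
architectural register layouts; not the literal QEMU code):

    /* Does the current state have access to CP10/CP11? */
    if (!extract32(env->v7m.cpacr[env->v7m.secure], 20, 2)) {
        return false;   /* CPACR: NOCP UsageFault in current state */
    }
    if (arm_feature(env, ARM_FEATURE_M_SECURITY) && !env->v7m.secure &&
        !extract32(env->v7m.nsacr, 10, 1)) {
        /* NSACR trap: NOCP UsageFault, but taken to Secure state;
         * target_el is borrowed to carry that information */
        env->exception.target_el = 3;
        return false;
    }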
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-6-peter.maydell@linaro.org
The only "system register" that M-profile floating point exposes
via the VMSR/VMRS instructions is FPSCR, and it does not have
the odd special case for rd==15. Add a check to ensure we only
expose FPSCR.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-5-peter.maydell@linaro.org
The M-profile floating point support has three associated config
registers: FPCAR, FPCCR and FPDSCR. It also makes the registers
CPACR and NSACR have behaviour other than reads-as-zero.
Add support for all of these as simple reads-as-written registers.
We will hook up actual functionality later.
The main complexity here is handling the FPCCR register, which
has a mix of banked and unbanked bits.
Note that we don't share storage with the A-profile
cpu->cp15.nsacr and cpu->cp15.cpacr_el1, though the behaviour
is quite similar, for two reasons:
* the M profile CPACR is banked between security states
* it preserves the invariant that M profile uses no state
inside the cp15 substruct
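A sketch of how a mixed banked/unbanked read might be composed (the
mask name is illustrative):

    /* Some FPCCR bits are banked by security state; the rest are
     * shared and live in the M_REG_S copy */
    static uint32_t fpccr_read(CPUARMState *env, bool secure)
    {
        uint32_t banked = env->v7m.fpccr[secure] & FPCCR_BANKED_MASK;
        uint32_t shared = env->v7m.fpccr[M_REG_S] & ~FPCCR_BANKED_MASK;
        return banked | shared;
    }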
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-4-peter.maydell@linaro.org
Enforce that for M-profile various FPSCR bits which are RES0 there
but have defined meanings on A-profile are never settable. This
ensures that M-profile code can't enable the A-profile behaviour
(notably vector length/stride handling) by accident.
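Roughly, the guard in the FPSCR write path looks like this (sketch):

    if (arm_feature(env, ARM_FEATURE_M)) {
        /* LEN (bits [18:16]) and STRIDE (bits [21:20]) are RES0 on
         * M-profile; mask them so vector mode can't be enabled */
        val &= ~((7 << 16) | (3 << 20));
    }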
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-2-peter.maydell@linaro.org
This patch adds support for libgloss semihosting to Nios II bare-metal
emulation. The specification for the protocol can be found in the
libgloss sources.
Signed-off-by: Sandra Loosemore <sandra@codesourcery.com>
Signed-off-by: Julian Brown <julian@codesourcery.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1554321185-2825-3-git-send-email-sandra@codesourcery.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-4.1-20190426' into staging
ppc patch queue 2019-04-26
Here's the first ppc target pull request for qemu-4.1. This has a
number of things that have accumulated while qemu-4.0 was frozen.
* A number of emulated MMU improvements from Ben Herrenschmidt
* Assorted cleanups from Greg Kurz
* A large set of mostly mechanical cleanups from me to make target/ppc
much closer to compliant with the modern coding style
* Support for passthrough of NVIDIA GPUs using NVLink2
As well as some other assorted fixes.
# gpg: Signature made Fri 26 Apr 2019 07:02:19 BST
# gpg: using RSA key 75F46586AE61A66CC44E87DC6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>" [full]
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>" [full]
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>" [full]
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>" [unknown]
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-4.1-20190426: (36 commits)
target/ppc: improve performance of large BAT invalidations
ppc/hash32: Rework R and C bit updates
ppc/hash64: Rework R and C bit updates
ppc/spapr: Use proper HPTE accessors for H_READ
target/ppc: Don't check UPRT in radix mode when in HV real mode
target/ppc/kvm: Convert DPRINTF to traces
target/ppc/trace-events: Fix trivial typo
spapr: Drop duplicate PCI swizzle code
spapr_pci: Get rid of duplicate code for node name creation
target/ppc: Style fixes for translate/spe-impl.inc.c
target/ppc: Style fixes for translate/vmx-impl.inc.c
target/ppc: Style fixes for translate/vsx-impl.inc.c
target/ppc: Style fixes for translate/fp-impl.inc.c
target/ppc: Style fixes for translate.c
target/ppc: Style fixes for translate_init.inc.c
target/ppc: Style fixes for monitor.c
target/ppc: Style fixes for mmu_helper.c
target/ppc: Style fixes for mmu-hash64.[ch]
target/ppc: Style fixes for mmu-hash32.[ch]
target/ppc: Style fixes for misc_helper.c
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Performing a complete flush is ~ 100 times faster than flushing
256MiB of 4KiB pages. Set a limit of 1024 pages and perform a complete
flush instead once that limit is exceeded.
This patch significantly speeds up AIX 5.1 and NetBSD-ofppc.
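In outline, the limiting logic looks like this (constant name
illustrative; cs is the CPUState of the vCPU):

    if (size > BAT_FLUSH_LIMIT * TARGET_PAGE_SIZE) {
        /* a full flush is far cheaper than flushing this many
         * pages one at a time */
        tlb_flush(cs);
    } else {
        for (addr = base; addr < base + size; addr += TARGET_PAGE_SIZE) {
            tlb_flush_page(cs, addr);
        }
    }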
Signed-off-by: Artyom Tarasenko <atar4qemu@gmail.com>
Message-Id: <1555103178-21894-4-git-send-email-atar4qemu@gmail.com>
Reviewed-by: Hervé Poussineau <hpoussin@reactos.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
With MT-TCG, we are now running translation in a racy way, thus
we need to mimic hardware when it comes to updating the R and
C bits, by doing byte stores.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190411080004.8690-5-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
With MT-TCG, we are now running translation in a racy way, thus
we need to mimic hardware when it comes to updating the R and
C bits, by doing byte stores.
The current "store_hpte" abstraction is ill suited for this, we
replace it with two separate callbacks for setting R and C.
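A sketch of what a byte-store R update can look like (the byte offset
within the HPTE is illustrative):

    static void hpte_set_r(PowerPCCPU *cpu, hwaddr base, uint64_t ptex)
    {
        /* write only the byte containing the R bit, so concurrent
         * translators never observe a torn HPTE */
        hwaddr offset = ptex * HASH_PTE_SIZE_64 + 16;
        stb_phys(CPU(cpu)->as, base + offset, 0x01);
    }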
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190411080004.8690-4-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
It appears that during kexec, we run for a while in hypervisor
real mode with LPCR:HR set and LPCR:UPRT clear, which trips
the assertion in ppc_radix64_handle_mmu_fault().
First this shouldn't be an assertion, it's a guest error.
Then we shouldn't be checking these things in hypervisor real
mode (or in virtual hypervisor guest real mode which is similar)
as the real HW won't use those LPCR bits in those cases anyway,
so technically it's ok to have this discrepancy.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190411080004.8690-2-clg@kaod.org>
[dwg: Fix for 32-bit builds]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <155445152490.302073.17033451726459859333.stgit@bahia.lan>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <155445151931.302073.18436485925081597460.stgit@bahia.lan>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Add a new base CPU model called 'Dhyana' to model processors from Hygon
Dhyana (family 18h), which are derived from AMD EPYC (family 17h).
The following feature bits have been removed compared to AMD EPYC:
aes, pclmulqdq, sha_ni
Hygon Dhyana support for KVM in Linux has already been accepted
upstream[1], so adding Hygon Dhyana support to QEMU is necessary to
create Hygon's own CPU model.
Reference:
[1] https://git.kernel.org/tip/fec98069fb72fb656304a3e52265e0c2fc9adf87
Signed-off-by: Pu Wen <puwen@hygon.cn>
Message-Id: <1555416373-28690-1-git-send-email-puwen@hygon.cn>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Rename qemu_getrampagesize() to qemu_minrampagesize(). While at it,
properly rename find_max_supported_pagesize() to
find_min_backend_pagesize().
s390x is actually interested in the maximum ram pagesize, so
introduce and use qemu_maxrampagesize().
Add a TODO, indicating that looking at any mapped memory backends is not
100% correct in some cases.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190417113143.5551-3-david@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Right now we configure the pagesize quite early, when initializing KVM.
This is long before system memory is actually allocated via
memory_region_allocate_system_memory(), and therefore memory backends
marked as mapped.
Instead, let's configure the maximum page size after initializing
memory in s390_memory_init(). cap_hpage_1m is still properly
configured before creating any CPUs, and therefore before configuring
the CPU model and eventually enabling CMMA.
This is not a fix but rather a preparation for the future, when initial
memory might reside on memory backends (not the case for s390x right now).
We will replace qemu_getrampagesize() soon by a function that will always
return the maximum page size (not the minimum page size, which only
works by pure luck so far, as there are no memory backends).
Acked-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190417113143.5551-2-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
In order to handle TBs that translate to too much code, we
need to place the control of the length of the translation
in the hands of the code gen master loop.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Commit dc99065b5f (v0.1.0) added dis-asm.h from binutils.
Commit 43d4145a98 (v0.1.5) inlined bfd.h into dis-asm.h to remove the
dependency on binutils.
Commit 76cad71136 (v1.4.0) moved dis-asm.h to include/disas/bfd.h.
The new name is confusing when you try to match against (pre GPLv3+)
binutils. Rename it back. Keep it in the same directory, of course.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190417191805.28198-17-armbru@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
CPUClass method dump_statistics() takes an fprintf()-like callback and
a FILE * to pass to it. Most callers pass fprintf() and stderr.
log_cpu_state() passes fprintf() and qemu_log_file.
hmp_info_registers() passes monitor_fprintf() and the current monitor
cast to FILE *. monitor_fprintf() casts it right back, and is
otherwise identical to monitor_printf().
The callback gets passed around a lot, which is tiresome. The
type-punning around monitor_fprintf() is ugly.
Drop the callback, and call qemu_fprintf() instead. Also gets rid of
the type-punning, since qemu_fprintf() takes NULL instead of the
current monitor cast to FILE *.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190417191805.28198-15-armbru@redhat.com>
CPUClass method dump_statistics() takes an fprintf()-like callback and
a FILE * to pass to it.
Its only caller hmp_info_cpustats() (via cpu_dump_statistics()) passes
monitor_fprintf() and the current monitor cast to FILE *.
monitor_fprintf() casts it right back, and is otherwise identical to
monitor_printf(). The type-punning is ugly.
Drop the callback, and call qemu_printf() instead.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190417191805.28198-13-armbru@redhat.com>
x86_cpu_dump_local_apic_state() takes an fprintf()-like callback and a
FILE * to pass to it, and so do its helper functions.
Its only caller hmp_info_local_apic() passes monitor_fprintf() and the
current monitor cast to FILE *. monitor_fprintf() casts it right
back, and is otherwise identical to monitor_printf(). The
type-punning is ugly.
Drop the callback, and call qemu_printf() instead.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190417191805.28198-12-armbru@redhat.com>
The various dump_mmu() take an fprintf()-like callback and a FILE * to
pass to it, and so do their helper functions. Passing around callback
and argument is rather tiresome.
Most dump_mmu() are called only by the target's hmp_info_tlb(). These
all pass monitor_printf() cast to fprintf_function and the current
monitor cast to FILE *.
SPARC's dump_mmu() gets also called from target/sparc/ldst_helper.c a
few times #ifdef DEBUG_MMU. These calls pass fprintf() and stdout.
The type-punning is technically undefined behaviour, but works in
practice. Clean up: drop the callback, and call qemu_printf()
instead.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190417191805.28198-11-armbru@redhat.com>
The various TARGET_cpu_list() take an fprintf()-like callback and a
FILE * to pass to it. Their callers (vl.c's main() via list_cpus(),
bsd-user/main.c's main(), linux-user/main.c's main()) all pass
fprintf() and stdout. Thus, the flexibility provided by the (rather
tiresome) indirection isn't actually used.
Drop the callback, and call qemu_printf() instead.
Calling printf() would also work, but would make the code unsuitable
for monitor context without making it simpler.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190417191805.28198-10-armbru@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
kvm_s390_mem_op() can fail in two ways: when !cap_mem_op, it returns
-ENOSYS, and when kvm_vcpu_ioctl() fails, it returns -errno set by
ioctl(). Its caller s390_cpu_virt_mem_rw() recovers from both
failures.
kvm_s390_mem_op() prints "KVM_S390_MEM_OP failed" with error_printf()
in the latter failure mode. Since this is obviously a warning, use
warn_report().
Perhaps the reporting should be left to the caller. It could warn on
failure other than -ENOSYS.
Cc: Thomas Huth <thuth@redhat.com>
Cc: qemu-s390x@nongnu.org
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <20190417190641.26814-9-armbru@redhat.com>
Fix a TCG crash due to attempting an atomic increment
operation without having set up the address first.
This is a similar case to that dealt with in commit
e84fcd7f66, and we fix it in the same way.
Fixes: https://bugs.launchpad.net/qemu/+bug/1807675
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20190328104750.25046-1-peter.maydell@linaro.org
I've been hitting several QEMU crashes while running a fedora29 ppc64le
guest under TCG. Each time, this would occur several minutes after the
guest reached login:
Fedora 29 (Twenty Nine)
Kernel 4.20.6-200.fc29.ppc64le on an ppc64le (hvc0)
Web console: https://localhost:9090/
localhost login:
tcg/tcg.c:3211: tcg fatal error
This happens because a bug crept up in the gen_stxsdx() helper when it
was converted to use VSR register accessors by commit 8b3b2d75c7
"target/ppc: introduce get_cpu_vsr{l,h}() and set_cpu_vsr{l,h}() helpers
for VSR register access".
The code creates a temporary, passes it directly to gen_qemu_st64_i64()
and then to set_cpu_vsrh()... which looks like this was mistakenly
coded as a load instead of a store.
Reverse the logic: read the VSR to the temporary first and then store
it to memory.
Fixes: 8b3b2d75c7
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <155371035249.2038502.12364252604337688538.stgit@bahia.lan>
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <155359567174.1794128.3183997593369465355.stgit@bahia.lan>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We use PPC_SEGMENT_64B in various places to guard code that is specific
to 64-bit server processors compliant with arch 2.x. Consolidate the
logic in a helper macro with an explicit name.
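The helper can be as simple as (sketch):

    /* true for 64-bit server processors compliant with arch 2.x */
    #define is_book3s_arch2x(ctx) \
        (!!((ctx)->insns_flags & PPC_SEGMENT_64B))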
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <155327783157.1283071.3747129891004927299.stgit@bahia.lan>
Tested-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Even though all ISA versions up to v3 indeed state:
If the "decrement and test CTR" option is specified (BO2=0), the
instruction form is invalid.
The UMs of all existing 64-bit server class processors say:
If BO[2] = 0, the contents of CTR (before any update) are used as the
target address and for the test of the contents of CTR to resolve the
branch. The contents of the CTR are then decremented and written back
to the CTR.
The Linux kernel has Spectre v2 mitigation code that relies on a
BO[2] = 0 variant of bcctr, which is now activated by default on
spapr, even with TCG. This causes Linux guests to panic with
the default machine type under TCG.
Since any CPU model can provide its own behaviour for invalid forms,
we could possibly introduce a new instruction flag to handle this.
In practice, since the behaviour is shared by all 64-bit server
processors starting with 970 up to POWER9, let's reuse the
PPC_SEGMENT_64B flag. Caveat: this may have to be fixed later if
POWER10 introduces a different behaviour.
The existing behaviour of throwing a program interrupt is kept for
all other CPU models.
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <155327782604.1283071.10640596307206921951.stgit@bahia.lan>
Tested-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The base register is rs1, not rs2, for fsw.
Signed-off-by: Kito Cheng <kito.cheng@gmail.com>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
Merge remote-tracking branch 'remotes/palmer/tags/riscv-for-master-4.0-rc1' into staging
A Single RISC-V Patch for 4.0-rc1
If this is too late I'm OK with it being in rc2, but it fixes a concrete
regression and nobody has complained yet so I'd prefer it to be in rc1
if possible.
The fix is to zero-extend the inputs to DIVUW and REMUW, which was
exposed by the GCC test suite.
# gpg: Signature made Tue 26 Mar 2019 05:54:20 GMT
# gpg: using RSA key 00CE76D1834960DFCE886DF8EF4CA1502CCBAB41
# gpg: issuer "palmer@dabbelt.com"
# gpg: Good signature from "Palmer Dabbelt <palmer@dabbelt.com>" [unknown]
# gpg: aka "Palmer Dabbelt <palmer@sifive.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 00CE 76D1 8349 60DF CE88 6DF8 EF4C A150 2CCB AB41
* remotes/palmer/tags/riscv-for-master-4.0-rc1:
target/riscv: Zero extend the inputs of divuw and remuw
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These functions are not used outside helper.c.
Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190322162333.17159-4-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
cortex-a7 and cortex-a15 have pmus (PMUv2) and they advertise
them in ID_DFR0. Let's allow them to function. This also enables
the pmu cpu property to work with these cpu types, i.e. we can
now do '-cpu cortex-a15,pmu=off' to remove the pmu.
Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190322162333.17159-3-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Fix a QEMU NULL dereference that occurs when the guest attempts to
enable PMU counters with a non-v8 cpu model or a v8 cpu model
which has not configured a PMU.
Fixes: 4e7beb0cc0 ("target/arm: Add a timer to predict PMU counter overflow")
Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190322162333.17159-2-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The second word has been loaded from the unincremented
address since the first commit.
Fixes: 44ac14b06f
Reported-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190322234302.12770-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Don't announce that exit simcall has been invoked: this is just noise.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
We spell out sub/dir/ in sub/dir/trace-events' comments pointing to
source files. That's because when trace-events got split up, the
comments were moved verbatim.
Delete the sub/dir/ part from these comments. Gets rid of several
misspellings.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190314180929.27722-3-armbru@redhat.com
Message-Id: <20190314180929.27722-3-armbru@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
While running the GCC test suite against 4.0.0-rc0, Kito found a
regression introduced by the decodetree conversion that caused divuw and
remuw to sign-extend their inputs. The ISA manual says they are
supposed to be zero extended:
DIVW and DIVUW instructions are only valid for RV64, and divide the
lower 32 bits of rs1 by the lower 32 bits of rs2, treating them as
signed and unsigned integers respectively, placing the 32-bit
quotient in rd, sign-extended to 64 bits. REMW and REMUW
instructions are only valid for RV64, and provide the corresponding
signed and unsigned remainder operations respectively. Both REMW
and REMUW always sign-extend the 32-bit result to 64 bits, including
on a divide by zero.
Here's Kito's reduced test case from the GCC test suite:
unsigned calc_mp(unsigned mod)
{
unsigned a,b,c;
c=-1;
a=c/mod;
b=0-a*mod;
if (b > mod) { a += 1; b-=mod; }
return b;
}
int main(int argc, char *argv[])
{
unsigned x = 1234;
unsigned y = calc_mp(x);
if ((sizeof (y) == 4 && y != 680)
|| (sizeof (y) == 2 && y != 134))
abort ();
exit (0);
}
I haven't done any other testing on this, but it does fix the test case.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
break_dependency incorrectly handles the case of dependency on an opcode
that references the same register multiple times. E.g. the following
instruction is translated incorrectly:
{ or a2, a3, a3 ; or a3, a2, a2 }
This happens because the resource indices of both dependency graph
nodes are incremented, and no copy is made for the second instance of
the same register in the ending node.
Only increment resource index of the ending node of the dependency.
Add test.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Currently, the Cascadelake-Server, Icelake-Client, and
Icelake-Server CPU models always generate the following warning:
qemu-system-x86_64: warning: \
host doesn't support requested feature: CPUID.07H:ECX [bit 4]
This happens because OSPKE was never returned by
GET_SUPPORTED_CPUID or x86_cpu_get_supported_feature_word().
OSPKE is a runtime flag automatically set by the KVM module or by
TCG code; it was always cleared by x86_cpu_filter_features(), and
was not supposed to appear in the CPU model table.
Remove the OSPKE flag from the CPU model table entries, to avoid
the bogus warning and avoid returning invalid feature data on
query-cpu-* QMP commands. As OSPKE was always cleared by
x86_cpu_filter_features(), this won't have any guest-visible
impact.
Include a test case that should detect the problem if we introduce
a similar bug again.
Fixes: c7a88b52f6 ("i386: Add new model of Cascadelake-Server")
Fixes: 8a11c62da9 ("i386: Add new CPU model Icelake-{Server,Client}")
Cc: Tao Xu <tao3.xu@intel.com>
Cc: Robert Hoo <robert.hu@linux.intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20190319200515.14999-1-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Now that kvm_arch_get_supported_cpuid() will only return
arch_capabilities if QEMU is able to initialize the MSR properly,
we know that the feature is safely migratable.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20190125220606.4864-3-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
KVM has two bugs in the handling of MSR_IA32_ARCH_CAPABILITIES:
1) Linux commit 1eaafe91a0df ("kvm: x86: IA32_ARCH_CAPABILITIES
is always supported") makes GET_SUPPORTED_CPUID return
arch_capabilities even if running on SVM. This makes "-cpu
host,migratable=off" incorrectly expose arch_capabilities on CPUID on
AMD hosts (where the MSR is not emulated by KVM).
2) KVM_GET_MSR_INDEX_LIST does not return MSR_IA32_ARCH_CAPABILITIES if
the MSR is not supported by the host CPU. This makes QEMU not
initialize the MSR properly at kvm_put_msrs() on those hosts.
Work around both bugs on the QEMU side, by checking if the MSR
was returned by KVM_GET_MSR_INDEX_LIST before returning the
feature flag on kvm_arch_get_supported_cpuid().
This has the unfortunate side effect of making arch_capabilities
unavailable on hosts without hardware support for the MSR until bug #2
is fixed on KVM, but I can't see another way to work around bug #1
without that side effect.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20190125220606.4864-2-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Alistair Francis <Alistair.Francis@wdc.com>
Signed-off-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Alistair Francis <Alistair.Francis@wdc.com>
Signed-off-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
If vectored interrupts are enabled (bits[1:0]
of mtvec/stvec == 1) then use the following
logic for trap entry address calculation:
pc = mtvec + cause * 4
In addition to adding support for vectored interrupts
this patch simplifies the interrupt delivery logic
by making sync/async cause decoding and encoding
steps distinct.
The cause code and the sign bit indicating sync/async
are split at the beginning of the function, and fixed_cause
is renamed to cause. The MSB setting for async
traps is delayed until setting mcause/scause to allow
redundant variables to be eliminated. Some variables
are renamed for conciseness and moved so that decls
are at the start of the block.
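In rough C terms (variable names illustrative; tvec is stvec or mtvec,
depending on where the trap is taken):

    target_ulong base = tvec & ~(target_ulong)3;
    if (async && (tvec & 3) == 1) {
        env->pc = base + cause * 4;   /* vectored entry */
    } else {
        env->pc = base;               /* direct entry */
    }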
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Alistair Francis <Alistair.Francis@wdc.com>
Signed-off-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
This effectively changes riscv_cpu_update_mip
from edge to level, i.e. cpu_interrupt or
cpu_reset_interrupt are called regardless of
the current interrupt level.
Fixes an issue where WFI doesn't return when an IPI is issued:
- https://github.com/riscv/riscv-qemu/issues/132
To test:
1) Apply RISC-V Linux CPU hotplug patch:
- http://lists.infradead.org/pipermail/linux-riscv/2018-May/000603.html
2) Enable CONFIG_CPU_HOTPLUG in linux .config
3) Try to offline and online cpus:
echo 1 > /sys/devices/system/cpu/cpu2/online
echo 0 > /sys/devices/system/cpu/cpu2/online
echo 1 > /sys/devices/system/cpu/cpu2/online
Reported-by: Atish Patra <atishp04@gmail.com>
Cc: Atish Patra <atishp04@gmail.com>
Cc: Alistair Francis <Alistair.Francis@wdc.com>
Signed-off-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
This change checks elf_flags for EF_RISCV_RVE and if
present uses the RVE linux syscall ABI which uses t0
for the syscall number instead of a7.
Warn and exit if a non-RVE ABI binary is run on a
cpu with the RVE extension as it is incompatible.
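A sketch of the selection in the linux-user cpu_loop (assuming the ELF
flags have been plumbed through to env; register numbers per the
RISC-V ABI):

    /* RVE binaries pass the syscall number in t0 (x5), not a7 (x17) */
    int scno = (env->elf_flags & EF_RISCV_RVE) ? 5 : 17;
    ret = do_syscall(env, env->gpr[scno],
                     env->gpr[10], env->gpr[11], env->gpr[12],
                     env->gpr[13], env->gpr[14], env->gpr[15],
                     0, 0);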
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Alistair Francis <Alistair.Francis@wdc.com>
Co-authored-by: Kito Cheng <kito.cheng@gmail.com>
Co-authored-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
We can't allow the supervisor to control SEIP as this would allow the
supervisor to clear a pending external interrupt, which would result
in a lost interrupt in the case a PLIC is attached. The SEIP bit must
be hardware controlled when a PLIC is attached.
This logic was previously hard-coded so SEIP was always masked even
if no PLIC was attached. This patch adds riscv_cpu_claim_interrupts
so that the PLIC can register control of SEIP. In the case of models
without a PLIC (spike), the SEIP bit remains software controlled.
This interface allows for hardware control of supervisor timer and
software interrupts by other interrupt controller models.
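A plausible use from an interrupt controller's realize path:

    /* the PLIC takes hardware control of SEIP for this hart */
    if (riscv_cpu_claim_interrupts(cpu, MIP_SEIP) < 0) {
        error_report("SEIP already claimed");
        exit(1);
    }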
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Alistair Francis <Alistair.Francis@wdc.com>
Signed-off-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
The gdb CSR xml file has registers in documentation order, not numerical
order, so we need a table to map the register numbers. This also adds
fairly standard gdb hooks to access xml specified registers.
Note:
The fpu xml from gdb 8.3 has an unused register number (65), which
makes the first csr register number 69. We register an extra register
with gdb to correct the csr offset calculation.
Signed-off-by: Jim Wilson <jimw@sifive.com>
Signed-off-by: Chih-Min Chao <chihmin.chao@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
Add a debugger field to CPURISCVState. Add riscv_csrrw_debug function
to set it. Disable mode checks when the debugger field is true.
Signed-off-by: Jim Wilson <jimw@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20190212230903.9215-1-jimw@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
This adds some missing CSR_* register macros, and documents some as being
priv v1.9.1 specific.
Signed-off-by: Jim Wilson <jimw@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20190212230830.9160-1-jimw@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
The RAM device presents a memory region that should be handled
as an IO region and should not be pinned.
In the case of vfio-pci, the RAM device represents an MMIO BAR
and the memory region is not backed by pages, hence
KVM_MEMORY_ENCRYPT_REG_REGION fails to lock the memory range.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1667249
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Message-Id: <20190204222322.26766-3-brijesh.singh@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
During the refactor to decodetree we removed the manual decoding that
is necessary for c.jal/c.addiw and removed the translation of
c.flw/c.ld and c.fsw/c.sd. This reintroduces the manual parsing and
the omitted implementations.
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Tested-by: Palmer Dabbelt <palmer@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Tested-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
Within a delay slot, we were squishing both DISAS_IAQ_N_STALE and
DISAS_IAQ_N_STALE_EXIT to DISAS_IAQ_N_UPDATED. This lost the
required exit to the main loop, and could result in interrupts
never being delivered.
Tested-by: Sven Schnelle <svens@stackframe.org>
Reported-by: Sven Schnelle <svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
These instructions do not trap when SVE is disabled in EL0,
causing them to be executed with wrong size information.
Signed-off-by: Amir Charif <amir.charif@cea.fr>
Message-id: 1552579248-31025-1-git-send-email-amir.charif@cea.fr
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: added 'target/arm' prefix to subject]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Some generic arch timer registers are Config-RW in EL0, which means
EL0 can have write permission if it is appropriately configured.
When a VM accesses these registers, QEMU first checks whether they
have RW permission, then checks whether they are appropriately
configured. If they are defined as read-only in EL0, they do not get
write permission even when appropriately configured.
So we need to add the write permission according to the ARMv8 spec
when defining them.
Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
Message-id: 1552395177-12608-1-git-send-email-gengdongjiu@huawei.com
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/palmer/tags/riscv-for-master-4.0-sf4' into staging
target/riscv: Convert to decodetree
Bastian: this patchset converts the RISC-V decoder to decodetree in four major steps:
1) Convert 32-bit instructions to decodetree [Patch 1-15]:
Many of the gen_* functions are called by the decode functions for 16-bit
and 32-bit functions. If we move translation code from the gen_*
functions to the generated trans_* functions of decode-tree, we get a lot of
duplication. Therefore, we mostly generate calls to the old gen_* function
which are properly replaced after step 2).
Each of the trans_ functions are grouped into files corresponding to their
ISA extension, e.g. addi which is in RV32I is translated in the file
'trans_rvi.inc.c'.
2) Convert 16-bit instructions to decodetree [Patch 16-18]:
All 16 bit instructions have a direct mapping to a 32 bit instruction. Thus,
we convert the arguments in the 16 bit trans_ function to the arguments of
the corresponding 32 bit instruction and call the 32 bit trans_ function.
3) Remove old manual decoding in gen_* function [Patch 19-29]:
this moves all manual translation code into the trans_* functions of
decodetree, such that we can remove the old decode_* functions.
Palmer: This, with some additional cleanup patches, passed Alistair's
testing on rv32 and rv64 as well as my testing on rv64, so I think it's
good to go. I've run my standard test against this exact tag.
I still don't have a Mac to try this on, sorry!
# gpg: Signature made Wed 13 Mar 2019 13:44:49 GMT
# gpg: using RSA key 00CE76D1834960DFCE886DF8EF4CA1502CCBAB41
# gpg: issuer "palmer@dabbelt.com"
# gpg: Good signature from "Palmer Dabbelt <palmer@dabbelt.com>" [unknown]
# gpg: aka "Palmer Dabbelt <palmer@sifive.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 00CE 76D1 8349 60DF CE88 6DF8 EF4C A150 2CCB AB41
* remotes/palmer/tags/riscv-for-master-4.0-sf4: (29 commits)
target/riscv: Remove decode_RV32_64G()
target/riscv: Remove gen_system()
target/riscv: Rename trans_arith to gen_arith
target/riscv: Remove manual decoding of RV32/64M insn
target/riscv: Remove shift and slt insn manual decoding
target/riscv: make ADD/SUB/OR/XOR/AND insn use arg lists
target/riscv: Move gen_arith_imm() decoding into trans_* functions
target/riscv: Remove manual decoding from gen_store()
target/riscv: Remove manual decoding from gen_load()
target/riscv: Remove manual decoding from gen_branch()
target/riscv: Remove gen_jalr()
target/riscv: Convert quadrant 2 of RVXC insns to decodetree
target/riscv: Convert quadrant 1 of RVXC insns to decodetree
target/riscv: Convert quadrant 0 of RVXC insns to decodetree
target/riscv: Convert RV priv insns to decodetree
target/riscv: Convert RV64D insns to decodetree
target/riscv: Convert RV32D insns to decodetree
target/riscv: Convert RV64F insns to decodetree
target/riscv: Convert RV32F insns to decodetree
target/riscv: Convert RV64A insns to decodetree
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
decodetree handles all instructions now so the fallback is not necessary
anymore.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
with all 16 bit insns moved to decodetree no path is falling back to
gen_system(), so we can remove it.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
manual decoding in gen_arith() is not necessary with decodetree. For now
the function is called trans_arith as the original gen_arith still
exists. The former will be renamed to gen_arith as soon as the old
gen_arith can be removed.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
gen_arith_imm() does a lot of decoding manually, which was hard to read
in case of the shift instructions and is not necessary anymore with
decodetree.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
With decodetree we don't need to convert RISC-V opcodes into MemOps
as the old gen_store() did.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
With decodetree we don't need to convert RISC-V opcodes into MemOps
as the old gen_load() did.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
We now utilize the argument sets of decodetree such that no manual
decoding is necessary.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
trans_jalr() is the only caller, so move the code into trans_jalr().
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
we cannot remove the call to gen_arith() in decode_RV32_64G() since it
is used to translate multiply instructions.
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
this splits the 64-bit-only instructions into their own decode file such
that we generate the decoder for these instructions only for the RISC-V
64 bit target.
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
for now only LUI & AUIPC are decoded and translated. If decodetree fails, we
fall back to the old decoder.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Peer Adelt <peer.adelt@hni.uni-paderborn.de>
The current code assumes that we don't need to exit the TB
if a Data Cache Flush or Insert has happened. However, as we
have a shared Data/Instruction TLB, a Data cache flush also
flushes Instruction TLB entries, and a Data cache TLB insert
might also evict an Instruction TLB entry.
So exit the TB in all cases if Instruction translation is enabled.
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190311191602.25796-11-svens@stackframe.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190311191602.25796-10-svens@stackframe.org>
[rth: Add required tlb flushing when prot id registers change.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The ODE software calls itlbp on existing TLB entries without
calling itlba first, so this seems to be valid.
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190311191602.25796-9-svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
b,gate does GR[t] ← cat(GR[t]{0..29},IAOQ_Front{30..31});
instead of saving the link address to register t.
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190311191602.25796-8-svens@stackframe.org>
[rth: Move link check outside of ifndef CONFIG_USER_ONLY;
use ctx->privilege; nullify the insn earlier.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
DIAG is usually only used by diagnostics software as it's CPU
specific. In most of the cases it's better to ignore it and log
a message that it's not implemented.
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190311191602.25796-7-svens@stackframe.org>
[rth: Free the nullify condition.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
HP ODE uses rfi to set the Q bit, and I don't see anything in the
documentation saying this is forbidden. So remove the check.
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190311191602.25796-6-svens@stackframe.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
To ease TLB debugging add a few trace events, which are disabled
by default so that there's no performance impact.
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190311191602.25796-5-svens@stackframe.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190311191602.25796-4-svens@stackframe.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Assume the following sequence:
pitlbe r0(sr0,r0)
iitlba r4,(sr0,r0)
ldil L%3000000,r5
iitlbp r5,(sr0,r0)
This will purge the whole TLB and add an entry for page 0. However
the current TLB implementation in helper_iitlba() will store to
the last empty TLB entry, while helper_iitlbp() will write to the
first empty entry. That is because an empty entry will match address
0 in helper_iitlba().
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190311191602.25796-3-svens@stackframe.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
When one of the source registers is the same as the destination register,
the source register gets overwritten with the destination value before
do_add_sv() is called, which leads to unexpected condition matches.
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190311191602.25796-2-svens@stackframe.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We got away with eliding this check when target/hppa was user-only,
but missed adding this check when adding system support.
Fixes an early crash in the HP-UX 11 installer.
Reported-by: Sven Schnelle <svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The qemu coding standard is to use CamelCase for type and structure names,
and the pseries code follows that... sort of. There are quite a lot of
places where we bend the rules in order to preserve the capitalization of
internal acronyms like "PHB", "TCE", "DIMM" and most commonly "sPAPR".
That was a bad idea - it frequently leads to names ending up with hard to
read clusters of capital letters, and means they don't catch the eye as
type identifiers, which is kind of the point of the CamelCase convention in
the first place.
In short, keeping type identifiers looking like CamelCase is more important
than preserving standard capitalization of internal "words". So, this
patch renames a heap of spapr internal type names to a more standard
CamelCase.
In addition to case changes, we also make some other identifier renames:
VIOsPAPR* -> SpaprVio*
The reverse word ordering was only ever used to mitigate the capital
cluster, so revert to the natural ordering.
VIOsPAPRVTYDevice -> SpaprVioVty
VIOsPAPRVLANDevice -> SpaprVioVlan
Brevity, since the "Device" didn't add useful information
sPAPRDRConnector -> SpaprDrc
sPAPRDRConnectorClass -> SpaprDrcClass
Brevity, and makes it clearer this is the same thing as a "DRC"
mentioned in many other places in the code
This is 100% a mechanical search-and-replace patch. It will, however,
conflict with essentially any and all outstanding patches touching the
spapr code.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20190309214255.9952-3-f4bug@amsat.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The t0 tcg_temp register is now unused; remove it.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20190309214255.9952-2-f4bug@amsat.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We now have enough support to boot a PowerNV machine with a POWER9
processor. Allow HV mode on POWER9.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190307223548.20516-16-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Now that all VSX registers are stored in host endian order, there is no need
to go via different accessors depending upon the register number. Instead we
introduce vsr64_offset() and use it directly from within get_cpu_vsr{l,h}() and
set_cpu_vsr{l,h}().
This also allows us to rewrite avr64_offset() and fpr_offset() in terms of the
new vsr64_offset() function to more clearly express the relationship between the
VSX, FPR and VMX registers, and also remove vsrl_offset() which is no longer
required.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-Id: <20190307180520.13868-8-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
When VSX support was initially added, the fpr registers were added at
offset 0 of the VSR register and the vsrl registers were added at offset
1. This is in contrast to the VMX registers (the last 32 VSX registers) which
are stored in host-endian order.
Switch the fpr/vsrl registers so that the lower 32 VSX registers are now also
stored in host endian order to match the VMX registers. This ensures that TCG
vector operations involving mixed VMX and VSX registers will function
correctly.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190307180520.13868-7-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
By using the VsrD macro in avr64_offset() the same offset calculation can be
used regardless of the host endianness. This allows get_avr64() and set_avr64() to
be simplified accordingly.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-Id: <20190307180520.13868-6-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
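For reference, a minimal self-contained sketch of the host-endian-independent
offset calculation these patches describe; the CPU state struct is simplified
and HOST_WORDS_BIGENDIAN stands in for QEMU's build-config define:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef union {
        uint64_t u64[2];
    } ppc_vsr_t;

    typedef struct {
        ppc_vsr_t vsr[64];   /* simplified: VSRs 0-31 + AVRs 32-63 */
    } CPUPPCState;

    /* VsrD(i) selects doubleword i in register (big-endian) numbering,
     * independent of how the backing array is laid out on the host. */
    #ifdef HOST_WORDS_BIGENDIAN
    #define VsrD(i) u64[i]
    #else
    #define VsrD(i) u64[1 - (i)]
    #endif

    static inline long vsr64_offset(int i, bool high)
    {
        return offsetof(CPUPPCState, vsr[i].VsrD(high ? 0 : 1));
    }

    /* The AVRs (VMX registers) are the upper 32 VSX registers. */
    static inline long avr64_offset(int i, bool high)
    {
        return vsr64_offset(i + 32, high);
    }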
All TCG vector operations require pointers to the base address of the vector
rather than separate access to the top and bottom 64-bits. Convert the VMX TCG
instructions to use a new avr_full_offset() function instead of avr64_offset()
which can then itself be written as a simple wrapper onto vsr_full_offset().
This same function can also be reused in cpu_avr_ptr() to avoid having more than
one copy of the offset calculation logic.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-Id: <20190307180520.13868-5-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
It isn't possible to include internal.h from cpu.h so move the Vsr* macros
into cpu.h alongside the other VMX/VSX register access functions.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-Id: <20190307180520.13868-4-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Instead of having multiple copies of the offset calculation logic, move it to a
single vsrl_offset() function.
This commit also renames the existing get_vsr()/set_vsr() functions to
get_vsrl()/set_vsrl() which better describes their purpose.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190307180520.13868-3-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Instead of having multiple copies of the offset calculation logic, move it to a
single fpr_offset() function.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190307180520.13868-2-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The H_CALL H_PAGE_INIT can be used to zero or copy a page of guest
memory. Enable the in-kernel H_PAGE_INIT handler.
The in-kernel handler takes half the time to complete compared to
handling the H_CALL in userspace.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Message-Id: <20190306060608.19935-1-sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
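A rough sketch of how such an in-kernel hcall handler is switched on,
mirroring QEMU's use of the KVM_CAP_PPC_ENABLE_HCALL capability; the wrapper
name is illustrative (H_PAGE_INIT is 0x2c in PAPR):

    #include "sysemu/kvm.h"

    #define H_PAGE_INIT 0x2c   /* PAPR hypercall number */

    /* Ask KVM to handle H_PAGE_INIT in the kernel instead of exiting
     * to userspace; arguments are flags = 0, hcall number, enable = 1. */
    static int enable_h_page_init(KVMState *s)
    {
        return kvm_vm_enable_cap(s, KVM_CAP_PPC_ENABLE_HCALL, 0,
                                 H_PAGE_INIT, 1);
    }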
There are four scenarios being handled in this function:
- single stepping
- hardware breakpoints
- software breakpoints
- fallback (no debug supported)
A future patch will add code to handle specific single step and
software breakpoints cases so let's split each scenario into its own
function now to avoid hurting readability.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20190228225759.21328-5-farosas@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This is in preparation for a refactoring of the kvm_handle_debug
function in the next patch.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20190228225759.21328-4-farosas@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Introduce a new spapr_cap SPAPR_CAP_CCF_ASSIST to be used to indicate
the requirement for a hw-assisted version of the count cache flush
workaround.
The count cache flush workaround is a software workaround which can be
used to flush the count cache on context switch. Some revisions of
hardware may have a hardware accelerated flush, in which case the
software flush can be shortened. This cap is used to set the
availability of such hardware acceleration for the count cache flush
routine.
The availability of such hardware acceleration is indicated by the
H_CPU_CHAR_BCCTR_FLUSH_ASSIST flag being set in the characteristics
returned from the KVM_PPC_GET_CPU_CHAR ioctl.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Message-Id: <20190301031912.28809-2-sjitindarsingh@gmail.com>
[dwg: Small style fixes]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
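spapr caps are declared as entries in a capability table; a sketch of what
the new entry looks like, based on the description above (the apply callback
name cap_ccf_assist_apply is illustrative):

    [SPAPR_CAP_CCF_ASSIST] = {
        .name = "ccf-assist",
        .description = "Count Cache Flush Assist via HW Instruction",
        .index = SPAPR_CAP_CCF_ASSIST,
        .get = spapr_cap_get_bool,
        .set = spapr_cap_set_bool,
        .type = "bool",
        /* checks for H_CPU_CHAR_BCCTR_FLUSH_ASSIST availability */
        .apply = cap_ccf_assist_apply,
    },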
The spapr_cap SPAPR_CAP_IBS is used to indicate the level of capability
for mitigations for indirect branch speculation. Currently the available
values are broken (default), fixed-ibs (fixed by serialising indirect
branches) and fixed-ccd (fixed by disabling the count cache).
Introduce a new value for this capability denoted workaround, meaning that
software can work around the issue by flushing the count cache on
context switch. This option is available if the hypervisor sets the
H_CPU_BEHAV_FLUSH_COUNT_CACHE flag in the cpu behaviours returned from
the KVM_PPC_GET_CPU_CHAR ioctl.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Message-Id: <20190301031912.28809-1-sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Implement support to allow KVM guests to take advantage of the large
decrementer introduced on POWER9 cpus.
To determine if the host can support the requested large decrementer
size, we check it matches that specified in the ibm,dec-bits device-tree
property. We also need to enable it in KVM by setting the LPCR_LD bit in
the LPCR. Note that to do this we need to try to set the bit, then read
it back to check that the host allowed us to set it; if so we can use it,
but if we were unable to set it the host cannot support it and we must
not use the large decrementer.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190301024317.22137-3-sjitindarsingh@gmail.com>
[dwg: Small style fixes]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
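A sketch of the try-set-then-read-back dance described above, using KVM's
one-reg API; error handling is trimmed and the function name is illustrative:

    static int kvmppc_try_enable_large_decr(CPUState *cs)
    {
        uint64_t lpcr;

        kvm_get_one_reg(cs, KVM_REG_PPC_LPCR_64, &lpcr);
        lpcr |= LPCR_LD;
        kvm_set_one_reg(cs, KVM_REG_PPC_LPCR_64, &lpcr);

        /* The host may silently refuse the bit: read it back to verify. */
        kvm_get_one_reg(cs, KVM_REG_PPC_LPCR_64, &lpcr);
        return (lpcr & LPCR_LD) ? 0 : -1;
    }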
Prior to POWER9 the decrementer was a 32-bit register which decremented
with each tick of the timebase. From POWER9 onwards the decrementer can
be set to operate in a mode called large decrementer, where it acts as
an n-bit decrementing register which is visible as a 64-bit register;
that is, the value of the decrementer is sign extended to 64 bits
(where n is implementation dependent).
The mode in which the decrementer operates is controlled by the LPCR_LD
bit in the logical partition control register (LPCR).
From POWER9 onwards the HDEC (hypervisor decrementer) was enlarged to
h bits, also sign extended to 64 bits (where h is implementation
dependent). Note this isn't configurable and is always enabled.
On POWER9 the large decrementer and hdec are both 56 bits, as
represented by the lrg_decr_bits cpu class property. Since they are the
same size we only add one property for now, which could be extended
should they ever differ in the future.
We also add the lrg_decr_bits property for POWER5+/7/8 since it is used
to determine the size of the hdec, which is only generated on the
POWER5+ processor and later. On these processors it is 32 bits.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190301024317.22137-2-sjitindarsingh@gmail.com>
[dwg: Small style fixes]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Intel Processor Trace requires CPUID[0x14], but cpuid_level was not
updated when creating a KVM guest with e.g. "-cpu qemu64,+intel-pt".
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Message-Id: <1548805979-12321-1-git-send-email-luwei.kang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The CPUID code will call kvm_arch_get_supported_cpuid() and, even though
the call is guarded by kvm_enabled() so it never runs for user-mode
emulators, sometimes clang will not optimize it out at -O0.
That could be considered a compiler bug, however at -O0 we give it
a pass and just add the stubs.
Reported-by: Kamil Rytarowski <n54@gmx.com>
Tested-by: Kamil Rytarowski <n54@gmx.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Combine all variants in a single handler. As source and destination
have different element sizes, we can't use gvec expansion. Expand
manually. Also watch out for overlapping source and destination
registers. Use a safe evaluation order depending on the operation.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-33-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Very similar to VECTOR LOAD WITH LENGTH, just the opposite direction.
Properly probe write access before modifying memory.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-32-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Similar to VECTOR LOAD MULTIPLE, just the opposite direction. Probe
write access first.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-31-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
As we only store one element, there is nothing to consider regarding
exceptions.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-30-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Instead of checking e.g. the first access on every touched page, we should
check the actual access, otherwise we might get false positives when Low
Address Protection (LAP) is active. As probe_write() can only deal with
accesses to one page, we have to loop.
Use i64 for the length although it is not needed - this makes it easier
to reuse TCG temps we already have in the translation functions where
this will be used. Also allow it to be used from other helpers.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-28-david@redhat.com>
[CH: add missing page_check_range()]
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
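The per-page loop described above reads roughly as follows (system-emulation
path; probe_write(), wrap_address() and cpu_mmu_index() are the existing
QEMU helpers the commit builds on):

    void probe_write_access(CPUS390XState *env, uint64_t addr, uint64_t len,
                            uintptr_t ra)
    {
        while (len) {
            /* -(addr | TARGET_PAGE_MASK) = bytes left on the current page */
            const uint64_t pagelen = -(addr | TARGET_PAGE_MASK);
            const uint64_t curlen = MIN(pagelen, len);

            /* check exactly the bytes that will be written on this page */
            probe_write(env, addr, curlen, cpu_mmu_index(env, false), ra);
            addr = wrap_address(env, addr + curlen);
            len -= curlen;
        }
    }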
Load both elements signed and store them into the two 64 bit elements.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-27-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Provide an implementation based on i64 and on real host vectors.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-26-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Similar to VECTOR GATHER ELEMENT, but the other direction.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-25-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Like VECTOR REPLICATE, but the element to be replicated comes from an
immediate.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-24-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Replicate via the special gvec helper.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-23-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Read the whole input before modifying the destination vector.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-22-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Take care of overlapping inputs and outputs by using a temporary vector.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-21-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
This is a big one. Luckily we only have a limited set of such nasty
instructions.
We'll implement all variants with helpers, except when sources and
the destination don't overlap for VECTOR PACK. Provide different helpers
when the cc is to be modified. We'll return the cc then via env->cc_op.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-20-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We cannot use gvec expansion as source and destination elements
have different element numbers. So we'll expand using a fancy loop.
Also, we have to take care of overlapping source and destination
registers, therefore use a safe evaluation order depending on the
operation.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-19-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We can reuse the helper introduced along with VECTOR LOAD TO BLOCK
BOUNDARY. We just have to take care of converting the highest index into
a length.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-18-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Fairly easy, just load the two GPRs into a single vector.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-17-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Very similar to VECTOR LOAD GR FROM VR ELEMENT, just the opposite
direction. Also provide a fast path in case we don't care about the
register content.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-16-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Very similar to LOAD COUNT TO BLOCK BOUNDARY, but instead of only
calculating, the actual vector is loaded. Use a temporary vector to
not modify the real vector on exceptions. Initialize that one to zero,
to not leak any data. Provide a fast path if we're loading a full
vector.
As we don't have gvec ool handlers for single vectors, just calculate
the vector address manually.
We can reuse the helper later on for VECTOR LOAD WITH LENGTH. In fact,
we are going to name it "vll" right from the beginning, because that's
a better match.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-15-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Try to load the last element first. Access to the first element will
be checked afterwards. This way, we can guarantee that the vector is
not modified before we have checked for all possible exceptions. (The
16 vectors span only 256 bytes, which cannot cross more than two pages.)
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-14-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Fairly easy: zero out the vector before we load the desired element.
Load the element into a temporary before touching the vector.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-13-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
To avoid a helper, we have to do the actual calculation of the element
address (offset in cpu_env + cpu_env) manually. Factor that out into
get_vec_element_ptr_i64(). The same logic will be reused for "VECTOR
LOAD VR ELEMENT FROM GR".
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-12-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
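The manual element-address computation looks roughly like this, where
vec_full_reg_offset() yields the vector's offset within cpu_env and the
NUM_VEC_* macros derive from the ES_* element-size defines mentioned earlier:

    static void get_vec_element_ptr_i64(TCGv_ptr ptr, uint8_t reg,
                                        TCGv_i64 enr, uint8_t es)
    {
        TCGv_i64 tmp = tcg_temp_new_i64();

        /* mask off invalid parts from the element number */
        tcg_gen_andi_i64(tmp, enr, NUM_VEC_ELEMENTS(es) - 1);

        /* convert it to a byte offset relative to cpu_env */
        tcg_gen_shli_i64(tmp, tmp, es);
    #ifndef HOST_WORDS_BIGENDIAN
        /* elements live in host 64-bit chunks: fix up the offset */
        tcg_gen_xori_i64(tmp, tmp, 8 - NUM_VEC_ELEMENT_BYTES(es));
    #endif
        tcg_gen_addi_i64(tmp, tmp, vec_full_reg_offset(reg));

        /* generate the final pointer by adding cpu_env */
        tcg_gen_trunc_i64_ptr(ptr, tmp);
        tcg_gen_add_ptr(ptr, ptr, cpu_env);
        tcg_temp_free_i64(tmp);
    }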
Take care of properly sign-extending the immediate.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-11-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Fairly easy, load with desired size and store it into the right element.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-10-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We can use tcg_gen_gvec_dup_i64() to carry out the duplication.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-9-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
When loading from memory, load both elements into temps first before
modifying the target vector.
Loading with strange alignment from the end of the address space will
not properly wrap; we can ignore that for now.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-8-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Add gen_gvec_dupi() for handling duplication of immediates, so it can
be reused later.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-7-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Let's optimize it for the common cases (setting a vector to zero or all
ones) - courtesy of Richard.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-6-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Let's start with a more involved one, but it is the first in the list
of vector support instructions (introduced with the vector facility).
The good thing is, we need a lot of basic infrastructure for this anyway:
reading and writing vector elements as well as checking element validity.
All vector instruction related translation functions will reside in
translate_vx.inc.c, to be included in translate.c - similar to how
other architectures handle it.
While at it, directly add some documentation (which contains parts about
things added in follow-up patches, but splitting this up does not make
too much sense). Also add ES_* defines heavily used later.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-5-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We'll have to read/write vector elements quite frequently from helpers.
The tricky bit is properly taking care of endianness. Handle it
similarly to aarch64.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-4-david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Check them at a central point. We'll use a new instruction flag to
flag all vector instructions (IF_VEC) and handle it very similarly to
AFP, whereby we use another unused position in the PSW mask to store
the state of vector register enablement per translation block.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-3-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
These are the new instruction formats related to vector instructions, as
defined up to the z14 (a.k.a. the latest PoP).
As v2 appears (like x2 in VRX) with d2/b2 in VRV, we have to assign it a
higher field number to avoid collisions.
Properly take care of the MSB (to be able to address 32 registers) for
each vector register field stored in the RXB field (bits 36-39 for all
vector instructions). As we have 32 vector registers and the
"v" fields are only 4 bits in size, the 5th bit is stored in the RXB.
We use a new type to indicate that the MSB has to be fetched from the
RXB.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-2-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
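A sketch of how the 5-bit register number is reconstructed: the 4-bit "v"
field provides the low bits and one of the four RXB bits provides the MSB.
extract64() is QEMU's bitfield helper; the function and parameter names here
are illustrative:

    /*
     * insn48: the raw 48-bit instruction, right-aligned in a uint64_t.
     * rxb_bit: which RXB bit (0-3, in instruction order) extends this field.
     */
    static int vec_reg_num(uint64_t insn48, int v_field, int rxb_bit)
    {
        /* big-endian instruction bits 36-39 are LSB-based bits 8-11 */
        int rxb = extract64(insn48, 8, 4);
        int msb = (rxb >> (3 - rxb_bit)) & 1;

        return (msb << 4) | v_field;
    }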
There are some fields in our struct LowCore which apparently have
been copied from a very old version of the Linux kernel. These
fields are not architected in the "Principles of Operation", and were
only used at these memory locations in Linux kernels older than
2.6.29. Newer Linux kernels moved the entries to different locations
or are not using them at all anymore. Thus we should never access
these fields from the QEMU side, so they should be removed.
While we're at it, also add a QEMU_BUILD_BUG_ON() statement to
assert that struct LowCore has the right size.
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1551775581-27989-1-git-send-email-thuth@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We can eliminate an extra TB in this case, which merely
loads a "return address" into rn.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
For priv levels 1 & 2, we were doing so from do_ibranch_priv.
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Add the kvm_arm_get_max_vm_ipa_size() helper that returns the
number of bits in the IPA address space supported by KVM.
This capability needs to be known to create the VM with a
specific IPA max size (the kvm_type passed along with the
KVM_CREATE_VM ioctl).
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 20190304101339.25970-6-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This will allow sharing code that adjusts rmode beyond
the existing users.
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This decoding more closely matches the ARMv8.4 Table C4-6,
Encoding table for Data Processing - Register Group.
In particular, op2 == 0 is now more than just Add/sub (with carry).
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We do not need an out-of-line helper for manipulating bits in pstate.
While changing things, share the implementation of gen_ss_advance.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The EL0+UMA check is unique to DAIF. While SPSel had avoided the
check by nature of already checking EL >= 1, the other post v8.0
extensions to MSR (imm) allow EL0 and do not require UMA. Avoid
the unconditional write to pc and use raise_exception_ra to unwind.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Minimize the number of places that will need updating when
the virtual host extensions are added.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Found by inspection: Rn is the base register against which the
load began; I is the register within the mask being processed.
The exception return should of course be processed from the loaded PC.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190301202921.21209-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The floating-point extension facility implemented certain changes to
BFP, HFP and DFP instructions.
As we don't implement HFP/DFP, we can ignore those completely. Related
to BFP, the changes include
- SET BFP ROUNDING MODE (SRNMB) instruction
- BFP-rounding-mode field in the FPC register is changed to 3 bits
- CONVERT FROM LOGICAL instructions
- CONVERT TO LOGICAL instructions
- Changes (rounding mode + XxC) added to
-- CONVERT TO FIXED
-- CONVERT FROM FIXED
-- LOAD FP INTEGER
-- LOAD ROUNDED
-- DIVIDE TO INTEGER
For TCG, we don't implement DIVIDE TO INTEGER, and it is harder to
implement, so skip that. Also, as we don't implement PFPO, we can skip
changes to that as well. The other parts are now implemented, we can
indicate the facility.
z14 PoP mentions that "The floating-point extension facility is installed
in the z/Architecture architectural mode. When bit 37 is one, bit 42 is
also one.", meaning that the DFP (decimal-floating-point) facility also
has to be indicated. We can ignore that for now.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-16-david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
"round to nearest with ties away from 0" maps to float_round_ties_away.
"round to prepare for shorter precision" maps to float_round_to_odd.
As all instructions properly check for valid rounding modes in translate.c
we can add an assert. Fix one missing empty line.
Cc: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-15-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
With the floating-point extension facility, LOAD ROUNDED has
a rounding mode specification and the inexact-exception control (XxC).
Handle them just like e.g. LOAD FP INTEGER.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-14-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
With the floating-point extension facility
- CONVERT FROM LOGICAL
- CONVERT TO LOGICAL
- CONVERT TO FIXED
- CONVERT FROM FIXED
- LOAD FP INTEGER
have both a rounding mode specification and the inexact-exception control
(XxC). Other instructions will be handled separately.
Check for valid rounding modes and also forward the XxC (via m4). To avoid
a lot of boilerplate code and changes to the helpers, combine the m3 and
m4 fields into a single 32-bit TCG variable. Perform the checks at a
central place, taking into account whether the m3 or m4 field was ignored
before the floating-point extension facility was introduced.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-13-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
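The combined field can be built with a single deposit. A minimal sketch of
the idea (the real translator function additionally validates the rounding
mode and the facility availability before emitting the constant):

    /* pack m3 (rounding mode) into bits 0-7 and m4 (XxC) into bits 8-15,
     * so helpers take a single argument carrying both fields */
    static TCGv_i32 fpinst_extract_m34(uint8_t m3, uint8_t m4)
    {
        return tcg_const_i32(deposit32(m3, 8, 8, m4));
    }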
Some instructions allow suppressing IEEE inexact exceptions.
z14 PoP, 9-23, "Suppression of Certain IEEE Exceptions"
IEEE-inexact-exception control (XxC): Bit 1 of the M4 field is the
XxC bit. If XxC is zero, recognition of IEEE-inexact exception is not
suppressed; if XxC is one, recognition of IEEE-inexact exception is
suppressed.
In particular, handling for overflow/underflow remains as is; inexact is
reported along with them:
z14 PoP, 9-23, "Suppression of Certain IEEE Exceptions"
For example, the IEEE-inexact-exception control (XxC) has no effect on
the DXC; that is, the DXC for IEEE-overflow or IEEE-underflow exceptions
along with the detail for exact, inexact and truncated, or inexact and
incremented, is reported according to the actual condition.
Follow up patches will wire it correctly up for the applicable
instructions.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-12-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We want to reuse this in the context of vector instructions. So use
better matching names and introduce s390_restore_bfp_rounding_mode().
While at it, add proper newlines.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-11-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Let's split handling of BFP/DFP rounding mode configuration. Also,
let's not reuse the sfpc handler, use a separate handler so we can
properly check for specification exceptions for SRNMB.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-10-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We already forward the 3 bits correctly in the translation functions. We
also have to handle them properly and check for specification
exceptions.
Setting an invalid rounding mode (BFP only, all DFP rounding modes)
results in a specification exception. Setting unassigned bits in the
fpc results in a specification exception.
This fixes LOAD FPC (AND SIGNAL) and SET FPC (AND SIGNAL). Also, for
SET BFP ROUNDING MODE, the 3-bit rounding mode is now explicitly checked.
Note: TCG_CALL_NO_WG is required for the sfpc handler, as we now inject
exceptions.
We won't be modeling absence of the "floating-point extension facility"
for now; it is not necessary, as most software takes the facility for
granted without checking.
z14 PoP, 9-23, "LOAD FPC"
When the floating-point extension facility is
installed, bits 29-31 of the second operand must
specify a valid BFP rounding mode and bits 6-7,
14-15, 24, and 28 must be zero; otherwise, a
specification exception is recognized.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-9-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
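In LSB-0 bit numbering the quoted rule reduces to a rounding-mode table
lookup plus one constant mask: big-endian FPC bits 6-7, 14-15, 24 and 28
become 0x03030088. A sketch, assuming softfloat's rounding-mode constants:

    static bool fpc_valid(uint32_t fpc)
    {
        /* BFP rounding modes 0-3 and 7 are valid; 4-6 are not assigned */
        static const int fpc_to_rnd[8] = {
            float_round_nearest_even, float_round_to_zero,
            float_round_up, float_round_down,
            -1, -1, -1,
            float_round_to_odd,
        };

        return fpc_to_rnd[fpc & 0x7] != -1 && !(fpc & 0x03030088u);
    }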
The trap is triggered based on priority of the enabled signaling flags.
Only overflow and underflow allow a concurrent inexact exception.
z14 PoP, 9-33, Figure 9-21
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-8-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We can directly work on the uint64_t value, no need for a temporary
uint32_t value.
Also cleanup and shorten the comments.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-7-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
IEEE underflows are not reported when the mask bit is off and we don't
also have an inexact exception.
z14 PoP, 9-20, "IEEE Underflow":
An IEEE-underflow exception is recognized for an
IEEE target when the tininess condition exists and
either: (1) the IEEE-underflow mask bit in the FPC
register is zero and the result value is inexact, or (2)
the IEEE-underflow mask bit in the FPC register is
one.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-6-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Many things are wrong and some parts cannot be fixed yet. Fix what we
can fix easily and add two FIXMEs:
The fpc flags are not updated in case an exception is actually injected.
Inexact exceptions have to be handled separately, as they are the only
exceptions that can coexist with underflows and overflows.
I reread the horribly complicated chapters in the PoP at least 5 times
and hope I got it right.
For references:
- z14 PoP, 9-18, "IEEE Exceptions"
- z14 PoP, 19-9, Figure 19-8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-5-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We want to reuse that function in vector instruction context. While at it,
cleanup the code, using defines for magic values and avoiding the
handcrafted bit conversion.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-4-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Let's use the proper conversion functions now that we have them.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-3-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Let's detect normal and denormal ("subnormal") numbers reliably. Also
test for quiet NaNs. As only one class is possible, test common cases
first.
While at it, use a better check to test for the mask bits in the data
class mask. The data class mask has 12 bits, whereby bit 0 is the
leftmost bit and bit 11 the rightmost bit. In the PoP an easy to read
table with the numbers is provided for the VECTOR FP TEST DATA CLASS
IMMEDIATE instruction, the table for TEST DATA CLASS is more confusing
as it is based on 64 bit values.
Factor the checks out into separate functions, as they will also be
needed for the floating point vector instructions. We can use a macro
to generate the functions.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-2-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
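The resulting class bit falls out of a small helper: class n sets data
class mask bit n for the positive case and bit n + 1 for the negative case,
bit 0 being the leftmost of the 12 bits. A sketch of the macro-generated
classifier, assuming softfloat-style predicates:

    /* bit 0 is the leftmost of the 12 data class mask bits */
    static uint16_t dcmask(int bit, bool neg)
    {
        return 1 << (11 - bit - neg);
    }

    #define DEF_FLOAT_DCMASK(_TYPE)                                    \
    static uint16_t _TYPE##_dcmask(CPUS390XState *env, _TYPE f)        \
    {                                                                  \
        const bool neg = _TYPE##_is_neg(f);                            \
                                                                       \
        /* sorted by most common cases - only one class matches */     \
        if (_TYPE##_is_normal(f)) {                                    \
            return dcmask(2, neg);                                     \
        } else if (_TYPE##_is_zero(f)) {                               \
            return dcmask(0, neg);                                     \
        } else if (_TYPE##_is_denormal(f)) {                           \
            return dcmask(4, neg);                                     \
        } else if (_TYPE##_is_infinity(f)) {                           \
            return dcmask(6, neg);                                     \
        } else if (_TYPE##_is_quiet_nan(f, &env->fpu_status)) {        \
            return dcmask(8, neg);                                     \
        }                                                              \
        /* signaling NaN is the only remaining case */                 \
        return dcmask(10, neg);                                        \
    }
    DEF_FLOAT_DCMASK(float64)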
Use a new CC helper to calculate the CC lazily if needed. While the
PoP mentions that "A 32-bit unsigned binary integer" is placed into the
first operand, there is no word saying that the other 32 bits (high
part) are left untouched. Maybe the other 32 bits are unpredictable.
So store 64 bits for now.
Bit magic courtesy of Richard.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190225200318.16102-8-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Nice trick to load a 32 bit value into vector element 0 (32 bit element
size) from memory, zeroing out element 1. The short HFP to long HFP
conversion really only is a shift.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190225200318.16102-7-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Also properly wrap in 24-bit mode. While at it, convert the comment (and
drop the comment about fundamental TCG optimizations).
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190225200318.16102-6-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We'll use that a lot along with gvec helpers, to calculate the start
address of a vector.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190225200318.16102-5-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We will use the s390x term "Element Size" (es) for MO_8 == 0, MO_16 == 1,
etc. This is a simple rename of variables.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190225200318.16102-4-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Will be needed, so add it to the format description.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190225200318.16102-2-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
If we have vector registers and the designation is not zero, we have
to try to write the vector registers. If the designation is zero or
if storing fails, we must not indicate validity. s390_build_validity_mcic()
automatically already sets validity if the vector instruction facility
is installed.
As long as we don't support the guarded-storage facility, the alignment
and size of the area is always 1024 bytes.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190222081153.14206-4-david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Convert this to QEMU style.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190222081153.14206-3-david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
As we will support vector instructions soon, and vector registers are
stored in 64bit host chunks, let's use cpu_to_be64. Same applies to the
guarded storage control block.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190222081153.14206-2-david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20190228-1' into staging
target-arm queue:
* add MHU and dual-core support to Musca boards
* refactor some VFP insns to be gated by ID registers
* Revert "arm: Allow system registers for KVM guests to be changed by QEMU code"
* Implement ARMv8.2-FHM extension
* Advertise JSCVT via HWCAP for linux-user
* remotes/pmaydell/tags/pull-target-arm-20190228-1:
linux-user: Enable HWCAP_ASIMDFHM, HWCAP_JSCVT
target/arm: Enable ARMv8.2-FHM for -cpu max
target/arm: Implement VFMAL and VFMSL for aarch32
target/arm: Implement FMLAL and FMLSL for aarch64
target/arm: Add helpers for FMLAL
Revert "arm: Allow system registers for KVM guests to be changed by QEMU code"
target/arm: Gate "miscellaneous FP" insns by ID register field
target/arm: Use MVFR1 feature bits to gate A32/T32 FP16 instructions
hw/arm/armsse: Unify init-svtor and cpuwait handling
hw/arm/iotkit-sysctl: Implement CPUWAIT and INITSVTOR*
hw/arm/iotkit-sysctl: Add SSE-200 registers
hw/misc/iotkit-sysctl: Correct typo in INITSVTOR0 register name
target/arm/arm-powerctl: Add new arm_set_cpu_on_and_reset()
target/arm/cpu: Allow init-svtor property to be set after realize
hw/arm/armsse: Wire up the MHUs
hw/misc/armsse-mhu.c: Model the SSE-200 Message Handling Unit
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/amarkovic/tags/mips-queue-feb-27-2019' into staging
MIPS queue for February 27th, 2019
* remotes/amarkovic/tags/mips-queue-feb-27-2019:
target/mips: Preparing for adding MMI instructions
tests/tcg: target/mips: Add tests for MSA integer max/min instructions
tests/tcg: target/mips: Add wrappers for MSA integer max/min instructions
qemu-doc: Add section on MIPS' Boston board
qemu-doc: Add section on MIPS' Fulong 2E board
qemu-doc: Move section on MIPS' mipssim pseudo board
disas: nanoMIPS: Fix a function misnomer
tests/tcg: target/mips: Add tests for MSA integer compare instructions
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Load/store opcodes may raise MMU exceptions. Normally exceptions should
be checked in priority order before any actual operations, but since MMU
exceptions are tightly coupled with actual memory access, there's
currently no way to do it.
Approximate this behavior by executing all load, then all store, and
then all other opcodes in the FLIX bundles. Use the opcode dependency
mechanism to express ordering. Mark load/store opcodes with
XTENSA_OP_{LOAD,STORE} flags. Newer libisa has classifier functions that
can tell whether an opcode is a load or store, but this information is not
available in the existing overlays.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Currently topological opcode sorting stops at the first detected
dependency loop. Introduce struct opcode_arg_copy that describes
temporary register copy. Scan remaining opcodes searching for
dependencies that can be broken, break them by introducing temporary
register copies and record them in an array. In case of success
create local temporaries and initialize them with current register
values. Share single temporary copy between all register users. Delete
temporaries after translation.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
libisa represents boolean registers b0..b15 as a BR register file and as
BR4 and BR8 register groups. Add these register files and use
OpcodeArg::{in,out} parameters to access boolean registers in
translators.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
libisa represents MAC16 registers m0..m3 as an MR register file. Add
this register file and reference its registers directly from the
translate_mac16. Drop translator parameter that indicates whether opcode
argument is in ar or in mr.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
To support circular register dependencies in FLIX bundles opcode inputs
and outputs must be separate and adjustable. Circular dependencies can
be broken by making temporary copies of opcode inputs and substituting
them into the arguments array instead of the original registers.
E.g. the circular register dependency in the following bundle:
{ mov a2, a3 ; mov a3, a2 }
can be resolved by making copy a2' = a2 and substituting it as input
argument of the second opcode:
{ mov a2, a3 ; mov a3, a2' }
Change opcode translator prototype to accept OpcodeArg array as
argument. For each register argument initialize OpcodeArg::{in,out} with
TCGv_* of the respective register. Don't explicitly use cpu_R in the
opcode translators, use OpcodeArg::{in,out} instead.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Move return address calculation and WINDOW_START adjustment out of the
retw helper to simplify logic a bit and avoid using registers directly.
Pass a0 as a parameter to the helper.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Opcodes that modify the WINDOW_BASE SR don't have a dependency on opcodes that
use windowed registers. If such opcodes are combined in a single
instruction they may not be correctly ordered. Instead of adding said
dependency use temporary register to store changed WINDOW_BASE value and
do actual register window rotation as a postprocessing step.
Not all opcodes that change WINDOW_BASE need this: retw, rfwo and rfwu
are also jump opcodes, so they are guaranteed to be translated last and
thus will not affect other opcodes in the same instruction.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>