The vpm0 bit was removed from the LPCR in POWER9. This bit controlled
whether ISI and DSI interrupts were directed to the hypervisor or the
partition. These interrupts now always go to the hypervisor, so it is no
longer necessary to check the vpm0 bit in the LPCR.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The logical partitioning control register controls a thread's operation
based on the partition it is currently executing in. Add new definitions
and update the mask used when writing to the LPCR based on the POWER9 spec.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
POWER9 processors implement the mmu as defined in version 3.00 of the ISA.
Add a definition for this mmu model and set the POWER9 cpu model to use
this mmu model.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The DPFD field in the LPCR is 3 bits wide. It has always been defined
as 0x3 << shift, which only covers a 2-bit field; this is incorrect.
Correct this.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
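For illustration, a minimal sketch of the kind of fix described above; the shift value below is hypothetical and not QEMU's actual definition.

    /* A 3-bit field needs a 3-bit mask; the shift here is illustrative only. */
    #define LPCR_DPFD_SHIFT  52                           /* hypothetical shift */
    #define LPCR_DPFD_WRONG  (0x3ULL << LPCR_DPFD_SHIFT)  /* covers only 2 bits */
    #define LPCR_DPFD        (0x7ULL << LPCR_DPFD_SHIFT)  /* covers all 3 bits  */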
xscvqpudz: VSX Scalar truncate & Convert Quad-Precision format to
Unsigned Doubleword format
xscvqpuwz: VSX Scalar truncate & Convert Quad-Precision format to
Unsigned Word format
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xsaddqpo: VSX Scalar Add Quad-Precision using round to Odd
xsmulqpo: VSX Scalar Multiply Quad-Precision using round to Odd
xsdivqpo: VSX Scalar Divide Quad-Precision using round to Odd
xscvqpdpo: VSX Scalar round & Convert Quad-Precision format to
Double-Precision format using round to Odd
xssqrtqpo: VSX Scalar Square Root Quad-Precision using round to Odd
xssubqpo: VSX Scalar Subtract Quad-Precision using round to Odd
In addition, fix the invalid bitmask in the instruction encoding
of xssqrtqp[o].
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
CC: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Use the available wait instruction implementation.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
slbsync: SLB Synchronize
The instruction provides an ordering function for the effects of all
slbieg instructions executed by the thread executing the slbsync
instruction.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
slbieg: SLB Invalidate Entry Global
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
stwat: Store Word Atomic
stdat: Store Doubleword Atomic
The instruction includes a function code (5 bits) which details the
operation to be performed. The patch implements five such functions.
Signed-off-by: Balamuruhan S <bala24@linux.vnet.ibm.com>
Signed-off-by: Harish S <harisrir@linux.vnet.ibm.com>
Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
[ implement stdat, use macro and combine both implementation ]
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
lwat: Load Word Atomic
ldat: Load Doubleword Atomic
The instruction includes a function code (5 bits) which details the
operation to be performed. The patch implements five such functions.
Signed-off-by: Balamuruhan S <bala24@linux.vnet.ibm.com>
Signed-off-by: Harish S <harisrir@linux.vnet.ibm.com>
Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
[ combine both lwat/ldat implementation using macro ]
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
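A hedged sketch of how a 5-bit function code might be dispatched; the function-code values and helper name below are hypothetical and do not reflect the actual ISA encoding or the QEMU implementation.

    #include <stdint.h>

    /* Hypothetical dispatch on a 5-bit function code (values illustrative). */
    static uint64_t atomic_load_op(uint64_t mem_val, uint64_t reg_val, int fc)
    {
        switch (fc & 0x1f) {
        case 0x00: return mem_val + reg_val;   /* e.g. fetch-and-add */
        case 0x01: return mem_val ^ reg_val;   /* e.g. fetch-and-xor */
        case 0x02: return mem_val | reg_val;   /* e.g. fetch-and-or  */
        case 0x03: return mem_val & reg_val;   /* e.g. fetch-and-and */
        default:   return mem_val;             /* unimplemented codes */
        }
    }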
Provide a new cpu_supports_isa function which allows callers to
determine whether a CPU supports one of the ISA_ flags, by testing
whether the associated struct mips_def_t sets the ISA flags in its
insn_flags field.
An example use of this is to allow boards which generate bootloader code
to determine the properties of the CPU that will be used, for example
whether the CPU is 64 bit or which architecture revision it implements.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: Leon Alrae <leon.alrae@imgtec.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
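A rough sketch of the idea, using a stand-in struct so the snippet is self-contained; the helper name and stand-in type are illustrative, not the actual QEMU code.

    #include <stdbool.h>
    #include <stdint.h>

    /* Minimal stand-in for QEMU's mips_def_t: only the field we care about. */
    struct mips_def_t { uint64_t insn_flags; };

    /* Sketch: a CPU "supports" an ISA_ flag if its definition sets that flag. */
    static bool cpu_def_supports_isa(const struct mips_def_t *def, uint64_t isa_flags)
    {
        return def && (def->insn_flags & isa_flags) != 0;
    }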
When running certain HMP commands ("info registers", "info cpustats",
"info tlb", "nmi", "memsave" or dumping virtual memory) with the "none"
machine, QEMU crashes with a segmentation fault. This happens because the
"none" machine does not have any CPUs by default, but these HMP commands
did not check for a valid CPU pointer yet. Add such checks now, so we get
an error message about the missing CPU instead.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1484309555-1935-1-git-send-email-thuth@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
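A sketch of the kind of guard described, assuming QEMU's existing mon_get_cpu() and monitor_printf() helpers; the exact error wording is illustrative.

    /* Inside an HMP command handler: bail out cleanly when no CPU exists. */
    CPUState *cs = mon_get_cpu();
    if (!cs) {
        monitor_printf(mon, "No CPU available\n");
        return;
    }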
Commit 2afbdf8 ("target-i386: exception handling for memory helpers",
2015-09-15) changed tlb_fill's cpu_restore_state+raise_exception_err
to raise_exception_err_ra. After this change, the cpu_restore_state
and raise_exception_err's cpu_loop_exit are merged into
raise_exception_err_ra's cpu_loop_exit_restore.
This actually fixed some bugs, but when SVM is enabled there is a
second path from raise_exception_err_ra to cpu_loop_exit. This is
the VMEXIT path, and there cpu_vmexit is called without a preceding
cpu_restore_state.
The fix is to pass the retaddr to cpu_vmexit (via
cpu_svm_check_intercept_param). All helpers can now use GETPC() to pass
the correct retaddr, too.
Cc: qemu-stable@nongnu.org
Fixes: 2afbdf8480
Reported-by: Alexander Boettcher <alexander.boettcher@genode-labs.com>
Tested-by: Alexander Boettcher <alexander.boettcher@genode-labs.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
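An illustrative fragment of the pattern described, not the actual patch; the helper name is made up and the prototype of cpu_svm_check_intercept_param() is assumed from the commit text.

    /* Hypothetical helper body showing the return address being threaded through. */
    void helper_example_intercept(CPUX86State *env, uint32_t type, uint64_t param)
    {
        /* GETPC() captures the host return address of this helper call. */
        cpu_svm_check_intercept_param(env, type, param, GETPC());
    }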
It's not very convenient to use the crash-information property interface,
so provide a CPU class callback to get the guest crash information, and pass
that information in the event.
Signed-off-by: Anton Nefedov <anton.nefedov@virtuozzo.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
Message-Id: <1487053524-18674-3-git-send-email-den@openvz.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Windows reports BSOD parameters through Hyper-V crash MSRs. This
information is very useful for initial crash analysis and thus
it would be nice to have a way to fetch it.
Signed-off-by: Anton Nefedov <anton.nefedov@virtuozzo.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
Message-Id: <1487053524-18674-2-git-send-email-den@openvz.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The HW does not special-case r0, but the ABI specifies that r0 should
contain 0. If we expose this fact to the optimizer, we can simplify
a lot of the generated code. We must of course verify that r0==0, but
that is trivial to do with a TB flag.
Signed-off-by: Richard Henderson <rth@twiddle.net>
The NPC SPR is really only supposed to be used for FPGA debugging.
It contains the same contents as PC, unless one plays games. Follow
the or1ksim implementation in flushing delayed branch state when it
is changed.
The PPC SPR need not be updated every instruction, merely when we
exit the TB or attempt to read its contents.
Signed-off-by: Richard Henderson <rth@twiddle.net>
This allows the tcg optimizer to see, and fold, all of the
constants involved in a GOT base register load sequence.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Note that the specification for lf.madd.s is confused. It's
the only mention of supposed FPMADDHI/FPMADDLO special registers.
On the other hand, or1ksim implements a somewhat normal non-fused
multiply and add. Mirror that.
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Significantly simplifies the implementation of the use of MAC.
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Not documented as disabled for user mode.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This avoids having to keep merging and extracting the flag from SR.
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Decoding the opcodes in the right order reduces the code by 100+ lines.
Also, it happens to put the opcodes in the same order as Chapter 17.
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Fix incorrect overflow calculation. Move overflow exception check
to a helper function, to eliminate inline branches. Remove some
incorrect special casing of R0. Implement multiply inline.
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The architecture manual is consistent in using "I" for signed
fields and "K" for unsigned fields. Mirror that.
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Avoids warnings from unused variables etc.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
I am working on testing instruction emulation patches for the Linux
kernel. During testing I found these 2 issues:
- DSX (delay slot exception) is set but never cleared
- EEAR for illegal instructions should point to the faulting instruction
(as per the OpenRISC spec) but it does not
This patch fixes these two issues by clearing the DSX flag when not in a
delay slot and by setting EEAR to the exception PC when handling illegal
instruction exceptions.
After this patch the openrisc kernel with the latest patches boots fine on
QEMU and instruction emulation works.
Cc: qemu-trivial@nongnu.org
Cc: openrisc@lists.librecores.org
Signed-off-by: Stafford Horne <shorne@gmail.com>
Message-Id: <20170113220028.29687-1-shorne@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The member VMStateField.start is used for two things: partial data
migration for VBUFFER data (basically providing migration for a
sub-buffer) and locating the next element in a QTAILQ.
The implementation of the VBUFFER feature is broken when VMSTATE_ALLOC
is used. This goes unnoticed, however, because partial migration
for VBUFFER is not actually used at all.
Let's consolidate the usage of VMStateField.start by removing support
for partial migration for VBUFFER.
Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Message-Id: <20170203175217.45562-1-pasic@linux.vnet.ibm.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
This patch contains several fixes to enable vPMU under TCG mode. It
first removes the check of kvm_enabled() while unsetting
ARM_FEATURE_PMU. With it, the .pmu option can be used to turn vPMU on/off
under TCG mode. Secondly, the PMU node of the DT table is now created under
TCG. The last fix is to disable the masking of the PMUver field of
ID_AA64DFR0_EL1.
Signed-off-by: Wei Huang <wei@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1486504171-26807-5-git-send-email-wei@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch adds access support for PMINTENSET_EL1.
Signed-off-by: Wei Huang <wei@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1486504171-26807-4-git-send-email-wei@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In order to support Linux perf, which uses PMXEVTYPER register,
this patch adds read/write access support for PMXEVTYPER. The access
is CONSTRAINED UNPREDICTABLE when PMSELR is not 0x1f. Additionally
this patch adds support for PMXEVTYPER_EL0.
Signed-off-by: Wei Huang <wei@redhat.com>
Message-id: 1486504171-26807-3-git-send-email-wei@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch adds support for AArch64 register PMSELR_EL0. The existing
PMSELR definition is revised accordingly.
Signed-off-by: Wei Huang <wei@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: Moved #ifndef CONFIG_USER_ONLY to cover new regdefs]
Message-id: 1486504171-26807-2-git-send-email-wei@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add support for generating the ISS (Instruction Specific Syndrome)
for Data Abort exceptions taken from AArch32. These syndromes are
used by hypervisors for example to trap and emulate memory accesses.
This is the equivalent for AArch32 guests of the work done for AArch64
guests in commit aaa1f954d4.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
In the ARM ldr/str decode path, rather than directly testing
"insn & (1 << 21)" and "insn & (1 << 24)", abstract these
bits out into wbit and pbit local flags. (We will want to
do more tests against them to determine whether we need to
provide syndrome information.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
In BE32 mode, sub-word size watchpoints can fail to trigger because the
address of the access is adjusted in the opcode helpers before being
compared with the watchpoint registers. This patch reverses the address
adjustment before performing the comparison with the help of a new CPUClass
hook.
This version of the patch augments and tidies up comments a little.
Signed-off-by: Julian Brown <julian@codesourcery.com>
Message-id: caaf64ffc72f6ae183015337b7afdbd4b8989cb6.1484929304.git.julian@codesourcery.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
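A hedged sketch of the shape of such a CPUClass hook; the name, signature and body are assumptions based on the commit text, not a verified QEMU prototype.

    /* Sketch: undo the BE32 address munging before watchpoint comparison. */
    static vaddr arm_adjust_watchpoint_address(CPUState *cs, vaddr addr, int len)
    {
        /* For sub-word accesses in BE32 mode, reverse the XOR applied at
         * load/store time so the raw guest address is compared; shown here
         * as a placeholder that returns the address unchanged. */
        return addr;
    }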
Thumb-1 code has some issues in BE32 mode (as currently implemented). In
short, since bytes are swapped within words at load time for BE32
executables, this also swaps pairs of adjacent Thumb-1 instructions.
This patch un-swaps those pairs of instructions again, both for execution,
and for disassembly. (The previous version of the patch always read four
bytes in arm_read_memory_func and then extracted the proper two bytes,
in a probably misguided attempt to match the behaviour of actual hardware
as described by e.g. the ARM9TDMI TRM, section 3.3 "Endian effects for
instruction fetches". It's less complicated to just read the correct
two bytes though.)
Signed-off-by: Julian Brown <julian@codesourcery.com>
Message-id: ca20462a044848000370318a8bd41dd0a4ed273f.1484929304.git.julian@codesourcery.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add a new "cfgend" property which selects whether the CPU resets into
big-endian mode or not. This setting affects whether we reset with
SCTLR_B (ARMv6 and earlier) or SCTLR_EE (ARMv7 and later) set.
Signed-off-by: Julian Brown <julian@codesourcery.com>
Message-id: 11420d1c49636c1790e60578ee996e51f0f0b835.1484929304.git.julian@codesourcery.com
[PMM: use error_report_err() rather than error_report();
move the integratorcp changes to their own patch;
drop an unnecessary extra #include;
rephrase commit message accordingly;
move setting of reset_sctlr above registration of cpregs
so it actually has an effect]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
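A sketch of the reset behaviour described, using QEMU's SCTLR_B/SCTLR_EE definitions; whether the check is exactly on ARM_FEATURE_V7, and the precise fields touched, are assumptions.

    /* On reset: start big-endian if the new "cfgend" property is set. */
    if (cpu->cfgend) {
        if (arm_feature(&cpu->env, ARM_FEATURE_V7)) {
            cpu->reset_sctlr |= SCTLR_EE;   /* ARMv7 and later */
        } else {
            cpu->reset_sctlr |= SCTLR_B;    /* ARMv6 and earlier */
        }
    }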
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-2.9-20170202' into staging
ppc patch queue 2017-02-02
This obsoletes ppc-for-2.9-20170112, which had a MacOS build bug.
This is a long overdue ppc pull request for qemu-2.9. It's been a
long time coming due to some holidays and inconveniently timed
problems with testing. So, there's a lot in here:
* More POWER9 instruction implementations for TCG
* The simpler parts of my CPU compatibility mode cleanup
* This changes behaviour to prefer compatibility modes over
"raW" mode for new machine type versions
* New "40p" machine type which is essentially a modernized and
cleaned up "prep". The intention is that it will replace "prep"
once it has some more testing and polish.
* Add pseries-2.9 machine type
* Implement H_SIGNAL_SYS_RESET hypercall
* Consolidate the two alternate CPU init paths in pseries by
making it always go through CPU core objects to initialize CPU
* A number of bugfixes and cleanups
* Stop the guest timebase when the guest is stopped under KVM.
This makes the guest system clock also stop when paused, which
matches the x86 behaviour.
* Some preliminary cleanups leading towards implementation of the
POWER9 MMU.
There are also some changes not strictly related to ppc code, but for
its benefit:
* Limit the pci-expander-bridge (PXB) device to x86 guests only
(it's essentially a hack to work around historical x86
limitations)
* Some additions to the 128-bit math in host_utils, necessary for
some of the new instructions.
* Revise a number of qtests and enable them for ppc
# gpg: Signature made Thu 02 Feb 2017 01:40:16 GMT
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.9-20170202: (107 commits)
hw/ppc/pnv: Use error_report instead of hw_error if a ROM file can't be found
ppc/kvm: Handle the "family" CPU via alias instead of registering new types
target/ppc/mmu_hash64: Fix incorrect shift value in amr calculation
target/ppc/mmu_hash64: Fix printing unsigned as signed int
tcg/POWER9: NOOP the cp_abort instruction
target/ppc/debug: Print LPCR register value if register exists
target-ppc: Add xststdc[sp, dp, qp] instructions
target-ppc: Add xvtstdc[sp,dp] instructions
target-ppc: Add MMU model check for booke machines
ppc: switch to constants within BUILD_BUG_ON
target/ppc/cpu-models: Fix/remove bad CPU aliases
target/ppc: Remove unused POWERPC_FAMILY(POWER)
spapr: clock should count only if vm is running
ppc: Remove unused function cpu_ppc601_rtc_init()
target/ppc: Add pcr_supported to POWER9 cpu class definition
powerpc/cpu-models: rename ISAv3.00 logical PVR definition
target-ppc: Add xvcv[hpsp, sphp] instructions
target-ppc: Add xsmulqp instruction
target-ppc: Add xsdivqp instruction
target-ppc: Add xscvsdqp and xscvudqp instructions
...
# Conflicts:
# hw/pci-bridge/Makefile.objs
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When running with KVM on POWER, we are registering a "family" CPU
type for the host CPU that we are running on. For example, on all
POWER8-compatible hosts, we register a "POWER8" CPU type, so that
you can always start QEMU with "-cpu POWER8" there, without the
need to know whether you are running on a POWER8, POWER8E or POWER8NVL
host machine.
However, we also have a "POWER8" CPU alias in the ppc_cpu_aliases list
(that is mainly useful for TCG). This leads to two cosmetic drawbacks:
If the user runs QEMU with "-cpu ?", we always claim that POWER8 is an
"alias for POWER8_v2.0" - which is simply not true when running with
KVM on POWER. And when using the 'query-cpu-definitions' QMP call,
there are currently two entries for "POWER8", one for the alias, and
one for the additional registered type.
To solve the two problems, we should rather update the "family" alias
instead of registering new types. We then only have one "POWER8"
CPU definition around, an alias, which also points to the right
destination.
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1396536
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We are calculating the authority mask register key value wrong.
The pte entry contains the key value with the two upper bits and the three
lower bits stored separately. We should use these two portions to get a
5-bit value, not OR them together, which would only give us a 3-bit value.
Fix this.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We were printing an unsigned value as a signed value; fix this.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The cp_abort instruction is used to remove the state of an in-progress
copy-paste sequence. POWER9 compilers add this in various places, such
as context switches, which causes illegal instruction signals since we
don't yet implement this instruction.
Given that there is no implementation of the copy-paste facility and that
we don't claim to support it, we can just no-op this instruction.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
It can be useful when debugging to print the LPCR value.
Thus we add the LPCR to the "info registers" output if the register has
been defined.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xststdcsp: VSX Scalar Test Data Class Single-Precision
xststdcdp: VSX Scalar Test Data Class Double-Precision
xststdcqp: VSX Scalar Test Data Class Quad-Precision
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xvtstdcsp: VSX Vector Test Data Class Single-Precision
xvtstdcdp: VSX Vector Test Data Class Double-Precision
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Macro calls without a trailing ; look weird in C; this only works as a side
effect of how QEMU_BUILD_BUG_ON is implemented. Fix this up.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
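For illustration, the style being enforced; the condition checked here is arbitrary.

    /* Constant expression checked at compile time; note the trailing ';'. */
    QEMU_BUILD_BUG_ON(sizeof(uint32_t) != 4);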
The stub version of MISMATCH_CHECK is empty, so it is easy for people not
building KVM on ARM to misuse it. Use QEMU_BUILD_BUG_ON similarly to the
non-stub version to make it easier to catch bugs.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
There is no CPU model called "7447_v1.2" in our list, so the
"7447" alias should point to "7447_v1.1" instead. Let's also
remove the "codename" aliases that point to non-implemented
CPU models - they are really of no use here.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We do not support POWER1 CPUs in QEMU, so it does not make sense
to keep this stub around.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This is a port to ppc of the i386 commit:
00f4d64 kvmclock: clock should count only if vm is running
We remove timebase_post_load function, and use the VM state
change handler to save and restore the guest_timebase (on stop
and continue).
We keep timebase_pre_save to reduce the clock difference on
migration like in:
6053a86 kvmclock: reduce kvmclock difference on migration
The time base offset was originally introduced by commit
98a8b52 spapr: Add support for time base offset migration
So while the VM is paused, the time is stopped. This allows us to have
the same result with date (based on the Time Base Register) and
hwclock (based on the "get-time-of-day" RTAS call).
Moreover, in TCG mode the Time Base is always paused, so this
patch also makes the behavior consistent between TCG and KVM.
The VM state field "time_of_the_day_ns" is now useless, but we keep
it to be able to migrate to older versions of the machine.
As vmstate_ppc_timebase structure (with timebase_pre_save() and
timebase_post_load() functions) was only used by vmstate_spapr,
we register the VM state change handler only in ppc_spapr_init().
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
pcr_supported is used to define the supported PCR values for a given
processor. A POWER9 processor can support 3.00, 2.07, 2.06 and 2.05
compatibility modes, thus we set this accordingly.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This logical PVR value now corresponds to ISA version 3.00 so rename it
accordingly.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xvcvhpsp: VSX Vector Convert Half Precision to Single Precision
xvcvsphp: VSX Vector Convert Single Precision to Half Precision
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xscvsdqp: VSX Scalar Convert Signed Doubleword format to
Quad-Precision format
xscvudqp: VSX Scalar Convert Unsigned Doubleword format to
Quad-Precision format
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xscmpoqp, xscmpuqp & xscmpexpqp were added before the f128 field was
introduced in ppc_vsr_t. Now that we have it, use it instead of
generating the 128-bit float from two 64-bit fields.
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
bcdutrunc.: Decimal unsigned truncate. Works like bcdtrunc. but with
unsigned BCD numbers.
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
bcdtrunc.: Decimal integer truncate. Given a BCD number in vrb and the
number of bytes to truncate in vra, the return register will have vrb
with such bits truncated.
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xscvqpsdz: VSX Scalar truncate & Convert Quad-Precision format to
Signed Doubleword format
xscvqpswz: VSX Scalar truncate & Convert Quad-Precision format to
Signed Word format
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
bcdsr.: Decimal shift and round. This instruction works like bcds.;
however, when performing a right shift, 1 will be added to the
result if the last digit was >= 5.
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
bcdus.: Decimal unsigned shift. This instruction works like bcds. but
considers only unsigned BCDs (no sign in the least significant 4 bits).
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
bcds.: Decimal shift. Given two registers vra and vrb, this instruction
shifts the vrb value by vra bits into the result register.
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This commit fixes a warning in the code "(i * 2) ? .. : ..", which
is better written as "i ? .. : ..", and improves the BCD_DIG_BYTE
macro by placing parentheses around its argument to avoid possible
expansion issues such as BCD_DIG_BYTE(i + j).
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
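A sketch of the macro-hygiene point; the macro body shown is illustrative, not necessarily QEMU's actual definition.

    /* Without parentheses around 'n', BCD_DIG_BYTE(i + j) would expand incorrectly. */
    #define BCD_DIG_BYTE(n)  (15 - ((n) / 2))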
xscvqpdp: VSX Scalar round & Convert Quad-Precision format to
Double-Precision format
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xscvdpqp: VSX Scalar Convert Double-Precision format to
Quad-Precision format
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Once a compatibility mode is negotiated with the guest,
h_client_architecture_support() uses run_on_cpu() to update each CPU to
the new mode. We're going to want this logic somewhere else shortly,
so make a helper function to do this global update.
We put it in target-ppc/compat.c - it makes as much sense at the CPU level
as it does at the machine level. We also move the cpu_synchronize_state()
into ppc_set_compat(), since it doesn't really make any sense to call that
without synchronizing state.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Use correct FP precision when setting FPRF in FP conversion helpers
instead of always assuming float64 precision.
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xscvdphp: VSX Scalar round & Convert Double-Precision format to
Half-Precision format
xscvhpdp: VSX Scalar Convert Half-Precision format to
Double-Precision format
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Since helper_compute_fprf() works on a float64 argument, rename it
to helper_compute_fprf_float64(). Also use a macro to generate
helper_compute_fprf_float64() so that a float128 version of the same
helper can be introduced easily later.
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
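A minimal sketch of the macro pattern described; the macro body and exact prototype are illustrative, not the real helper.

    /* Illustrative only: stamp out a per-type FPRF helper from one template
     * so a float128 variant can be added with a single extra expansion. */
    #define COMPUTE_FPRF(tp)                                         \
        void helper_compute_fprf_##tp(CPUPPCState *env, tp arg)      \
        {                                                            \
            /* classify 'arg' and set the FPRF field in the FPSCR */ \
        }
    COMPUTE_FPRF(float64)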
Replace isden() by float64_is_zero_or_denormal() so that code in
helper_compute_fprf() can be reused to work with float128 argument.
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Use a float64 argument instead of uint64_t in helper_compute_fprf().
This allows the code in helper_compute_fprf() to be reused later to
work with a float128 argument too.
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xxinsertw: VSX Vector Insert Word
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
xxextractuw: VSX Vector Extract Unsigned Word
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Current ppc_set_compat() will attempt to set any compatibility mode
specified, regardless of whether it's available on the CPU. The caller is
expected to make sure it is setting a possible mode, which is awkward
because most of the information to make that decision is at the CPU level.
This begins to clean this up by introducing a ppc_check_compat() function
which will determine if a given compatibility mode is supported on a CPU
(and also whether it lies within specified minimum and maximum compat
levels, which will be useful later). It also contains an assertion that
the CPU has a "virtual hypervisor"[1], that is, that the guest isn't
permitted to execute hypervisor privilege code. Without that, the guest
would own the PCR and so could override any mode set here. Only machine
types which use a virtual hypervisor (i.e. 'pseries') should use
ppc_check_compat().
ppc_set_compat() is modified to validate the compatibility mode it is given
and fail if it's not available on this CPU.
[1] Or user-only mode, which also obviously doesn't allow access to the
hypervisor privileged PCR. We don't use that now, but could in future.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
To continue consolidation of compatibility mode information, this rewrites
the ppc_get_compat_smt_threads() function using the table of compatibility
modes in target-ppc/compat.c.
It's not a direct replacement; the new ppc_compat_max_threads() function
has simpler semantics - it just returns the number of threads the cpu
model has, taking into account any compatibility mode it is in.
This no longer takes into account kvmppc_smt_threads() as the previous
version did. That check wasn't useful because we check in
ppc_cpu_realizefn() that CPUs aren't instantiated with more threads
than kvm allows (or if we didn't, things would already be broken and
this won't make it any worse).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
This rewrites the ppc_set_compat() function so that instead of open coding
the various compatibility modes, it reads the relevant data from a table.
This is a first step in consolidating the information on compatibility
modes scattered across the code into a single place.
It also makes one change to the logic. The old code masked the bits
to be set in the PCR (Processor Compatibility Register) by which bits
are valid on the host CPU. This made no sense, since it was done
regardless of whether our guest CPU was the same as the host CPU or
not. Furthermore, the actual PCR bits are only relevant for TCG[1] -
KVM instead uses the compatibility mode we tell it in
kvmppc_set_compat(). When using TCG, host CPU information usually
isn't even present.
While we're at it, we put the new implementation in a new file to make the
enormous translate_init.c a little smaller.
[1] Actually it doesn't even do anything in TCG, but it will if / when we
get to implementing compatibility mode logic at that level.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
stxvll: Store VSX Vector Left-justified with Length
Vector (8-bit elements) in BE/LE:
+---+---+---+---+---+---+---+---+---+---+---+---+---+---+--+--+
|“T”|“h”|“i”|“s”|“ ”|“i”|“s”|“ ”|“a”|“ ”|“T”|“E”|“S”|“T”|00|00|
+---+---+---+---+---+---+---+---+---+---+---+---+---+---+--+--+
Storing 14 bytes would result in the following Little/Big-endian storage:
+---+---+---+---+---+---+---+---+---+---+---+---+---+---+--+--+
|“T”|“h”|“i”|“s”|“ ”|“i”|“s”|“ ”|“a”|“ ”|“T”|“E”|“S”|“T”|FF|FF|
+---+---+---+---+---+---+---+---+---+---+---+---+---+---+--+--+
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
A function to check whether all digits of a given BCD number are valid is
presented here because more instructions will need to reuse the
same code.
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The structure and corresponding defines and functions need to be used
outside of fpu_helper.c as well.
Add u8, u16, u32 and Int128 to the structure.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The 'cpu_version' field in PowerPCCPU is badly named. It's named after the
'cpu-version' device tree property where it is advertised, but that meaning
may not be obvious in most places it appears.
Worse, it doesn't even really correspond to that device tree property. The
property contains either the processor's PVR, or, if the CPU is running in
a compatibility mode, a special "logical PVR" representing which mode.
Rename the cpu_version field, and a number of related variables to
compat_pvr to make this clearer.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Thomas Huth <thuth@redhat.com>
The pseries machine type is a bit unusual in that it runs a paravirtualized
guest. The guest expects to interact with a hypervisor, and qemu
emulates the functions of that hypervisor directly, rather than executing
hypervisor code within the emulated system.
To implement this in TCG, we need to intercept hypercall instructions and
direct them to the machine's hypercall handlers, rather than attempting to
perform a privilege change within TCG. This is controlled by a global
hook - cpu_ppc_hypercall.
This cleanup makes the handling a little cleaner and more extensible than
a single global variable. Instead, each CPU whose hypercalls are to be
intercepted has a pointer set to a QOM object implementing a new virtual
hypervisor interface. A method in that interface is called by TCG when it sees a
hypercall instruction. It's possible we may want to add other methods in
future.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
bcdsetsgn.: Decimal set sign. This instruction copies the register
value to the result register but adjusts the sign according to
the preferred sign value.
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
bcdcpsgn.: Decimal copy sign. Given two registers vra and vrb, it
copies the vra value with vrb sign to the result register vrt.
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
bcdctsq.: Decimal convert to signed quadword. It is possible to
convert packed decimal values to signed quadwords.
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
bcdcfsq.: Decimal convert from signed quadword. Values less than
-(10^31 - 1) or greater than 10^31 - 1 cannot be
represented in packed decimal format.
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
[dwg: Corrected constant which should be 10^16-1 but was 10^17-1]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
stxsd: Store VSX Scalar Dword
stxssp: Store VSX Scalar SP
Moreover, DQ-form and DS-form instructions share the same primary
opcode (0x3D). For DQ-form, bits 29:31 are used; for DS-form, bits 30:31
are used. A common routine is required to decode the primary opcode (0x3D)
into its DS-form/DQ-form instructions.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
lxsd: Load VSX Scalar Dword
lxssp: Load VSX Scalar Single
Moreover, DS-form instructions share the same primary opcode; bits
30:31 are used to decode the instruction. Use a common routine to decode
the primary opcode (0x39) for DS-form instructions and branch out depending
on bits 30:31.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
- xscmpodp & xscmpudp are missing flags reset.
- In xscmpodp, VXCC should be set only if VE is 0 for signalling NaN case
and VXCC should be set by explicitly checking for quiet NaN case.
- Comparison is being done only if the operands are not NaNs. However as
per ISA, it should be done even when operands are NaNs.
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Add a _BIT suffix to the existing CRF_[GT,LT,EQ,SO] bit-position macros and
introduce CRF_[GT,LT,EQ,SO] masks for usage without shifts in the code. This
simplifies the code.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
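A sketch of the naming scheme described; treat the exact bit positions as illustrative.

    /* Bit positions keep the _BIT suffix; shift-free masks are derived from them. */
    #define CRF_LT_BIT 3
    #define CRF_GT_BIT 2
    #define CRF_EQ_BIT 1
    #define CRF_SO_BIT 0
    #define CRF_LT (1 << CRF_LT_BIT)
    #define CRF_GT (1 << CRF_GT_BIT)
    #define CRF_EQ (1 << CRF_EQ_BIT)
    #define CRF_SO (1 << CRF_SO_BIT)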
Move instruction decode helpers to target-ppc/internal.h so that some
of these can be used from outside of translate.c. This movement also
helps to get rid of some duplicate helpers from target-ppc/fpu_helper.c.
Suggested-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Phil Dennis-Jordan <phil@philjordan.eu>
Message-Id: <1484921496-11257-4-git-send-email-phil@philjordan.eu>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This fixes timekeeping of x86-64 Darwin/OS X/macOS guests when using KVM.
Darwin/OS X/macOS for x86-64 uses the TSC for timekeeping; it normally calibrates this by querying various clock frequency scaling MSRs. Details depend on the exact CPU model detected. The local APIC timer frequency is extracted from (EFI) firmware.
This is problematic in the presence of virtualisation, as the MSRs in question are typically not handled by the hypervisor. VMWare (Fusion) advertises TSC and APIC frequency via a custom 0x40000010 CPUID leaf, in the eax and ebx registers respectively. This is documented at https://lwn.net/Articles/301888/ among other places.
Darwin/OS X/macOS looks for the generic 0x40000000 hypervisor leaf, and if this indicates via eax that leaf 0x40000010 might be available, that is in turn queried for the two frequencies.
This adds a CPU option "vmware-cpuid-freq" to enable the same behaviour when running QEMU with KVM acceleration, if the KVM TSC frequency can be determined and it is stable (invtsc or user-specified). The virtualised APIC bus cycle is hardcoded to 1GHz in KVM, so ebx of the CPUID leaf is also hardcoded to this value.
Signed-off-by: Phil Dennis-Jordan <phil@philjordan.eu>
Message-Id: <1484921496-11257-2-git-send-email-phil@philjordan.eu>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
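For illustration, a guest-side sketch of reading the frequency leaf described above (eax = TSC kHz, ebx = APIC bus kHz); this is not QEMU code.

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;
        /* Real code should first check the 0x40000000 hypervisor leaf to see
         * whether leaf 0x40000010 is advertised at all. */
        __cpuid(0x40000010, eax, ebx, ecx, edx);
        printf("TSC: %u kHz, APIC bus: %u kHz\n", eax, ebx);
        return 0;
    }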
This patch improves interrupt handling in record/replay mode.
Now the "interrupt" event is saved only when cc->cpu_exec_interrupt returns true.
This patch also adds a missing return to the cpu_exec_interrupt function.
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Message-Id: <20170124071708.4572.64023.stgit@PASHA-ISP>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
For M profile (unlike A profile) the reset value of R14 is specified
as 0xffffffff. (The rationale is that this is an illegal exception
return value, so if guest code tries to return to it it will result
in a helpful exception.)
Registers r0 to r12 and the flags are architecturally UNKNOWN on
reset, so we leave those at zero.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1485285380-10565-11-git-send-email-peter.maydell@linaro.org
For M profile CPUs, FAULTMASK should be 0 on reset, like PRIMASK.
QEMU stores FAULTMASK in the PSTATE F bit, so (as with PRIMASK in the
I bit) we have to clear these to undo the A profile default of 1.
Update the comment accordingly and move it so that it's closer to the
code it's referring to.
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1485285380-10565-10-git-send-email-peter.maydell@linaro.org
[PMM: rewrote commit message, moved comments]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For v7M attempts to access a nonexistent coprocessor are reported
differently from plain undefined instructions (as UsageFaults of type
NOCP rather than type UNDEFINSTR). Split them out into a new
EXCP_NOCP so we can report the FSR value correctly.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1485285380-10565-8-git-send-email-peter.maydell@linaro.org
When we take an exception for an undefined instruction, set the
appropriate CFSR bit.
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1485285380-10565-7-git-send-email-peter.maydell@linaro.org
[PMM: tweaked commit message, comment]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The CCR.STACKALIGN bit controls whether the CPU is supposed to force
8-byte alignment of the stack pointer on entry to the exception handler.
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
Message-id: 1485285380-10565-6-git-send-email-peter.maydell@linaro.org
[PMM: commit message and comment tweaks]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add the structure fields, VMState fields, reset code and macros for
the v7M system control registers CCR, CFSR, HFSR, DFSR, MMFAR and
BFAR.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1485285380-10565-4-git-send-email-peter.maydell@linaro.org
We only use the IS_M() macro in two places, and it's a bit of a
namespace grab to put in cpu.h. Drop it in favour of just explicitly
calling arm_feature() in the places where it was used.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1485285380-10565-2-git-send-email-peter.maydell@linaro.org
FAULTMASK must be cleared on return from all
exceptions other than NMI.
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1484937883-1068-7-git-send-email-peter.maydell@linaro.org
The v7m CONTROL register bit 1 is SPSEL, which indicates
the stack being used. We were storing this information
not in v7m.control but in the separate v7m.other_sp
structure field. Unfortunately, the code handling reads
of the CONTROL register didn't take account of this, and
so if SPSEL was updated by an exception entry or exit then
a subsequent guest read of CONTROL would get the wrong value.
Using a separate structure field doesn't really gain us
anything in efficiency, so drop this unnecessary complexity
in favour of simply storing all the bits in v7m.control.
This is a migration compatibility break for M profile
CPUs only.
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1484937883-1068-6-git-send-email-peter.maydell@linaro.org
[PMM: rewrote commit message;
use deposit32(); use FIELD to define constants for
masking and shifting of CONTROL register fields
]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
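A one-line sketch of the approach described, using QEMU's deposit32(); SPSEL as bit 1 comes from the commit text, and new_spsel is a hypothetical variable holding the bit to write.

    /* Keep SPSEL in v7m.control itself rather than in a separate field. */
    env->v7m.control = deposit32(env->v7m.control, 1 /* SPSEL */, 1, new_spsel);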
Give an explicit error and abort when a load
from the vector table fails. Architecturally this
should HardFault (which will then immediately
fail to load the HardFault vector and go into Lockup).
Since we don't model Lockup, just report this guest
error via cpu_abort(). This is more helpful than the
previous behaviour of reading a zero, which is the
address of the reset stack pointer and not a sensible
location to jump to.
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1484937883-1068-4-git-send-email-peter.maydell@linaro.org
[PMM: expanded commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For v7m we need to catch attempts to execute from special
addresses at 0xfffffff0 and above. Previously we did this
with the aid of a hacky special purpose lump of memory
in the address space and a check in translate.c for whether
we were translating code at those addresses.
We can implement this more cleanly using a CPU
unassigned access handler which throws the exception
if the unassigned access is for one of the special addresses.
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1484937883-1068-3-git-send-email-peter.maydell@linaro.org
[PMM:
* drop the deletion of the "don't interrupt if PC is magic"
code in arm_v7m_cpu_exec_interrupt() -- this is still
required
* don't generate an exception for unassigned accesses
which aren't to the magic address -- although doing
this is in theory correct in practice it will break
currently working guests which rely on the RAZ/WI
behaviour when they touch devices which we haven't
modelled.
* trigger EXCP_EXCEPTION_EXIT on is_exec, not !is_write
]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The MRS and MSR instruction handling has a number of flaws:
* unprivileged accesses should only be able to read
CONTROL and the xPSR subfields, and only write APSR
(others RAZ/WI)
* privileged access should not be able to write xPSR
subfields other than APSR
* accesses to unimplemented registers should log as
guest errors, not abort QEMU
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1484937883-1068-2-git-send-email-peter.maydell@linaro.org
[PMM: rewrote commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add support for emulating the Altera Nios II R1 architecture in QEMU.
This patch is based on previous work by Chris Wulff from 2012 and
updated to the latest mainline QEMU.
Signed-off-by: Marek Vasut <marex@denx.de>
Cc: Chris Wulff <crwulff@gmail.com>
Cc: Jeff Da Silva <jdasilva@altera.com>
Cc: Ley Foon Tan <lftan@altera.com>
Cc: Sandra Loosemore <sandra@codesourcery.com>
Cc: Yves Vandervennet <yvanderv@altera.com>
Cc: Alexander Graf <agraf@suse.de>
Message-Id: <20170118220146.489-3-marex@denx.de>
[rth: Remove tlb_flush from nios2_cpu_reset.]
Signed-off-by: Richard Henderson <rth@twiddle.net>
Merge remote-tracking branch 'remotes/dgilbert/tags/pull-migration-20170124b' into staging
Migration
1 My maintainer change
2 Jianjun's qtailq
3 Ashijeet's only-migratable
4 Zhanghailiang's re-active images
5 Pankaj's change name of migration thread
6 My PCI migration merge
7 Juan's debug to tracing
8 My tracing on save
# gpg: Signature made Tue 24 Jan 2017 18:39:35 GMT
# gpg: using RSA key 0x0516331EBC5BFDE7
# gpg: Good signature from "Dr. David Alan Gilbert (RH2) <dgilbert@redhat.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 45F5 C71B 4A0C B7FB 977A 9FA9 0516 331E BC5B FDE7
* remotes/dgilbert/tags/pull-migration-20170124b:
migration/tracing: Add tracing on save
migration: transform remaining DPRINTF into trace_
PCI/migration merge vmstate_pci_device and vmstate_pcie_device
migration: Change name of live migration thread
migration: re-active images while migration been canceled after inactive them
migration: Fail migration blocker for --only-migratable
migration: disallow migrate_add_blocker during migration
migration: Allow "device add" options to only add migratable devices
migration: Add a new option to enable only-migratable
block/vvfat: Remove the undesirable comment
migration: add error_report
tests/migration: Add test for QTAILQ migration
migration: migrate QTAILQ
migration: extend VMStateInfo
MAINTAINERS: Add myself as a migration submaintainer
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If a migration is already in progress and somebody attempts
to add a migration blocker, this should rightly fail.
Add an errp parameter and a retcode return value to migrate_add_blocker.
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Ashijeet Acharya <ashijeetacharya@gmail.com>
Message-Id: <1484566314-3987-5-git-send-email-ashijeetacharya@gmail.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Acked-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Merged with recent 'Allow invtsc migration' change
Current migration code cannot handle some data structures such as
QTAILQ in qemu/queue.h. Here we extend the signatures of put/get
in VMStateInfo so that customized handling is supported. put will now
return an int.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Jianjun Duan <duanj@linux.vnet.ibm.com>
Message-Id: <1484852453-12728-2-git-send-email-duanj@linux.vnet.ibm.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Merge remote-tracking branch 'remotes/cohuck/tags/s390x-20170124' into staging
Two s390x fixes: One for the kvm.c build failure, and one for a bug
that might cause random guest crashes with zeroed out pages on host
kernels with working cmma (< 4.6 and likely >= 4.10).
# gpg: Signature made Tue 24 Jan 2017 15:00:50 GMT
# gpg: using RSA key 0xDECF6B93C6F02FAF
# gpg: Good signature from "Cornelia Huck <huckc@linux.vnet.ibm.com>"
# gpg: aka "Cornelia Huck <cornelia.huck@de.ibm.com>"
# Primary key fingerprint: C3D0 D66D C362 4FF6 A8C0 18CE DECF 6B93 C6F0 2FAF
* remotes/cohuck/tags/s390x-20170124:
s390x/kvm: fix cmma reset for KVM
s390x/kvm: include hw_accel.h instead of kvm.h
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/ehabkost/tags/x86-and-machine-pull-request' into staging
x86, machine, numa queue (2017-01-23)
# gpg: Signature made Mon 23 Jan 2017 23:26:59 GMT
# gpg: using RSA key 0x2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6
* remotes/ehabkost/tags/x86-and-machine-pull-request:
kvm: Allow invtsc migration if tsc-khz is set explicitly
kvm: Simplify invtsc check
hw/core/null-machine: Add the possibility to instantiate a CPU and RAM
qemu-options: Rename variables on the -numa "cpus" option
MAINTAINERS: Add an entry for hw/core/null-machine.c
machine: Make possible_cpu_arch_ids() return const pointer
pc: don't return cpu pointer from pc_new_cpu() as it's not needed anymore
pc: cleanup: move smbios_set_cpuid() into pc_build_smbios()
arch_init: Remove unnecessary default_config_files table
vl: Ensure the numa_post_machine_init func in the appropriate location
i386: Return migration-safe field on query-cpu-definitions
i386: Remove AMD feature flag aliases from Opteron models
x86: add AVX512_VPOPCNTDQ features
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We must reset the CMMA states for normal memory (when not on mem path),
but the current code does the opposite. This was unnoticed for some time
as the kernel since 4.6 also had a bug which mostly disabled the paging
optimizations.
Fixes: 07059effd1 ("s390x/kvm: let the CPU model control CMM(A)")
Cc: qemu-stable@nongnu.org # v2.8
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Commit b394662 ("kvm: move cpu synchronization code") switched
to hw_accel.h instead of kvm.h, but missed s390x, resulting in
CC s390x-softmmu/target/s390x/kvm.o
/home/cohuck/git/qemu/target/s390x/kvm.c: In function ‘kvm_sclp_service_call’:
/home/cohuck/git/qemu/target/s390x/kvm.c:1034:5: error: implicit declaration of function ‘cpu_synchronize_state’ [-Werror=implicit-function-declaration]
cpu_synchronize_state(CPU(cpu));
^
/home/cohuck/git/qemu/target/s390x/kvm.c:1034:5: error: nested extern declaration of ‘cpu_synchronize_state’ [-Werror=nested-externs]
/home/cohuck/git/qemu/target/s390x/kvm.c: In function ‘sigp_initial_cpu_reset’:
/home/cohuck/git/qemu/target/s390x/kvm.c:1628:5: error: implicit declaration of function ‘cpu_synchronize_post_reset’ [-Werror=implicit-function-declaration]
cpu_synchronize_post_reset(cs);
^
/home/cohuck/git/qemu/target/s390x/kvm.c:1628:5: error: nested extern declaration of ‘cpu_synchronize_post_reset’ [-Werror=nested-externs]
/home/cohuck/git/qemu/target/s390x/kvm.c: In function ‘sigp_set_prefix’:
/home/cohuck/git/qemu/target/s390x/kvm.c:1665:5: error: implicit declaration of function ‘cpu_synchronize_post_init’ [-Werror=implicit-function-declaration]
cpu_synchronize_post_init(cs);
^
/home/cohuck/git/qemu/target/s390x/kvm.c:1665:5: error: nested extern declaration of ‘cpu_synchronize_post_init’ [-Werror=nested-externs]
cc1: all warnings being treated as errors
/home/cohuck/git/qemu/rules.mak:64: recipe for target 'target/s390x/kvm.o' failed
Fix this.
Fixes: b394662 ("kvm: move cpu synchronization code")
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Acked-by: Vincent Palatin <vpalatin@chromium.org>
We can safely allow a VM to be migrated with invtsc enabled if
tsc-khz is set explicitly, because:
* QEMU already refuses to start if it can't set the TSC frequency
to the configured value.
* Management software is already required to keep device
configuration (including CPU configuration) the same on
migration source and destination.
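A small standalone sketch of the resulting policy (names invented for illustration; this is not the actual target/i386/kvm.c logic):

    #include <stdbool.h>
    #include <stdio.h>

    static bool invtsc_enabled = true;
    static long user_tsc_khz = 2400000;     /* 0 means "not set explicitly" */

    /* invtsc alone no longer blocks migration; only invtsc without an
     * explicitly configured TSC frequency does. */
    static bool needs_migration_blocker(void)
    {
        return invtsc_enabled && user_tsc_khz == 0;
    }

    int main(void)
    {
        printf("add migration blocker: %s\n",
               needs_migration_blocker() ? "yes" : "no");
        return 0;
    }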
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20170108173234.25721-3-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Instead of searching the table we have just built, we can check
the env->features field directly.
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20170108173234.25721-2-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Return the migration-safe field on query-cpu-definitions. All CPU
models in x86 are migration-safe except "host".
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20170116181212.31565-1-ehabkost@redhat.com>
Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
When CPU vendor is set to AMD, the AMD feature alias bits on
CPUID[0x80000001].EDX are already automatically copied from CPUID[1].EDX
on x86_cpu_realizefn(). When CPU vendor is Intel, those bits are
reserved and should be zero. On either case, those bits shouldn't be set
in the CPU model table.
Commit 726a8ff686 removed those
bits from most CPU models, but the Opteron_* entries still have
them. Remove the alias bits from Opteron_* too.
Add an assert() to x86_register_cpudef_type() to ensure we don't
make the same mistake again.
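The guard amounts to something like the following standalone sketch; the mask value and function name here are placeholders, not the real cpu.c identifiers:

    #include <assert.h>
    #include <stdint.h>

    /* 'amd_alias_mask' stands for the CPUID[1].EDX bits that get mirrored
     * into CPUID[0x80000001].EDX for AMD; the real mask lives in QEMU. */
    static void check_cpu_model_entry(uint32_t feat_8000_0001_edx,
                                      uint32_t amd_alias_mask)
    {
        /* A CPU model table entry must not hard-code the mirrored bits. */
        assert((feat_8000_0001_edx & amd_alias_mask) == 0);
    }

    int main(void)
    {
        check_cpu_model_entry(1u << 29 /* e.g. LM */,
                              0x0183f3ffu /* placeholder mask */);
        return 0;
    }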
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20170113190057.6327-1-ehabkost@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
For Linux, page 0 is mapped as an execute-only gateway page. A gateway
page is marked by a special bit in the page table that allows a B,GATE insn
within that page to raise processor permissions. This is how system
calls are implemented for HPPA.
Rather than actually map anything here, or handle permissions at all,
implement the semantics of the actual linux syscall entry points.
Signed-off-by: Richard Henderson <rth@twiddle.net>
The HPPA cpu has a unique form of predicated execution in which
almost any instruction can set the PSW[N] (or "nullify") bit,
which suppresses execution (and even decoding) of the following
instruction. Execution of a nullified insn clears the PSW[N] bit.
This adds a generic framework for branching over nullified insns,
or for sufficiently simple insns, transforming the writeback of
the result to a conditional move. In the process, we want to be
able to represent PSW[N] as a TCG condition, which implies management
of the related tcg temps.
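A self-contained toy model of the PSW[N] semantics described above (illustrative only, not the TCG implementation): an instruction that finds N set is skipped, and being skipped clears N.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        const char *name;
        bool nullifies_next;   /* e.g. a taken conditional that sets PSW[N] */
    } ToyInsn;

    int main(void)
    {
        ToyInsn prog[] = {
            { "add",            false },
            { "comclr (taken)", true  },  /* nullifies the following insn */
            { "copy",           false },  /* skipped; clears N */
            { "ldw",            false },  /* executes normally again */
        };
        bool psw_n = false;

        for (unsigned i = 0; i < sizeof(prog) / sizeof(prog[0]); i++) {
            if (psw_n) {
                printf("%-16s nullified\n", prog[i].name);
                psw_n = false;        /* executing a nullified insn clears N */
                continue;
            }
            printf("%-16s executed\n", prog[i].name);
            psw_n = prog[i].nullifies_next;
        }
        return 0;
    }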
Signed-off-by: Richard Henderson <rth@twiddle.net>
This is just about the minimum required to enable compilation
without actually executing any instructions. This contains the
HPPACPU structure and the required callbacks, the gdbstub, the
basic translation loop, and a translate_one function that always
results in an illegal instruction.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Merge remote-tracking branch 'remotes/cohuck/tags/s390x-20170120-v2' into staging
First set of s390x patches for 2.9:
- rework of the zpci code, giving us proper multibus support
- introduction of the 2.9 machine
- fixes and improvements
# gpg: Signature made Fri 20 Jan 2017 09:11:58 GMT
# gpg: using RSA key 0xDECF6B93C6F02FAF
# gpg: Good signature from "Cornelia Huck <huckc@linux.vnet.ibm.com>"
# gpg: aka "Cornelia Huck <cornelia.huck@de.ibm.com>"
# Primary key fingerprint: C3D0 D66D C362 4FF6 A8C0 18CE DECF 6B93 C6F0 2FAF
* remotes/cohuck/tags/s390x-20170120-v2:
virtio-ccw: fix ring sizing
s390x/pci: merge msix init functions
s390x/pci: handle PCIBridge bus number
s390x/pci: use hashtable to look up zpci via fh
s390x/pci: PCI multibus bridge handling
s390x/pci: optimize calling s390_get_phb()
s390x/pci: change the device array to a list
s390x/pci: dynamically allocate iommu
s390x/pci: make S390PCIIOMMU inherit Object
s390x/kvm: use kvm_gsi_routing_enabled in flic
s390x: add compat machine for 2.9
s390x: remove double compat statement
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Enable the ARM_FEATURE_EL2 bit on Cortex-A53 and
Cortex-A57, since this is all now sufficiently implemented
to work with the GICv3. We provide the usual CPU property
to disable it for backwards compatibility with the older
virt boards.
In this commit, we disable the EL2 feature on the
virt and ZynqMP boards, so there is no overall effect.
Another commit will expose a board-level property to
allow the user to enable EL2.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Message-id: 1483977924-14522-18-git-send-email-peter.maydell@linaro.org
The PSCI spec states that a CPU_ON call should cause the new
CPU to be started in the highest implemented Non-secure
exception level. We were incorrectly starting it at the
exception level of the caller, which happens to be correct
if EL2 is not implemented. Implement the correct logic
as described in the PSCI 1.0 spec section 6.4:
* if EL2 exists and SCR_EL3.HCE is set: start in EL2
* otherwise start in EL1
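The rule boils down to something like this minimal sketch of the spec text above (not QEMU's own PSCI emulation code; SCR_HCE is bit 8 per the architecture):

    #include <stdbool.h>
    #include <stdio.h>

    #define SCR_HCE (1u << 8)   /* SCR_EL3.HCE: Hypervisor Call enable */

    /* Exception level a CPU brought up via PSCI CPU_ON should start in. */
    static int cpu_on_target_el(bool have_el2, unsigned int scr_el3)
    {
        return (have_el2 && (scr_el3 & SCR_HCE)) ? 2 : 1;
    }

    int main(void)
    {
        printf("EL2 present, HCE set -> EL%d\n", cpu_on_target_el(true, SCR_HCE));
        printf("no EL2               -> EL%d\n", cpu_on_target_el(false, 0));
        return 0;
    }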
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Tested-by: Andrew Jones <drjones@redhat.com>
Message-id: 1483977924-14522-17-git-send-email-peter.maydell@linaro.org
Add fields to the ARMCPU structure to allow CPU classes to
specify the configurable aspects of their GIC CPU interface.
In particular, the virtualization support allows different
values for number of list registers, priority bits and
preemption bits.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Alistair Francis <alistair.francis@xilinx.com>
Message-id: 1483977924-14522-6-git-send-email-peter.maydell@linaro.org
The GICv3 support for virtualization includes an outbound
maintenance interrupt signal which is asserted when the
CPU interface wants to signal to the hypervisor that it
needs attention. Expose this as an outbound GPIO line from
the CPU object which can be wired up as a physical interrupt
line by the board code (as we do already for the CPU timers).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Message-id: 1483977924-14522-4-git-send-email-peter.maydell@linaro.org
The DBGVCR_EL2 system register is needed to run a 32-bit
EL1 guest under a Linux EL2 64-bit hypervisor. Its only
purpose is to provide AArch64 with access to the state of
the DBGVCR AArch32 register. Since we only have a dummy
DBGVCR, implement a corresponding dummy DBGVCR32_EL2.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
To run a VM in 32-bit EL1 our AArch32 interrupt handling code
needs to be able to cope with VIRQ and VFIQ exceptions.
These behave like IRQ and FIQ except that we don't need to try
to route them to Monitor mode.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
A function may recursively call device search functions or may call
several different device search functions. Passing the S390pciState to
the search functions as an argument, instead of looking it up inside
them, reduces the number of calls to s390_get_phb().
Signed-off-by: Yi Min Zhao <zyimin@linux.vnet.ibm.com>
Reviewed-by: Pierre Morel <pmorel@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Re-add the MacOSX/Darwin support:
Use the Intel HAX kernel-based hardware acceleration module
(similar to KVM on Linux).
Based on the original "target/i386: Add Intel HAX to android emulator" patch
from David Chou <david.j.chou@intel.com> from emu-2.2-release branch in
the external/qemu-android repository.
Signed-off-by: Vincent Palatin <vpalatin@chromium.org>
Message-Id: <81b85c3032da902e73e77302af508b4b1a7c0ead.1484045952.git.vpalatin@chromium.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use the Intel HAX kernel-based hardware acceleration module for
Windows (similar to KVM on Linux).
Based on the "target/i386: Add Intel HAX to android emulator" patch
from David Chou <david.j.chou@intel.com>
Signed-off-by: Vincent Palatin <vpalatin@chromium.org>
Message-Id: <7b9cae28a0c379ab459c7a8545c9a39762bd394f.1484045952.git.vpalatin@chromium.org>
[Drop hax_populate_ram stub. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
That's a forward port of the core HAX interface code from the
emu-2.2-release branch in the external/qemu-android repository as used by
the Android emulator.
The original commit was "target/i386: Add Intel HAX to android emulator"
saying:
"""
Backport of 2b3098ff27bab079caab9b46b58546b5036f5c0c
from studio-1.4-dev into emu-master-dev
Intel HAX (hardware acceleration) will enhance android emulator performance
in Windows and Mac OS X in the systems powered by Intel processors with
"Intel Hardware Accelerated Execution Manager" package installed when
user runs android emulator with Intel target.
Signed-off-by: David Chou <david.j.chou@intel.com>
"""
It has been modified to build and run along with the current code base.
The formatting has been fixed to go through scripts/checkpatch.pl,
and the DPRINTF macros have been updated to get the instantiations checked by
the compiler.
The FPU registers saving/restoring has been updated to match the current
QEMU registers layout.
The implementation has been simplified by doing the following modifications:
- removing the code for supporting the hardware without Unrestricted Guest (UG)
mode (including all the code to fallback on TCG emulation).
- not including the Darwin support (which is not yet debugged/tested).
- simplifying the initialization by removing the leftovers from the Android
specific code, then trimming down the remaining logic.
- removing the unused MemoryListener callbacks.
Signed-off-by: Vincent Palatin <vpalatin@chromium.org>
Message-Id: <e1023837f8d0e4c470f6c4a3bf643971b2bca5be.1484045952.git.vpalatin@chromium.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Move the generic cpu_synchronize_ functions to the common hw_accel.h header,
in order to prepare for the addition of a second hardware accelerator.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Vincent Palatin <vpalatin@chromium.org>
Message-Id: <f5c3cffe8d520011df1c2e5437bb814989b48332.1484045952.git.vpalatin@chromium.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
These are not needed since linux-headers/ provides up-to-date definitions.
The constants are in linux-headers/asm-powerpc/kvm.h.
The sole users, hw/intc/xics_kvm.c and target/ppc/kvm.c, include asm/kvm.h
via sysemu/kvm.h->linux/kvm.h.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In OpenSPARC T1+, TWINX ASIs used in store instructions are aliased
with Block Initializing Store ASIs.
"UltraSPARC T1 Supplement Draft D2.1, 14 May 2007" describes them
in the chapter "5.9 Block Initializing Store ASIs"
Integer stores of all sizes are allowed with these ASIs.
Signed-off-by: Artyom Tarasenko <atar4qemu@gmail.com>
According to chapter 13.3 of the
UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005,
only the sun4u format is available for data-access loads.
Store UA2005 entries in the sun4u format to simplify processing.
Signed-off-by: Artyom Tarasenko <atar4qemu@gmail.com>
Implement the behavior described in the chapter 13.9.11 of
UltraSPARC T1™ Supplement to the UltraSPARC Architecture 2005:
"If a TLB Data-In replacement is attempted with all TLB
entries locked and valid, the last TLB entry (entry 63) is
replaced."
Signed-off-by: Artyom Tarasenko <atar4qemu@gmail.com>
Please note that QEMU doesn't implement Real->Physical address
translation. The "Real Address" is always the "Physical Address".
Suggested-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Artyom Tarasenko <atar4qemu@gmail.com>
According to UA2005, 9.3.3 "Address Space Identifiers",
"In hyperprivileged mode, all instruction fetches and loads and stores with implicit
ASIs use a physical address, regardless of the value of TL".
Signed-off-by: Artyom Tarasenko <atar4qemu@gmail.com>
As described in Chapter 5.7.6 of the UltraSPARC Architecture 2005,
outstanding disrupting exceptions that are destined for privileged mode can only
cause a trap when the virtual processor is in nonprivileged or privileged mode and
PSTATE.ie = 1. At all other times, they are held pending.
Signed-off-by: Artyom Tarasenko <atar4qemu@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Use explicit register pointers while accessing D/I-MMU registers.
Call cpu_unassigned_access on access to missing registers.
Signed-off-by: Artyom Tarasenko <atar4qemu@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
while IMMU/DMMU is disabled
- ignore MMU faults in hypervisor mode or if the CPU doesn't have a hypervisor
- signal TT_INSN_REAL_TRANSLATION_MISS/TT_DATA_REAL_TRANSLATION_MISS otherwise
Signed-off-by: Artyom Tarasenko <atar4qemu@gmail.com>
RER and WER are privileged instructions for accessing external
registers. The external register address space is local to the processor core.
There are no alignment requirements; addressable units are 32-bit wide
registers.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Merge remote-tracking branch 'remotes/stsquad/tags/pull-tcg-common-tlb-reset-20170113-r1' into staging
This is the same as the v3 posted except a re-base and a few extra signoffs
# gpg: Signature made Fri 13 Jan 2017 14:26:46 GMT
# gpg: using RSA key 0xFBD0DB095A9E2A44
# gpg: Good signature from "Alex Bennée (Master Work Key) <alex.bennee@linaro.org>"
# Primary key fingerprint: 6685 AE99 E751 67BC AFC8 DF35 FBD0 DB09 5A9E 2A44
* remotes/stsquad/tags/pull-tcg-common-tlb-reset-20170113-r1:
cputlb: drop flush_global flag from tlb_flush
cpu_common_reset: wrap TCG specific code in tcg_enabled()
qom/cpu: move tlb_flush to cpu_common_reset
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the useless is_external argument. Since the iohandler
AioContext is never used for block devices, aio_disable_external
is never called on it. This lets us remove stubs/iohandler.c.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Configuration overlay does not explicitly say whether there are ICACHE
and DCACHE in the core. Current code uses XCHAL_[ID]CACHE_WAYS to detect
if corresponding cache option is enabled, but that's not correct: on
cores without cache these macros are defined as 1, not as 0.
Check XCHAL_[ID]CACHE_SIZE instead.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
There's no point in continuing translating guest instructions once an
unconditional exception is thrown.
There's also no point in updating pc before any instruction is
translated; don't do it.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Delimit each instruction that may access timers or IRQ state with
qemu_io_start/qemu_io_end, so that qemu-system-xtensa could be run with
-icount option.
Raise EXCP_YIELD after CCOMPARE reprogramming to let tcg_cpu_exec
recalculate how long this CPU is allowed to run.
RSR now may need to terminate TB, but it can't be done in RSR handler
because the same handler is used for XSR together with WSR handler, which
may also need to terminate TB. Change RSR and WSR handlers return type
to bool indicating whether TB termination is needed (RSR) or has been
done (WSR), and add TB termination after RSR/WSR dispatcher call.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Xtensa cores may have a register (CCOUNT) that counts core clock cycles.
They may also have a number of registers (CCOMPAREx); when the CCOUNT value
passes the value of CCOMPAREx, timer interrupt x is raised.
Currently the xtensa target counts the number of completed instructions and
assumes that for CCOUNT one instruction takes one cycle to complete.
It calls a helper function to update the CCOUNT register at every TB end and
raise timer interrupts. This scheme works very predictably and doesn't
have noticeable performance impact, but it is hard to use with multiple
synchronized processors, especially with coming MTTCG.
Derive CCOUNT from the virtual simulation time, QEMU_CLOCK_VIRTUAL.
Use native QEMU timers for CCOMPARE timers, one timer for each register.
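The arithmetic involved looks roughly like the following simplified, overflow-naive sketch with invented names (the real code uses QEMU_CLOCK_VIRTUAL and QEMU timers):

    #include <stdint.h>
    #include <stdio.h>

    #define NS_PER_SECOND 1000000000ull

    /* CCOUNT derived from virtual time instead of an instruction counter. */
    static uint32_t ccount_from_time(uint64_t now_ns, uint64_t freq_hz)
    {
        return (uint32_t)(now_ns * freq_hz / NS_PER_SECOND);
    }

    /* Virtual-time deadline at which CCOUNT will reach a CCOMPARE value;
     * the subtraction wraps modulo 2^32 just like the hardware counter. */
    static uint64_t ccompare_deadline_ns(uint64_t now_ns, uint64_t freq_hz,
                                         uint32_t ccompare)
    {
        uint32_t delta = ccompare - ccount_from_time(now_ns, freq_hz);
        return now_ns + (uint64_t)delta * NS_PER_SECOND / freq_hz;
    }

    int main(void)
    {
        uint64_t now = 123456789, freq = 40000000;    /* 40 MHz core clock */
        printf("ccount=%u deadline=%llu ns\n",
               ccount_from_time(now, freq),
               (unsigned long long)ccompare_deadline_ns(now, freq, 1000000));
        return 0;
    }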
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
The RUNSTALL signal stalls core execution while it is applied. It is widely
used in multicore configurations to control activity of additional
cores.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Xtensa cores may have two distinct addresses for the static vectors
group. Provide a function to select one of them.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
On 680x0 family only.
Address Register indirect With postincrement:
When using the stack pointer (A7) with byte size data, the register
is incremented by two.
Address Register indirect With predecrement:
When using the stack pointer (A7) with byte size data, the register
is decremented by two.
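In other words (a tiny illustrative sketch, not the translator code): the step applied to An for byte-sized (An)+ and -(An) accesses is 2 instead of 1 whenever An is A7, keeping the stack pointer word-aligned.

    #include <stdio.h>

    #define REG_A7 7

    static int autoinc_step(int an, int opsize_bytes)
    {
        if (opsize_bytes == 1 && an == REG_A7) {
            return 2;      /* byte push/pop via the stack pointer moves by 2 */
        }
        return opsize_bytes;
    }

    int main(void)
    {
        printf("byte via A3: %d\n", autoinc_step(3, 1));       /* 1 */
        printf("byte via A7: %d\n", autoinc_step(REG_A7, 1));  /* 2 */
        return 0;
    }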
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Thomas Huth <huth@tuxfamily.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1484332593-16782-6-git-send-email-laurent@vivier.eu>
In these cases we must update the address register after
the operation.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1484332593-16782-4-git-send-email-laurent@vivier.eu>
gen_flush_flags() unconditionally sets cc_op_synced to 1
and s->cc_op to CC_OP_FLAGS, whereas env->cc_op can be set
to something else by a previous tcg fragment.
We fix that by not setting cc_op_synced to 1
(except for gen_helper_flush_flags(), which updates env->cc_op).
FIX: https://github.com/vivier/qemu-m68k/issues/19
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1484332593-16782-3-git-send-email-laurent@vivier.eu>
M680x0 bit operations with an immediate value use 9 bits of the 16-bit
value, while ColdFire ones use only 8 bits.
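Put another way, a sketch of the masking only (the instruction names are the usual m68k bit ops; this is not the translator code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Bit number taken from the 16-bit extension word of BTST/BCHG/BCLR/BSET #imm. */
    static unsigned bitop_bitnum(uint16_t ext, bool is_coldfire)
    {
        return ext & (is_coldfire ? 0xff : 0x1ff);   /* 8 vs 9 significant bits */
    }

    int main(void)
    {
        printf("680x0:    0x%x\n", bitop_bitnum(0x0123, false));  /* 0x123 */
        printf("ColdFire: 0x%x\n", bitop_bitnum(0x0123, true));   /* 0x23  */
        return 0;
    }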
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1484332593-16782-2-git-send-email-laurent@vivier.eu>
We have never had the concept of global TLB entries which would avoid
the flush so we never actually use this flag. Drop it and make clear
that tlb_flush is the sledge-hammer it has always been.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
[DG: ppc portions]
Acked-by: David Gibson <david@gibson.dropbear.id.au>
It is a common thing amongst the various cpu reset functions to want to
flush the SoftMMU's TLB entries. This is done either by calling
tlb_flush directly or by way of a general memset of the CPU
structure (sometimes both).
This moves the tlb_flush call to the common reset function and
additionally ensures it is only done for the CONFIG_SOFTMMU case and
when tcg is enabled.
In some target cases we add an empty end_of_reset_fields structure to the
target vCPU structure so we have a clear end point for any memset which
resets values in the structure before CPU_COMMON (where the TLB
structures are).
While this is a nice clean-up in general it is also a precursor for
changes coming to cputlb for MTTCG where the clearing of entries
can't be done arbitrarily across vCPUs. Currently the cpu_reset
function is usually called from the context of another vCPU as the
architectural power up sequence is run. By using the cputlb API
functions we can ensure the right behaviour in the future.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
The new typename attribute on query-cpu-definitions will be used
to help management software use device-list-properties to check
which properties can be set using -cpu or -global for the CPU
model.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <1479320499-29818-1-git-send-email-ehabkost@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
In commit c52ab08aee,
the patch snippet for the "syscall" insn got applied to "iret".
Signed-off-by: Doug Evans <dje@google.com>
Message-Id: <f403045cde4049058c05446d5c04@google.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
If D[15] is != sign_ext(const4) then PC will be set to (PC +
zero_ext(disp4 + 16)).
[BK: fixed style errors]
Signed-off-by: Peer Adelt <peer.adelt@c-lab.de>
Message-Id: <1465314555-11501-5-git-send-email-peer.adelt@c-lab.de>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Puts the content of data register D[a] into E[c][63:32] and the
content of data register D[b] into E[c][31:0].
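Put differently, the semantics are just a 64-bit pack (an illustrative sketch, not the helper itself):

    #include <stdint.h>
    #include <stdio.h>

    /* E[c] = D[a]:D[b] -- D[a] in the high half, D[b] in the low half. */
    static uint64_t pack_e_reg(uint32_t da, uint32_t db)
    {
        return ((uint64_t)da << 32) | db;
    }

    int main(void)
    {
        printf("0x%016llx\n",
               (unsigned long long)pack_e_reg(0xdeadbeefu, 0x12345678u));
        return 0;
    }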
[BK: fix style error]
[BK: Allocate temporaries only when needed]
Signed-off-by: Peer Adelt <peer.adelt@c-lab.de>
Message-Id: <1465314555-11501-4-git-send-email-peer.adelt@c-lab.de>
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Multiplies D[a] and D[b] and adds/subtracts the result to/from D[d].
The result is put in D[c]. All operands are floating-point numbers.
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Converts a 32-bit floating point number to an unsigned int. The
result is rounded towards zero.
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Use the new primitives for RLWINM and RLDICL.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Richard Henderson <rth@twiddle.net>
A couple of places where it was easy to identify a right-shift
followed by an extract or and-with-immediate, and the obvious
sign-extract from a high byte register.
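The equivalences being relied on, as a standalone sketch (the extract32/sextract32 functions below are local re-definitions for illustration of what the extract/sextract helpers compute):

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Field extract: (x >> pos) & mask(len), done as one operation. */
    static uint32_t extract32(uint32_t x, int pos, int len)
    {
        return (x >> pos) & (~0u >> (32 - len));
    }

    /* Same field, but sign-extended from its top bit. */
    static int32_t sextract32(uint32_t x, int pos, int len)
    {
        return (int32_t)(x << (32 - len - pos)) >> (32 - len);
    }

    int main(void)
    {
        uint32_t x = 0xdeadbeef;
        assert(extract32(x, 8, 8) == ((x >> 8) & 0xff));
        assert(sextract32(x, 8, 8) == (int8_t)((x >> 8) & 0xff));
        printf("extract=0x%x sextract=%d\n",
               extract32(x, 8, 8), sextract32(x, 8, 8));
        return 0;
    }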
Acked-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This is a cleanup patch. It adds calls to tcg_temp_free()
where they are missing.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Also manage word and byte operands and fix the computation of
overflow in the case of M68000 arithmetic shifts.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1478699171-10637-4-git-send-email-rth@twiddle.net>
Report this properly via exception and, importantly, allow
the disassembler the chance to tell us what insn is not handled.
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1478699171-10637-3-git-send-email-rth@twiddle.net>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
680x0 movem can load/store words and long words and can use more
addressing modes. ColdFire can only use long words with (Ax) and
(d16,Ax) addressing modes.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1478699171-10637-2-git-send-email-rth@twiddle.net>
Implement CAS using cmpxchg.
Implement CAS2 using helper and either cmpxchg when
the 32bit addresses are consecutive, or with
parallel_cpus+cpu_loop_exit_atomic() otherwise.
Suggested-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Update helper to set the throwing location in case of div-by-0.
Cleanup divX.w and add quad word variants of divX.l.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
[laurent: modified to clear Z on overflow, as found with risu]
Provide gen_lea_mode and gen_ea_mode, where the mode can be
specified manually, rather than taken from the instruction.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1478206203-4606-3-git-send-email-rth@twiddle.net>
ARM1176 CPUs have TrustZone support and can use the Vector Base
Address Register, but currently QEMU only adds VBAR support to ARMv7
CPUs. Fix this by adding a new feature ARM_FEATURE_VBAR which can be used
for ARMv7 and ARM1176 CPUs.
The VBAR feature is always set for ARMv7 because some legacy boards
require it even if this is not architecturally correct.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-id: 1481810970-9692-1-git-send-email-clg@kaod.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We already log exception entry; add logging of the AArch64 exception
return path as well.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
We add s->be_data within do_vec_ld/st. Adding it here means that
we have the wrong bits set in SIZE for a big-endian host, leading
to g_assert_not_reached in write_vec_element and read_vec_element.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1481085020-2614-3-git-send-email-rth@twiddle.net
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Since CPUARMState.vfp.regs is not 16 byte aligned, the ^ 8 fixup used
for a big-endian host doesn't do what's intended. Fix this by adding
in the vfp.regs offset after computing the inter-register offset.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1481085020-2614-2-git-send-email-rth@twiddle.net
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The value of the MVFR1 (Media and VFP Feature Register 1) register for
the Cortex-A8 appears to be incorrect (according to the TRM, DDI0344K),
with the "full denormal arithmetic" and "propagation of NaN" fields
holding both 0 instead of both 1.
I had a go tracing the history of the use of this value, and it seems
it's always just been wrong in QEMU: maybe it was derived from early
documentation, or guessed based on the use of a "VFP Lite" implementation
in the Cortex-A8.
Depending on the startup/early-boot code in use, this can manifest as
failure to perform denormal arithmetic properly: in our case, selecting
a Cortex-A8 CPU when using QEMU as an instruction-set simulator for
bare-metal GCC testing caused tests using denormal arithmetic to
fail. Problems might be masked (or not occur) when using a full OS kernel
with suitable trap handlers (I'm not sure).
Signed-off-by: Julian Brown <julian@codesourcery.com>
Message-id: 1481130858-31767-1-git-send-email-julian@codesourcery.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The new paging mode is an extension of IA32e mode with an additional page
table level.
It brings support for a 57-bit virtual address space (128 PB) and a 52-bit
physical address space (4 PB).
The structure of the new page table level is identical to pml4.
The feature is enumerated with CPUID.(EAX=07H, ECX=0):ECX[bit 16].
CR4.LA57[bit 12] needs to be set when paging is enabled to activate 5-level
paging mode.
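The quoted sizes follow directly from the widths; a quick standalone check (arithmetic only, not QEMU code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const uint64_t pib = 1ull << 50;   /* one pebibyte */
        int va_bits = 5 * 9 + 12;          /* five 9-bit levels + 12-bit page offset = 57 */
        printf("virtual:  %d bits = %llu PB\n", va_bits,
               (unsigned long long)((1ull << va_bits) / pib));   /* 128 */
        printf("physical: 52 bits = %llu PB\n",
               (unsigned long long)((1ull << 52) / pib));        /* 4 */
        return 0;
    }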
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Message-Id: <20161215001305.146807-1-kirill.shutemov@linux.intel.com>
[Drop changes to target-i386/translate.c. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The syscall and sysret instructions behave a bit differently:
TF is checked after the instruction completes.
This allows the o/s to disable #DB at a syscall by adding TF to FMASK.
And then when the sysret is executed the #DB is taken "as if" the
syscall insn just completed.
Signed-off-by: Doug Evans <dje@google.com>
Message-Id: <94eb2c0bfa1c6a9fec0543057483@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Check the KVM_CAP_ADJUST_CLOCK capability for KVM_CLOCK_TSC_STABLE, which
indicates that KVM_GET_CLOCK returns a value as seen by the guest at
that moment.
For new machine types, use this value rather than reading
from guest memory.
This reduces kvmclock difference on migration from 5s to 0.1s
(when max_downtime == 5s).
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Message-Id: <20161121105052.598267440@redhat.com>
[Add comment explaining what is going on. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The remote protocol can't handle flipping back and forth
between 32-bit and 64-bit regs. To compensate, pretend we are still
on a 64-bit cpu even when in 32-bit mode.
Signed-off-by: Doug Evans <dje@google.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <001a113dca8274572005406e03c3@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We've currently got 18 architectures in QEMU, and thus 18 target-xxx
folders in the root folder of the QEMU source tree. More architectures
(e.g. RISC-V, AVR) are likely to be included soon, too, so the main
folder of the QEMU sources slowly gets quite overcrowded with the
target-xxx folders.
To disburden the main folder a little bit, let's move the target-xxx
folders into a dedicated target/ folder, so that target-xxx/ simply
becomes target/xxx/ instead.
Acked-by: Laurent Vivier <laurent@vivier.eu> [m68k part]
Acked-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de> [tricore part]
Acked-by: Michael Walle <michael@walle.cc> [lm32 part]
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com> [s390x part]
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> [s390x part]
Acked-by: Eduardo Habkost <ehabkost@redhat.com> [i386 part]
Acked-by: Artyom Tarasenko <atar4qemu@gmail.com> [sparc part]
Acked-by: Richard Henderson <rth@twiddle.net> [alpha part]
Acked-by: Max Filippov <jcmvbkbc@gmail.com> [xtensa part]
Reviewed-by: David Gibson <david@gibson.dropbear.id.au> [ppc part]
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> [cris & µblaze part]
Acked-by: Guan Xuetao <gxt@mprc.pku.edu.cn> [unicore32 part]
Signed-off-by: Thomas Huth <thuth@redhat.com>