Compute which register bank to use once at the start of translation.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170718200255.31647-14-rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
We were treating FREG as an index and REG as a TCGv.
Making FREG return a TCGv is both less confusing and
a step toward cleaner banking of cpu_fregs.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170718200255.31647-12-rth@twiddle.net>
[aurel32: fix whitespace issues]
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Compute which register bank to use once at the start of translation.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170718200255.31647-11-rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Many of the sequences produced by gcc or glibc can be translated as
host atomic operations, which saves the need to acquire the exclusive
lock.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170718200255.31647-8-rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
For uniprocessors, SH4 uses optimistic restartable atomic sequences.
Upon an interrupt, a real kernel would simply notice magic values in
the registers and reset the PC to the start of the sequence.
For QEMU, we cannot do this in quite the same way. Instead, we notice
the normal start of such a sequence (mov #-x,r15), and start a new TB
that can be executed under cpu_exec_step_atomic.
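For illustration only, a minimal sketch (not QEMU's actual code) of what
recognizing that sequence start looks like: on SH, "mov #imm,Rn" is encoded
as 1110nnnniiiiiiii, so a negative immediate moved into r15 can be detected
like this (the helper name is hypothetical):

    #include <stdbool.h>
    #include <stdint.h>

    static bool insn_starts_gusa_sequence(uint16_t insn)
    {
        return (insn & 0xff00) == 0xef00    /* mov #imm,r15 */
            && (insn & 0x0080) != 0;        /* 8-bit immediate is negative */
    }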
Reported-by: Bruno Haible <bruno@clisp.org>
LP: https://bugs.launchpad.net/bugs/1701971
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170718200255.31647-7-rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Don't leave an unused bit after DELAY_SLOT_MASK.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170718200255.31647-6-rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
If we mask off any out-of-band bits before we assign to the
variable, then we don't need to clean it up when reading.
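As a generic illustration of the idea (the names below are made up, not the
actual SH4 fields):

    #include <stdint.h>

    #define FLAGS_MASK 0x0000ffffu    /* illustrative: the meaningful bits */

    static uint32_t stored_flags;

    static void set_flags(uint32_t raw)
    {
        stored_flags = raw & FLAGS_MASK;   /* clean once, on write... */
    }

    static uint32_t get_flags(void)
    {
        return stored_flags;               /* ...so reads need no cleanup */
    }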
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170718200255.31647-5-rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
We'll be putting more things into this bitmask soon.
Let's have a name that covers all possible uses.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170718200255.31647-4-rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
We can fold 3 different tests within the decode loop
into a more accurate computation of max_insns to start.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170718200255.31647-3-rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Since the T bit of the SR register is mapped using a TCG global,
it's better to return the value through TCG than to write it directly.
This allows the helpers to be declared with the TCG_CALL_NO_WG flag.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170702202814.27793-5-aurelien@aurel32.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
There is no need to use a helper to flip one bit; just use a TCG xor
instruction instead.
Message-Id: <20170702202814.27793-5-aurelien@aurel32.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
The floating-point status/control register contains cause and flag
bits. The cause bits are set to 0 before executing the instruction,
while the flag bits hold the status of the exception generated after
the field was last cleared.
Message-Id: <20170702202814.27793-4-aurelien@aurel32.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
In case of an unordered compare, the fcmp instructions should either
trigger an invalid exception (if enabled) or set T=0. The existing code
left T unchanged.
LP: https://bugs.launchpad.net/qemu/+bug/1701821
Reported-by: Bruno Haible <bruno@clisp.org>
Message-Id: <20170702202814.27793-3-aurelien@aurel32.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
The SH4 manual is not fully clear about this, but real hardware does not
check the PR bit, which selects between single and double precision, for
the fabs instruction. This is probably what is meant by
"Same operation is performed regardless of precision."
Remove the check, and at the same time use a TCG instruction instead of
a helper to clear one bit.
LP: https://bugs.launchpad.net/qemu/+bug/1701821
Reported-by: Bruno Haible <bruno@clisp.org>
Message-Id: <20170702202814.27793-2-aurelien@aurel32.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Merge remote-tracking branch 'remotes/borntraeger/tags/s390x-20170718' into staging
s390: add z14 cpu model
- add a CPU model for the IBM z14 which was announced on July 17th 2017
- update linux headers to 4.13-rc0 to get a fix for an ioctl definition
# gpg: Signature made Tue 18 Jul 2017 09:56:24 BST
# gpg: using RSA key 0x117BBC80B5A61C7C
# gpg: Good signature from "Christian Borntraeger (IBM) <borntraeger@de.ibm.com>"
# Primary key fingerprint: F922 9381 A334 08F9 DBAB FBCA 117B BC80 B5A6 1C7C
* remotes/borntraeger/tags/s390x-20170718:
s390x/cpumodel: z14 cpu models
linux header sync against v4.13-rc1
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/ehabkost/tags/x86-and-machine-pull-request' into staging
x86 and machine queue, 2017-07-17
# gpg: Signature made Mon 17 Jul 2017 19:46:14 BST
# gpg: using RSA key 0x2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6
* remotes/ehabkost/tags/x86-and-machine-pull-request:
qmp: Include parent type on 'qom-list-types' output
qmp: Include 'abstract' field on 'qom-list-types' output
tests: Simplify abstract-interfaces check with a helper
i386: add Skylake-Server cpu model
i386: Update comment about XSAVES on Skylake-Client
i386: expose "TCGTCGTCGTCG" in the 0x40000000 CPUID leaf
fw_cfg: move QOM type defines and fw_cfg types into fw_cfg.h
fw_cfg: move qdev_init_nofail() from fw_cfg_init1() to callers
fw_cfg: switch fw_cfg_find() to locate the fw_cfg device by type rather than path
qom: Fix ambiguous path detection when ambiguous=NULL
Revert "machine: Convert abstract typename on compat_props to subclass names"
test-qdev-global-props: Test global property ordering
qdev: fix the order compat and global properties are applied
tests: Test case for object_resolve_path*()
device-crash-test: Fix regexp on whitelist
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20170717' into staging
target-arm queue:
* new model of the ARM MPS2/MPS2+ FPGA based development board
* clean up DISAS_* exit conditions and fix various regressions
since commits e75449a3468a6b28c7b5 (in particular including
ones which broke OP-TEE guests)
* make Cortex-M3 and M4 correctly default to 8 PMSA regions
# gpg: Signature made Mon 17 Jul 2017 13:43:45 BST
# gpg: using RSA key 0x3C2525ED14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"
# gpg: aka "Peter Maydell <pmaydell@gmail.com>"
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>"
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* remotes/pmaydell/tags/pull-target-arm-20170717:
MAINTAINERS: Add entries for MPS2 board
hw/arm/mps2: Add ethernet
hw/arm/mps2: Add SCC
hw/misc/mps2_scc: Implement MPS2 Serial Communication Controller
hw/arm/mps2: Add timers
hw/char/cmsdk-apb-timer: Implement CMSDK APB timer device
hw/arm/mps2: Add UARTs
hw/char/cmsdk-apb-uart.c: Implement CMSDK APB UART
hw/arm/mps2: Implement skeleton mps2-an385 and mps2-an511 board models
target/arm: use DISAS_EXIT for eret handling
target/arm: use gen_goto_tb for ISB handling
target/arm/translate: ensure gen_goto_tb sets exit flags
target/arm/translate.h: expand comment on DISAS_EXIT
target/arm/translate: make DISAS_UPDATE match declared semantics
include/exec/exec-all: document common exit conditions
target/arm: Make Cortex-M3 and M4 default to 8 PMSA regions
qdev: support properties which don't set a default value
qdev-properties.h: Explicitly set the default value for arraylen properties
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch introduces the CPU model for z14, along with all base and
optional features.
Signed-off-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
The rotation is to the left, but extract shifts to the right.
The computation of the extract parameters needs adjusting.
For the entry condition, simplify
64 - rot + len <= 64
-rot + len <= 0
len <= rot
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reported-by: David Hildenbrand <david@redhat.com>
Suggested-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
STFL bits 4 and 5 are just indications to the guest of which TLB entries
an IDTE call will clear. These are performance indicators for the guest.
STFL bit 4:
INVALIDATE DAT TABLE ENTRY (IDTE) performs
the invalidation-and-clearing operation by
selectively clearing TLB segment-table entries
when a segment-table entry or entries are
invalidated. IDTE also performs the clearing-by-
ASCE operation. Unless bit 4 is one, IDTE simply
purges all TLBs. Bit 3 is one if bit 4 is one.
We can simply set STFL bit 4 ("idtes") and still purge the complete TLB.
Purging more than advertised is never bad. E.g. Linux doesn't even care
about this bit. We can optimize this later.
This is helpful, as the z9 base model contains this facility.
STFL bit 5 (clearing TLB region-table-entries) was never implemented on
real HW, therefore we can simply ignore it for now.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170627161032.5014-1-david@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Drop TRT from the set of insns handled internally by EXECUTE.
It's more important to adjust the existing helper to handle
both TRT and TRTR.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Since we require all registers saved on input, read R0 from ENV instead
of passing it manually. Recognize the specification exception when R0
contains incorrect data. Keep high bits of result registers unmodified
when in 31 or 24-bit mode.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Introduce the Skylake-Server CPU model, which inherits the features from
Skylake-Client and supports some additional features: AVX512,
CLWB and PGPE1GB.
Signed-off-by: Boqun Feng (Intel) <boqun.feng@gmail.com>
Message-Id: <20170621052935.20715-1-boqun.feng@gmail.com>
[ehabkost: copied comment about XSAVES from Skylake-Client]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Currently when running KVM, we expose "KVMKVMKVM\0\0\0" in
the 0x40000000 CPUID leaf. Other hypervisors (VMWare,
HyperV, Xen, BHyve) all do the same thing, which leaves
TCG as the odd one out.
The CPUID signature is used by software to detect which
virtual environment they are running in and (potentially)
change behaviour in certain ways. For example, systemd
supports a ConditionVirtualization= setting in unit files.
The virt-what command can also report the virt type it is
running on.
Currently both these apps have to resort to custom hacks
like looking for a 'fw-cfg' entry in /proc/device-tree
to identify TCG.
This change thus proposes a signature "TCGTCGTCGTCG" to be
reported when running under TCG.
To hide this, the -cpu option tcg-cpuid=off can be used.
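For reference, a small guest-side sketch (the standard x86 technique, not
part of this patch) of how software reads that leaf; under TCG with this
change the 12-byte signature reads "TCGTCGTCGTCG":

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned int eax = 0x40000000, ebx, ecx, edx;
        char sig[13];

        __asm__ volatile("cpuid"
                         : "+a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx));
        memcpy(sig + 0, &ebx, 4);   /* signature returned in EBX:ECX:EDX */
        memcpy(sig + 4, &ecx, 4);
        memcpy(sig + 8, &edx, 4);
        sig[12] = '\0';
        printf("hypervisor signature: %s\n", sig);
        return 0;
    }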
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-Id: <20170509132736.10071-3-berrange@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Use the same mask to avoid having to load two different constants.
Suggested-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
This patch fixes setting the DExcCode field of the CP0 Debug register
when the SDBBP instruction is executed. According to the EJTAG
specification, this field must be set to the value 9 (Bp).
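A hedged sketch of the fix being described (DExcCode occupies bits 14:10
of CP0 Debug per EJTAG; the helper and macro names below are illustrative,
not the actual QEMU diff):

    #include <stdint.h>

    #define CP0DB_DEXCCODE_SHIFT 10
    #define CP0DB_DEXCCODE_MASK  (0x1f << CP0DB_DEXCCODE_SHIFT)

    static uint32_t debug_set_dexccode(uint32_t cp0_debug, uint32_t code)
    {
        return (cp0_debug & ~CP0DB_DEXCCODE_MASK)
             | ((code << CP0DB_DEXCCODE_SHIFT) & CP0DB_DEXCCODE_MASK);
    }

    /* for SDBBP: debug_set_dexccode(env_cp0_debug, 9) -- 9 == Bp */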
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Message-id: 20170502120350.3368.92338.stgit@PASHA-ISP
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Previously DISAS_JUMP did ensure this but with the optimisation of
8a6b28c7 (optimize indirect branches) we might not leave the loop.
This means if any pending interrupts are cleared by changing IRQ flags
we might never get around to servicing them. You usually notice this
by seeing the lookup_tb_ptr() helper gainfully chaining TBs together
while cpu->interrupt_request remains high and the exit_request has not
been set.
This breaks amongst other things the OPTEE test suite which executes
an eret from the secure world after a non-secure world IRQ has gone
pending which then never gets serviced.
Instead of using the previously implied semantics of DISAS_JUMP we use
DISAS_EXIT which will always exit the run-loop.
CC: Etienne Carriere <etienne.carriere@linaro.org>
CC: Joakim Bech <joakim.bech@linaro.org>
CC: Jaroslaw Pelczar <j.pelczar@samsung.com>
CC: Peter Maydell <peter.maydell@linaro.org>
CC: Emilio G. Cota <cota@braap.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 20170713141928.25419-7-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
While an ISB will ensure any raised IRQs happen on the next
instruction it doesn't cause any to get raised by itself. We can
therefore use a simple tb exit for ISB instructions and rely on the
exit_request check at the top of each TB to deal with exiting if
needed.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 20170713141928.25419-6-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
As the gen_goto_tb function can do both static and dynamic jumps it
should also set the is_jmp field. This matches the behaviour of the
a64 code.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 20170713141928.25419-5-alex.bennee@linaro.org
[tweak to multiline comment formatting]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We already have an exit condition, DISAS_UPDATE which will exit the
run-loop. Expand on the difference with DISAS_EXIT in the comments.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 20170713141928.25419-4-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
DISAS_UPDATE should be used when the wider CPU state other than just
the PC has been updated and we should therefore exit the TCG runtime
and return to the main execution loop rather than assuming DISAS_JUMP would
do that.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 20170713141928.25419-3-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The Cortex-M3 and M4 CPUs always have 8 PMSA MPU regions (this isn't
a configurable option for the hardware). Make the default value of
the pmsav7-dregion property be set per-cpu, so we don't need to have
every user of these CPUs set it manually. (The existing default of
16 is correct for the other PMSAv7 core, the Cortex-R5.)
This fixes a bug where we were creating the M3 and M4 with
too many regions; most guest software would not notice or
care, though, since it would just not use the registers
associated with the unexpected extra regions.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-id: 1499788408-10096-4-git-send-email-peter.maydell@linaro.org
When a guest initializes radix mode, it issues an H_REGISTER_PROC_TBL
hypercall to update the LPCR of all CPUs. Hot-plugged CPUs inherit the
same setting under KVM but not under TCG. So, let's check for radix and
update the default LPCR to keep new CPUs in sync.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
So far, qemu implements the PAPR Hash Page Table (HPT) resizing extension
with TCG. The same implementation will work with KVM PR, but we don't
currently allow that. For KVM HV we can only implement resizing with the
assistance of the host kernel, which needs a new capability and ioctl()s.
This patch adds support for testing the new KVM capability and implementing
the resize in terms of KVM facilities when necessary. If we're running on
a kernel which doesn't have the new capability flag at all, we fall back to
testing for PR vs. HV KVM using the same hack that we already use in a
number of places for older kernels.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This patch implements hypercalls allowing a PAPR guest to resize its own
hash page table. This will eventually allow for more flexible memory
hotplug.
The implementation is partially asynchronous, handled in a special thread
running the hpt_prepare_thread() function. The state of a pending resize
is stored in SPAPR_MACHINE->pending_hpt.
The H_RESIZE_HPT_PREPARE hypercall will kick off creation of a new HPT, or,
if one is already in progress, monitor it for completion. If there is an
existing HPT resize in progress that doesn't match the size specified in
the call, it will cancel it, replacing it with a new one matching the
given size.
The H_RESIZE_HPT_COMMIT completes transition to a resized HPT, and can only
be called successfully once H_RESIZE_HPT_PREPARE has successfully
completed initialization of a new HPT. The guest must ensure that there
are no concurrent accesses to the existing HPT while this is called (this
effectively means stop_machine() for Linux guests).
For now H_RESIZE_HPT_COMMIT goes through the whole old HPT, rehashing each
HPTE into the new HPT. This can have quite high latency, but it seems to
be of the order of typical migration downtime latencies for HPTs of size
up to ~2GiB (which would be used in a 256GiB guest).
In future we probably want to move more of the rehashing to the "prepare"
phase, by having H_ENTER and other hcalls update both current and
pending HPTs. That's a project for another day, but should be possible
without any changes to the guest interface.
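A toy, self-contained model of the two-phase protocol described above
(state names and return strings are illustrative; real PAPR return codes
and the QEMU implementation differ):

    #include <stdio.h>

    enum resize_state { NO_PENDING, PREPARING, PREPARED };

    struct hpt_resize {
        enum resize_state state;
        int shift;                       /* requested HPT size (log2) */
    };

    static const char *h_resize_hpt_prepare(struct hpt_resize *r, int shift)
    {
        if (r->state != NO_PENDING && r->shift != shift) {
            r->state = NO_PENDING;       /* cancel a mismatched resize */
        }
        if (r->state == NO_PENDING) {
            r->state = PREPARING;        /* kick off the allocation thread */
            r->shift = shift;
            return "busy";               /* guest retries the hcall */
        }
        if (r->state == PREPARING) {
            r->state = PREPARED;         /* pretend allocation completed */
            return "busy";
        }
        return "success";
    }

    static const char *h_resize_hpt_commit(struct hpt_resize *r, int shift)
    {
        if (r->state != PREPARED || r->shift != shift) {
            return "error";              /* PREPARE must complete first */
        }
        /* the real code rehashes every HPTE into the new table here */
        r->state = NO_PENDING;
        return "success";
    }

    int main(void)
    {
        struct hpt_resize r = { NO_PENDING, 0 };
        printf("%s\n", h_resize_hpt_prepare(&r, 24));   /* busy */
        printf("%s\n", h_resize_hpt_prepare(&r, 24));   /* busy */
        printf("%s\n", h_resize_hpt_prepare(&r, 24));   /* success */
        printf("%s\n", h_resize_hpt_commit(&r, 24));    /* success */
        return 0;
    }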
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This introduces stub implementations of the H_RESIZE_HPT_PREPARE and
H_RESIZE_HPT_COMMIT hypercalls which we hope to add in a PAPR
extension to allow run time resizing of a guest's hash page table. It
also adds a new machine property for controlling whether this new
facility is available.
For now we only allow resizing with TCG, allowing it with KVM will require
kernel changes as well.
Finally, it adds a new string to the hypertas property in the device
tree, advertising to the guest the availability of the HPT resizing
hypercalls. This is a tentative suggested value, and would need to be
standardized by PAPR before being merged.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Merge remote-tracking branch 'remotes/borntraeger/tags/s390x-20170714' into staging
s390x/kvm/migration/cpumodel: fixes, enhancements and cleanups
- add a network boot rom for s390 (Thomas Huth)
- migration of storage attributes like the CMMA used/unused state
- PCI related enhancements - full support for aen, ais and zpci
- migration support for css with vmstates (Halil Pasic)
- cpu model enhancements for cpu features
- guarded storage support
# gpg: Signature made Fri 14 Jul 2017 11:33:04 BST
# gpg: using RSA key 0x117BBC80B5A61C7C
# gpg: Good signature from "Christian Borntraeger (IBM) <borntraeger@de.ibm.com>"
# Primary key fingerprint: F922 9381 A334 08F9 DBAB FBCA 117B BC80 B5A6 1C7C
* remotes/borntraeger/tags/s390x-20170714: (40 commits)
s390x/gdb: add gs registers
s390x/arch_dump: also dump guarded storage control block
s390x/kvm: enable guarded storage
s390x/kvm: Enable KSS facility for nested virtualization
s390x/cpumodel: add esop/esop2 to z12 model
s390x/cpumodel: we are always in zarchitecture mode
s390x/cpumodel: wire up new hardware features
s390x/flic: migrate ais states
s390x/cpumodel: add zpci, aen and ais facilities
s390x: initialize cpu firstly
pc-bios/s390: rebuild s390-ccw.img
pc-bios/s390: add s390-netboot.img
pc-bios/s390-ccw: Link libnet into the netboot image and do the TFTP load
pc-bios/s390-ccw: Add virtio-net driver code
pc-bios/s390-ccw: Add core files for the network bootloading program
roms/SLOF: Update submodule to latest status
pc-bios/s390-ccw: Add code for virtio feature negotiation
pc-bios/s390-ccw: Remove unused structs from virtio.h
pc-bios/s390-ccw: Move byteswap functions to a separate header
pc-bios/s390-ccw: Add a write() function for stdio
...
Conflicts:
target/s390x/kvm.c
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Introduce guarded storage support for KVM guests on s390.
We need to enable the capability, extend machine check validity,
sigp store-additional-status-at-address, and migration.
The feature is fenced for older machine type versions.
Signed-off-by: Fan Zhang <zhangfan@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
If the host supports the keyless subset (KSS) facility, then the first
level guest (G2) should enable the KSS facility as well.
Signed-off-by: Farhan Ali <alifm@linux.vnet.ibm.com>
Reviewed-by: Eric Farman <farman@linux.vnet.ibm.com>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Add esop and esop2 features to z12 model where esop2 was originally
introduced. Disable esop and esop2 when using compatibility machine
v2.9 or earlier.
Signed-off-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
In QEMU, a guest VCPU has always started in z/Architecture mode and has
never been able to leave it. Now we have an architected way of showing
this condition.
The SIGP SET ARCHITECTURE instruction is simply rejected. Linux as a
guest seems not to care about the return value, which is a good thing.
The new handling is just like already being in z/Architecture mode.
We'll not try to fake absence of this facility, but still not indicate
the facility in case some strange CPU model turned z/Architecture off
completely (which doesn't work either way, but lets us see how a
guest would react to the lack of this facility).
Signed-off-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Some new guest features have been introduced recently. Let's wire
them up in the CPU model.
Signed-off-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
[split patch]
zPCI instructions and facilities have been available since IBM zEnterprise
EC12. To support z/PCI in QEMU we enable the zpci, aen and ais facilities
starting with zEC12 GA1. We always set the zpci and aen bits in the max cpu
model; later they might be switched off depending on the real cpu model
that is applied. For the ais bit, we only provide it in the full cpu model
beginning with zEC12 and defer its enablement in the default cpu model to a
later point in time. At the same time, disable them for 2.9 and older
machines.
With the AIS facility introduced, we can check whether it is enabled and
initialize flic->ais_supported with the real value.
Signed-off-by: Yi Min Zhao <zyimin@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Currently, we do nothing for the SIC instruction, but we need to
implement it properly. Let's add proper handling in the backend code.
Co-authored-by: Yi Min Zhao <zyimin@linux.vnet.ibm.com>
Signed-off-by: Yi Min Zhao <zyimin@linux.vnet.ibm.com>
Signed-off-by: Fei Li <sherrylf@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Provide a mechanism to disable features in compatibility machines.
Signed-off-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Clean up spacing and add comments to clarify the difference between base, full and
default models.
Not having spacing around the model definitions in gen-features.c is
particularly frustrating as the reader tends to misinterpret which model they
are looking at or editing.
Signed-off-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
The remaining non-const ones are in e1000e, which modifies the description
at runtime. They can be addressed separately.
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <20170714021509.23681-6-famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Frontends should have an interface to set up the handler for a backend
change. The interface will be used in the next commits.
Signed-off-by: Anton Nefedov <anton.nefedov@virtuozzo.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <1499342940-56739-3-git-send-email-anton.nefedov@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Let's keep track of cmma enablement and move the mem_path check into
the actual enablement. This now also warns users who do not use
cpu models that cmma is disabled when using huge pages.
Signed-off-by: Janosch Frank <frankja@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Convert all uses of error_report("warning:"... to use warn_report()
instead. This helps standardise on a single method of printing warnings
to the user.
All of the warnings were changed using these two commands:
find ./* -type f -exec sed -i \
's|error_report(".*warning[,:] |warn_report("|Ig' {} +
Indentation fixed up manually afterwards.
The test-qdev-global-props test case was manually updated to ensure that
this patch passes make check (as the test cases are case sensitive).
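A concrete before/after of what the conversion looks like at a call site
(the message text below is illustrative, not a specific tree location):

    error_report("warning: image format was not specified");
    /* becomes */
    warn_report("image format was not specified");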
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Suggested-by: Thomas Huth <thuth@redhat.com>
Cc: Jeff Cody <jcody@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
Cc: Ronnie Sahlberg <ronniesahlberg@gmail.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Lieven <pl@kamp.de>
Cc: Josh Durgin <jdurgin@redhat.com>
Cc: "Richard W.M. Jones" <rjones@redhat.com>
Cc: Markus Armbruster <armbru@redhat.com>
Cc: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: Greg Kurz <groug@kaod.org>
Cc: Rob Herring <robh@kernel.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Peter Chubb <peter.chubb@nicta.com.au>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Marcel Apfelbaum <marcel@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: Alexander Graf <agraf@suse.de>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Greg Kurz <groug@kaod.org>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed by: Peter Chubb <peter.chubb@data61.csiro.au>
Acked-by: Max Reitz <mreitz@redhat.com>
Acked-by: Marcel Apfelbaum <marcel@redhat.com>
Message-Id: <e1cfa2cd47087c248dd24caca9c33d9af0c499b0.1499866456.git.alistair.francis@xilinx.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Merge remote-tracking branch 'remotes/yongbok/tags/mips-20170711' into staging
MIPS patches 2017-07-11
Changes:
* Fix MSA copy_[s|u]_df corner case of rd = 0
* Update malta to load the initrd at the end of the low memory
# gpg: Signature made Tue 11 Jul 2017 15:42:20 BST
# gpg: using RSA key 0x2238EB86D5F797C2
# gpg: Good signature from "Yongbok Kim <yongbok.kim@imgtec.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 8600 4CF5 3415 A5D9 4CFA 2B5C 2238 EB86 D5F7 97C2
* remotes/yongbok/tags/mips-20170711:
mips/malta: load the initrd at the end of the low memory
target/mips: fix msa copy_[s|u]_df rd = 0 corner case
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch fixes the msa copy_[s|u]_df instruction emulation when
the destination register rd is zero. Without this patch the zero
register would get clobbered, which should never happen because it
is supposed to be hardwired to 0.
Fix this corner case by explicitly checking for rd = 0 and effectively
making the emulation of these instructions a no-op in that case.
Signed-off-by: Miodrag Dinic <miodrag.dinic@imgtec.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
For v7M, writes to the CONTROL register are only permitted for
privileged code. However even if the code is privileged, the
write must not affect the SPSEL bit in the CONTROL register
if the CPU is in Thread mode (as documented in the pseudocode
for the MSR instruction). Implement this, instead of permitting
SPSEL to be written in all cases.
This was causing mbed applications not to run, because the
RTX RTOS they use relies on this behaviour.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1498820791-8130-1-git-send-email-peter.maydell@linaro.org
When running with KVM enabled, you can choose between emulating the
gic in kernel or user space. If the kernel supports in-kernel virtualization
of the interrupt controller, it will default to that. If not, it will
default to user space emulation.
Unfortunately when running in user mode gic emulation, we miss out on
interrupt events which are only available from kernel space, such as the timer.
This patch leverages the new kernel/user space pending line synchronization for
timer events. It does not handle PMU events yet.
Signed-off-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Message-id: 1498577737-130264-1-git-send-email-agraf@suse.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When running KVM on POWER, we allow the user to pass "-cpu POWERx" instead
of "-cpu host". This is achieved by patching the ppc_cpu_aliases[] array
so that "POWERx" points to the CPU class with the same PVR as the host CPU.
This causes CPUs to be instantiated from this CPU class instead of the
TYPE_HOST_POWERPC_CPU class which is used with "-cpu host". These CPUs thus
miss all the KVM specific tuning from kvmppc_host_cpu_class_init().
This currently causes QEMU with "-cpu POWER9" to fail when running KVM on a
POWER9 DD1 host:
qemu-system-ppc64: Register sync failed... If you're using kvm-hv.ko, only
"-cpu host" is possible
kvm_init_vcpu failed: Invalid argument
Let's have the "POWERx" alias to point to TYPE_HOST_POWERPC_CPU directly,
so that "-cpu POWERx" instantiates CPUs from the same class as "-cpu host".
Signed-off-by: Greg Kurz <groug@kaod.org>
Tested-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
In target/ppc/mmu-hash64.c there already exists the function
ppc_hash64_get_phys_page_debug() to get the physical (real) address for
a given effective address in hash mode.
Implement the function ppc_radix64_get_phys_page_debug() to allow a real
address to be obtained for a given effective address in radix mode.
This is used when a debugger is attached to qemu.
Previously we just had a comment saying this was unimplemented; execution
then fell through to the default case and aborted due to an unrecognised
mmu model, since the default had no case for the V3 mmu. That was
misleading at best.
We reuse ppc_radix64_walk_tree() which is used by the radix fault
handler since the process of walking the radix tree is identical.
Reported-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The mmu-radix64.c file implements functions to enable the radix mmu
emulation in tcg mode. There is a function ppc_radix64_walk_tree() which
performs the radix tree walk and also implicitly checks the pte
protection.
Move the protection checking of the pte from the ppc_radix64_walk_tree()
function into the caller. This means the ppc_radix64_walk_tree() function
can be used without protection checking which is useful for debugging.
ppc_radix64_walk_tree() no longer needs to take the rwx and prot variables.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Properly set the book E exception syndrome register when a floating
point exception occurs.
Currently on a book E processor, the POWERPC_EXCP_FP exception handler
fails to set "env->spr[SPR_BOOKE_ESR] = ESR_FP;" as required by the
book E specification.
Signed-off-by: Aaron Larson <alarson@ddci.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This patch is based on a similar patch from Stefan Hajnoczi -
commit c324fd0a39 ("virtio-pci: use ioeventfd even when KVM is disabled")
Do not check kvm_eventfds_enabled() when KVM is disabled since it
always returns 0. Since commit 8c56c1a592
("memory: emulate ioeventfd") it has been possible to use ioeventfds in
qtest or TCG mode.
This patch makes -device virtio-scsi-ccw,iothread=iothread0 work even
when KVM is disabled.
Currently we don't have an equivalent to "memory: emulate ioeventfd"
for ccw yet, but this doesn't hurt, and qemu-iotests 068 can pass while
skipping iothread arguments.
I have tested that virtio-scsi-ccw works under tcg both with and without
iothread.
This patch fixes qemu-iotests 068, which was accidentally merged early
despite the dependency on ioeventfd.
Signed-off-by: QingFeng Hao <haoqf@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <20170704132350.11874-2-haoqf@linux.vnet.ibm.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
The response for query-cpu-definitions didn't include the
unavailable-features field, which is used by libvirt to figure
out whether a certain cpu model is usable on the host.
The unavailable features are now computed by obtaining the host CPU
model and comparing it against the known CPU models. The comparison
takes into account the generation, the GA level and the feature
bitmaps. In the case of a CPU generation/GA level mismatch
a feature called "type" is reported to be missing.
As a result, the output of virsh domcapabilities would change
from something like
...
<mode name='custom' supported='yes'>
<model usable='unknown'>z10EC-base</model>
<model usable='unknown'>z9EC-base</model>
<model usable='unknown'>z196.2-base</model>
<model usable='unknown'>z900-base</model>
<model usable='unknown'>z990</model>
...
to
...
<mode name='custom' supported='yes'>
<model usable='yes'>z10EC-base</model>
<model usable='yes'>z9EC-base</model>
<model usable='no'>z196.2-base</model>
<model usable='yes'>z900-base</model>
<model usable='yes'>z990</model>
...
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
Message-Id: <1499082529-16970-1-git-send-email-mihajlov@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Add CONFIG_TCG for the frontend and backend files in the related
Makefiles.
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add tcg_enabled() checks where the x86 target needs to disable
TCG-specific code.
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This function calls tlb_set_page_with_attrs, which is not available
when TCG is disabled. Move it to excp_helper.c.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Split cpu_set_mxcsr() and make cpu_set_fpuc() inline, keeping the
TCG-specific code separate.
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Move cpu_get_fp80()/cpu_set_fp80() from fpu_helper.c to
machine.c because fpu_helper.c will be disabled if tcg is
disabled in the build.
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Move cpu_sync_bndcs_hflags() function from mpx_helper.c
to helper.c because mpx_helper.c needs to be disabled when
tcg is disabled.
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch pulls the implementation of the xsave and xrstor instructions
out of kvm.c and into the new files, so that it can be shared by
kvm and hvf.
Signed-off-by: Sergio Andres Gomez Del Real <Sergio.G.DelReal@gmail.com>
Message-Id: <20170626200832.11058-1-Sergio.G.DelReal@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sergio Andres Gomez Del Real <sergio.g.delreal@gmail.com>
pc.h and sysemu/kvm.h are also included from common code (where
CONFIG_KVM is not available), so the #defines that depend on CONFIG_KVM
should not be declared here to avoid that anybody is using them in a
wrong way. Since we're also going to poison CONFIG_KVM for common code,
let's move them to kvm_i386.h instead. Most of the dummy definitions
from sysemu/kvm.h are also unused since the code that uses them is
only compiled for CONFIG_KVM (e.g. target/i386/kvm.c), so the unused
defines are also simply dropped here instead of being moved.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1498454578-18709-3-git-send-email-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Move the handling of conforming code segments before the handling
of stack switch.
Because dpl == cpl after the new "if", it's now unnecessary to check
the C bit when testing dpl < cpl. Furthermore, dpl > cpl is checked
slightly above the modified code, so the final "else" is unreachable
and we can remove it.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In do_interrupt64(), when the interrupt stack table (IST) is enabled
and the target code segment is conforming (e2 & DESC_C_MASK), the
old implementation always set the new CPL to 0, and SS.RPL to 0.
This is incorrect: when CPL3 code accesses a CPL0 conforming code
segment, the CPL should remain unchanged. Otherwise higher privileged
code can be compromised.
The patch fixes this by always setting dpl = cpl when the target code
segment is conforming, and by modifying the last parameter `flags`, which
contains the correct new CPL, in cpu_x86_load_seg_cache().
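A minimal, self-contained sketch of the rule described above (names follow
the commit text; this is not the literal QEMU diff, and DESC_C_MASK is
reproduced here only for illustration):

    /* bit 10 of the descriptor's high word marks a conforming code segment */
    #define DESC_C_MASK (1u << 10)

    static unsigned int effective_dpl(unsigned int e2, unsigned int dpl,
                                      unsigned int cpl)
    {
        if (e2 & DESC_C_MASK) {
            return cpl;     /* conforming: privilege level stays unchanged */
        }
        return dpl;         /* non-conforming: use the descriptor's DPL */
    }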
Signed-off-by: Wu Xiang <willx8@gmail.com>
Message-Id: <20170621142152.GA18094@wxdeubuntu.ipads-lab.se.sjtu.edu.cn>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch simply replaces the separate boolean field in CPUState that
kvm, hax (and upcoming hvf) have for keeping track of vcpu dirtiness
with a single shared field.
Signed-off-by: Sergio Andres Gomez Del Real <Sergio.G.DelReal@gmail.com>
Message-Id: <20170618191101.3457-1-Sergio.G.DelReal@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add braces around if-statements.
No functional change.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Use extract32 instead of opencoding the shifting and masking.
No functional change.
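For illustration, the two forms below are equivalent; extract32() (from
include/qemu/bitops.h) simply states the start bit and field width
explicitly (the variable names here are made up):

    uint32_t field;
    field = (insn >> 21) & 0x1f;        /* open-coded shift and mask */
    field = extract32(insn, 21, 5);     /* same thing with extract32 */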
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Use bool instead of unsigned int to represent flags.
No functional change.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Introduce a use-pcmp-instr property making pcmp instructions
optional.
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Introduce a use-msr-instr property making msr instructions
optional.
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Introduce a use-hw-mul property making multiplication instructions
optional.
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Introduce a use-div property making division instructions
optional.
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Introduce a use-barrel property making barrel shifter instructions
optional.
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Add CPU versions 9.4, 9.5 and 9.6.
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Don't hard code 0xb as initial MB version.
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Correct bit shift for the PVR0 version field.
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
If ppc_cpu_realizefn() fails after cpu_exec_realizefn() has been
called, we will have to undo whatever cpu_exec_realizefn() did
by explicitly calling cpu_exec_unrealizefn(), which is currently
missing. Failure to do this proper cleanup will result in a CPU
that was never fully realized lingering on the cpus list, causing a
SIGSEGV later (e.g. when running "info cpus").
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The mmu fault handler should return 0 if it was able to successfully
handle the fault and a positive value otherwise.
Currently the tcg radix mmu fault handler will return 1 after
successfully handling a fault in virtual mode. This is incorrect
so fix it so that it returns 0 in this case.
The handler already correctly returns 0 when a fault was handled
in real mode and 1 if an interrupt was generated.
Fixes: d5fee0bbe6 ("target/ppc: Implement ISA V3.00 radix page fault handler")
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Since the introduction of MTTCG, using the msgsnd instruction
aborts if it is called without holding the BQL. So let's protect
that part of the code now with qemu_mutex_lock_iothread().
Buglink: https://bugs.launchpad.net/qemu/+bug/1694998
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Migrating between different CPU versions is a bit complicated for ppc.
A long time ago, we ensured identical CPU versions at either end by
checking the PVR had the same value. However, this breaks under KVM
HV, because we always have to use the host's PVR - it's not
virtualized. That would mean we couldn't migrate between hosts with
different PVRs, even if the CPUs are close enough to compatible in
practice (sometimes identical cores with different surrounding logic
have different PVRs, so this happens in practice quite often).
So, we removed the PVR check, but instead checked that several flags
indicating supported instructions matched. This turns out to be a bad
idea, because those instruction masks are not architected information, but
essentially a TCG implementation detail. So changes to qemu internal CPU
modelling can break migration - this happened between qemu-2.6 and
qemu-2.7. That was addressed by 146c11f1 "target-ppc: Allow eventual
removal of old migration mistakes".
Now, verification of CPU compatibility across a migration basically doesn't
happen. We simply ignore the PVR of the incoming migration, and hope the
cpu on the destination is close enough to work.
Now that we've cleaned up handling of processor compatibility modes
for the pseries machine type, we can do better. For new machine types
(pseries-2.10+) we allow migration if:
 * the source and destination PVRs are for the same type of CPU, as
   determined by the CPU class's pvr_match function, OR
 * the source was in a compatibility mode, and the destination CPU
   supports the same compatibility mode
For older machine types we retain the existing behaviour - current CAS
code will usually set a compat mode which would break backwards
migration if we made them use the new behaviour. [Fixed from an
earlier version by Greg Kurz].
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Tested-by: Andrea Bolognani <abologna@redhat.com>
Server class POWER CPUs have a "compat" property, which is used to set the
backwards compatibility mode for the processor. However, this only makes
sense for machine types which don't give the guest access to hypervisor
privilege - otherwise the compatibility level is under the guest's control.
To reflect this, this removes the CPU 'compat' property and instead
creates a 'max-cpu-compat' property on the pseries machine. Strictly
speaking this breaks compatibility, but AFAIK the 'compat' option was
never (directly) used with -device or device_add.
The option was used with -cpu. So, to maintain compatibility, this
patch adds a hack to the cpu option parsing to strip out any compat
options supplied with -cpu and set them on the machine property
instead of the now deprecated cpu property.
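By way of example (the values are illustrative), the old and new
command-line forms look like this:

    # previously (still accepted; the compat option is stripped from -cpu
    # and applied to the machine property instead)
    qemu-system-ppc64 -machine pseries -cpu host,compat=power8 ...
    # now preferred
    qemu-system-ppc64 -machine pseries,max-cpu-compat=power8 -cpu host ...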
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Tested-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Tested-by: Greg Kurz <groug@kaod.org>
Tested-by: Andrea Bolognani <abologna@redhat.com>
Add fsabs, fdabs, fsneg, fdneg, fsmove and fdmove.
The value is converted using the new floatx80_round() function.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170628204241.32106-7-laurent@vivier.eu>
fsglmul and fsgldiv truncate data to single precision before computing
results.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170628204241.32106-6-laurent@vivier.eu>
fmovecr moves a floating point constant from the
FPU ROM to a floating point register.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20170628204241.32106-3-laurent@vivier.eu>
use DisasCompare with FPU conditions in fscc and fbcc.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170628204241.32106-2-laurent@vivier.eu>
In some cases a failing VMSTATE_*_EQUAL does not mean we detected a bug;
it's actually the best we can do. Especially in these cases a verbose
error message is required.
Let's introduce infrastructure for specifying an error hint to be used if
the equality check fails. Let's do this by adding a parameter to the _EQUAL
macros called _err_hint. Also change all current users to pass NULL as
the last parameter so nothing changes for them.
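A hedged usage sketch (the device and field names are made up): after this
change the _EQUAL macros take a hint string, printed when the values on the
source and destination disagree, while existing callers simply pass NULL:

    static const VMStateField fields_example[] = {
        VMSTATE_UINT32_EQUAL(num_queues, ExampleDeviceState,
                             "number of queues must match on both ends"),
        VMSTATE_UINT32(flags, ExampleDeviceState),   /* unaffected macro */
        VMSTATE_END_OF_LIST()
    };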
Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Message-Id: <20170623144823.42936-1-pasic@linux.vnet.ibm.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Let's keep it very simple for now and flush the complete tlb; we
currently can't find the right entries in our tlb, as we would have
to store the used tables for each element.
As we now fully implement the DAT-enhancement facility, we can allow
it to be enabled for the qemu CPU model.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170622094151.28633-4-david@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
If only the page index is set, most likely we don't have a valid
virtual address. Let's do a full tlb flush for that case.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170622094151.28633-3-david@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Let's allow it to be enabled for the qemu cpu model and emulate it
correctly.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170622094151.28633-2-david@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Most of the PSW bits that were being copied into TB->flags
are not relevant to translation. Removing those that are
unnecessary reduces the amount of translation required.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
We missed the proper alignment in TRTO/TRTT, and were ignoring the M3
field for all TRXX insns without ETF2-ENH.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This facility bit includes execution-hint, load-and-trap,
miscellaneous-instruction-extensions and processor-assist.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This facility bit includes load-on-condition-2 and
load-and-zero-rightmost-byte.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This facility bit includes DFP-rounding, FPR-GR-transfer,
FPS-sign-handling, and IEEE-exception-simulation. We do
support all of these.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This adds support for the MOVE WITH OPTIONAL SPECIFICATIONS (MVCOS)
instruction. Allow it to be enabled for the qemu cpu model using
qemu-system-s390x ... -cpu qemu,mvcos=on ...
This allows booting a Linux kernel that uses it for uaccess.
We are missing (as for most other parts) low address protection checks,
PSW key / storage key checks and support for AR-mode.
We fake an ADDRESSING exception when called from problem state (which
seems to rely on PSW key checks to be in place) and if AR-mode is used.
User mode will always see a PRIVILEGED exception.
This patch is based on an original patch by Miroslav Benes (thanks!).
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170614133819.18480-3-david@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Such shifts are usually used to easily extract the PSW KEY from the PSW
mask, so let's avoid the confusing offset of 4.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170614133819.18480-2-david@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The FAC_ names were placeholders prior to the introduction
of the current facility modeling.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Merge remote-tracking branch 'remotes/armbru/tags/pull-qapi-2017-06-09-v2' into staging
QAPI patches for 2017-06-09
# gpg: Signature made Tue 20 Jun 2017 13:31:39 BST
# gpg: using RSA key 0x3870B400EB918653
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>"
# gpg: aka "Markus Armbruster <armbru@pond.sub.org>"
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653
* remotes/armbru/tags/pull-qapi-2017-06-09-v2: (41 commits)
tests/qdict: check more get_try_int() cases
console: use get_uint() for "head" property
i386/cpu: use get_uint() for "min-level"/"min-xlevel" properties
numa: use get_uint() for "size" property
pnv-core: use get_uint() for "core-pir" property
pvpanic: use get_uint() for "ioport" property
auxbus: use get_uint() for "addr" property
arm: use get_uint() for "mp-affinity" property
xen: use get_uint() for "max-ram-below-4g" property
pc: use get_uint() for "hpet-intcap" property
pc: use get_uint() for "apic-id" property
pc: use get_uint() for "iobase" property
acpi: use get_uint() for "pci-hole*" properties
acpi: use get_uint() for various acpi properties
acpi: use get_uint() for "acpi-pcihp-io*" properties
platform-bus: use get_uint() for "addr" property
bcm2835_fb: use {get, set}_uint() for "vcram-size" and "vcram-base"
aspeed: use {set, get}_uint() for "ram-size" property
pcihp: use get_uint() for "bsel" property
pc-dimm: make "size" property uint64
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Coldfire uses float64, but 680x0 use floatx80.
This patch introduces the use of floatx80 internally
and enables 680x0 80bits FPU.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170620205121.26515-4-laurent@vivier.eu>
on reset, set FP registers to NaN and control registers to 0
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170620205121.26515-3-laurent@vivier.eu>
Move code of fmove to/from control register to a function
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20170620205121.26515-2-laurent@vivier.eu>
These are properties of TYPE_X86_CPU, defined with DEFINE_PROP_UINT32()
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20170607163635.17635-40-marcandre.lureau@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
We would like to use the same QObject type to represent numbers, whether
they are ints, uints, or floats. Getters will allow some compatibility
between the various types if the number fits other representations.
Add a few more tests while at it.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20170607163635.17635-7-marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[parse_stats_intervals() simplified a bit, comment in
test_visitor_in_int_overflow() tidied up, suppress bogus warnings]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Exit to cpu loop so we reevaluate cpu_arm_hw_interrupts.
Tested-by: Emilio G. Cota <cota@braap.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Exit to cpu loop so we reevaluate cpu_s390x_hw_interrupts.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
While at it, drop the current_cpu assignment since this is a
per-thread variable on modern QEMU.
Cc: Vincent Palatin <vpalatin@chromium.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The V flag for subtraction is:
v = (res ^ src1) & (src1 ^ src2)
(see COMPUTE_CCR() in target/m68k/helper.c)
But gen_flush_flags() uses:
v = (res ^ src2) & (src1 ^ src2)
The problem has been found with the following program:
.global _start
_start:
move.l #-2147483648,%d0
subq.l #1,%d0
jvc 1f
move.l #1,%d1
move.l #1,%d0
trap #0
1:
move.l #0,%d1
move.l #1,%d0
trap #0
It works fine (exit(1)) on real hardware, and with "-singlestep".
"-singlestep" uses gen_helper_flush_flags(), whereas
without "-singlestep", V flag is computed directly in
gen_flush_flags().
This patch updates gen_flush_flags() to have the same result
as with gen_helper_flush_flags().
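For illustration, a small host-side C check of the corrected formula (not
QEMU code), matching the test program above:
#include <stdint.h>
#include <stdio.h>
/* V flag for res = src1 - src2, using the helper's formula. */
static uint32_t sub_v_flag(uint32_t src1, uint32_t src2)
{
    uint32_t res = src1 - src2;
    return ((res ^ src1) & (src1 ^ src2)) >> 31;
}
int main(void)
{
    printf("%u\n", sub_v_flag(0x80000000u, 1u)); /* -2147483648 - 1: prints 1 */
    printf("%u\n", sub_v_flag(5u, 1u));          /* no overflow:     prints 0 */
    return 0;
}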
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170614203905.19657-1-laurent@vivier.eu>
Let's properly expose the CPU type (machine-type number) via "STORE CPU
ID" and "STORE SUBSYSTEM INFORMATION".
As TCG emulates basic mode, the CPU identification number has the format
"Annnnn", whereby A is the CPU address, and n are parts of the CPU serial
number (0 for us for now).
A specification exception will be injected if the address is not aligned
to a double word. Low address protection will not be checked as
we're missing some more general support for that.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170609133426.11447-3-david@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
We can tell from the program interrupt code, whether a program interrupt
has to forward the address in the PGM new PSW
(suppressing/terminated/completed) to point at the next instruction, or
if it is nullifying and the PSW address does not have to be incremented.
So let's not modify the PSW address outside of the injection path and
handle this internally. We just have to handle instruction length
auto detection if no valid instruction length can be provided.
This should fix various program interrupt injection paths, where the
PSW was not properly forwarded.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170609142156.18767-3-david@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170609142156.18767-2-david@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-2.10-20170609' into staging
ppc patch queue 2017-06-09
This batch contains more patches to rework the pseries machine hotplug
infrastructure, plus an assorted batch of bugfixes.
It contains a start on fixes to restore migration from older machine
types on older versions which was broken by some xics changes. There
are still a few missing pieces here, though.
# gpg: Signature made Fri 09 Jun 2017 06:26:03 BST
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.10-20170609:
Revert "spapr: fix memory hot-unplugging"
xics: drop ICPStateClass::cpu_setup() handler
xics: setup cpu at realize time
xics: pass appropriate types to realize() handlers.
xics: introduce macros for ICP/ICS link properties
hw/cpu: core.c can be compiled as common object
hw/ppc/spapr: Adjust firmware name for PCI bridges
xics: add reset() handler to ICPStateClass
pnv_core: drop reference on ICPState object during CPU realization
spapr: Rework DRC name handling
spapr: Fold spapr_phb_{add,remove}_pci_device() into their only callers
spapr: Change DRC attach & detach methods to functions
spapr: Clean up handling of DR-indicator
spapr: Clean up RTAS set-indicator
spapr: Don't misuse DR-indicator in spapr_recover_pending_dimm_state()
spapr: Clean up DR entity sense handling
pseries: Correct panic behaviour for pseries machine type
spapr: fix memory leak in spapr_memory_pre_plug()
target/ppc: fix memory leak in kvmppc_is_mem_backend_page_size_ok()
target/ppc: pass const string to kvmppc_is_mem_backend_page_size_ok()
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The string returned by object_property_get_str() is dynamically allocated.
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This function has three implementations. Two are stubs that do nothing
and the third one only passes the obj_path argument to:
Object *object_resolve_path(const char *path, bool *ambiguous);
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
If the user disables SMM with '-machine smm=off', we
should not register the smram_listener, so that we can
avoid wasting memory in KVM for the added second
address space.
Meanwhile, we should assign the global kvm_state
before invoking kvm_arch_init(), because
pc_machine_is_smm_enabled() may use it via kvm_has_mm().
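Roughly, the registration gains a guard of this shape (sketch only; the
exact condition and symbol names in target/i386/kvm.c may differ):
/* Only register the SMRAM listener (and thus the second address space
 * in KVM) when SMM is actually enabled for this machine. */
if (kvm_has_smm() && pc_machine_is_smm_enabled(pcms)) {
    smram_machine_done.notify = register_smram_listener;
    qemu_add_machine_init_done_notifier(&smram_machine_done);
}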
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Message-Id: <1496316915-121196-1-git-send-email-arei.gonglei@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add an XML description for SSE registers (XMM+MXCSR) for both X86
and X86-64 architectures in the GDB stub:
- configure: Define gdb_xml_files for the X86 targets (32 and 64bit).
- gdb-xml/i386-32bit-sse.xml & gdb-xml/i386-64bit-sse.xml: The XML files
that contain a description of the XMM + MXCSR registers.
- gdb-xml/i386-32bit.xml & gdb-xml/i386-64bit.xml: wrappers that include
the XML file of the core registers and the other XML file of the SSE registers.
- target/i386/cpu.c: Modify the gdb_core_xml_file to the new XML wrapper,
modify the gdb_num_core_regs to fit the registers number defined in each
XML file.
Signed-off-by: Abdallah Bouassida <abdallah.bouassida@lauterbach.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This is a fix for the problem [1], where VMCB.CPL was set to 0 and an interrupt
was taken on the userspace stack. The root cause lies in the specific AMD CPU
behaviour which manifests itself as unusable segment attributes on SYSRET [2].
In this patch, flags are not touched even if the segment is unusable or not
present; therefore the CPL (which is stored in the DPL field) is not lost and
will be successfully restored on the kvm/svm kernel side.
Also, the current patch should not break the desired behavior described in commit
4cae9c9796 ("target-i386: kvm: clear unusable segments' flags in migration"),
since the present bit will be dropped if the segment is unusable or not present.
This is the second part of the whole fix of the corresponding problem [1]; the
first part is related to the kvm/svm kernel side and does exactly the same:
segment attributes are not zeroed out.
[1] Message id: CAJrWOzD6Xq==b-zYCDdFLgSRMPM-NkNuTSDFEtX=7MreT45i7Q@mail.gmail.com
[2] Message id: 5d120f358612d73fc909f5bfa47e7bd082db0af0.1429841474.git.luto@kernel.org
Signed-off-by: Roman Pen <roman.penyaev@profitbricks.com>
Signed-off-by: Mikhail Sennikovskii <mikhail.sennikovskii@profitbricks.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Michael Chapman <mike@very.puzzling.org>
Cc: qemu-devel@nongnu.org
Message-Id: <20170601085604.12980-1-roman.penyaev@profitbricks.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Running Windows with icount causes a crash in the write-CR instruction.
This patch fixes it.
Reading and writing CR cause an icount read because the cpu_get_apic_tpr
and cpu_set_apic_tpr functions are called. So gen_io_start()/gen_io_end()
calls are needed.
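The shape of the fix, sketched (the predicate and the 'reg'/'val' operands
are illustrative; the actual translate.c code differs in detail):
/* When icount is in use, bracket the CR helper call so the icount read
 * performed by the APIC TPR accessors happens at a well-defined point. */
if (s->tb->cflags & CF_USE_ICOUNT) {
    gen_io_start();
}
gen_helper_write_crN(cpu_env, tcg_const_i32(reg), val);
if (s->tb->cflags & CF_USE_ICOUNT) {
    gen_io_end();
}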
Signed-off-by: Mihail Abakumov <mikhail.abakumov@ispras.ru>
Message-Id: <ffb376034ff184f2fcbe93d5317d9e76@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This speeds up SMM switches. Later on it may remove the need to take
the BQL, and it may also allow reusing code between TCG and KVM.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Ignore env->a20_mask when running in system management mode.
Reported-by: Anthony Xu <anthony.xu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1494502528-12670-1-git-send-email-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add "Return and Deallocate" (rtd) instruction.
RTD #d
(SP) -> PC
SP + 4 + d -> SP
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Tested-By: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
Message-Id: <20170605100014.22981-1-laurent@vivier.eu>
We have to make the address in the old PSW point at the next
instruction, as addressing exceptions are suppressing and not
nullifying.
I assume that there are a lot of other broken cases (as most instructions
we care about are suppressing) - all trigger_pgm_exception() calls specifying
an explicit number or ILEN_LATER look suspicious; however, this is another
story that might require bigger changes (and I have to understand when
the address might already have been incremented first).
This is needed to make an upcoming kvm-unit-test work.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170529121228.2789-1-david@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The CDSG instruction requires a 16-byte alignment, as expressed in
the MO_ALIGN_16 passed to helper_atomic_cmpxchgo_be_mmu. In the
non-parallel case, use check_alignment to enforce this.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <20170604202034.16615-4-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Use a common helper with PACK ASCII as the differences are limited to
the stride of the source operand.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <20170531220129.27724-25-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
For that we need to make program_interrupt available to qemu-user.
Fortunately there is almost nothing to change as both kvm_enabled and
CONFIG_KVM evaluate to false in that case.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <20170531220129.27724-22-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
As MVCL and MVCLE only differ by their operands, use a common
do_mvcl helper. Optimize it by calling fast_memmove and fast_memset.
Correctly write back the addresses. Check that the r1 and r2/r3 registers
are even.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <20170531220129.27724-21-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
adj_len_to_page doesn't return the correct result when the address
is already page aligned and the length is bigger than a page. Fix that.
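The intended behaviour, as a sketch (not the exact QEMU helper): the
clamped length must never be zero, even for a page-aligned address.
/* Clamp 'len' so that [addr, addr + len) does not cross a page
 * boundary; a page-aligned addr yields a full page, not zero. */
static uint32_t clamp_len_to_page(uint32_t len, uint64_t addr)
{
    uint32_t room = TARGET_PAGE_SIZE - (addr & ~TARGET_PAGE_MASK);
    return len < room ? len : room;
}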
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <20170531220129.27724-20-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
As CLCL and CLCLE mostly differ by their operands, use a common do_clcl
helper. Another difference is that CLCL is not interruptible.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <20170531220129.27724-19-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
There are multiple issues with the COMPARE LOGICAL LONG EXTENDED
instruction:
- The test between the two operands is inverted, leading to an inversion
of the cc values 1 and 2.
- The address and length of an operand continue to be decreased after
reaching the end of this operand. These values are then wrongly written
back to the registers.
- We should limit the number of bytes to process, so that interrupts can
be served correctly.
At the same time rename dest into src1 and src into src3 to match the
operand names and make the code less confusing.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <20170531220129.27724-18-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Improve fix_address to also handle the 24-bit mode. Rename fix_address
to wrap_address to better explain what it does.
Replace the calls to get_address with x2 = 0 and b2 = 0 by calls to
wrap_address, leading to the removal of this function. Rename
get_address_31fix into get_address.
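A sketch of the resulting helper (close to, but not necessarily identical
to, the final code):
/* Wrap an address according to the addressing mode selected by the PSW:
 * 24-bit, 31-bit or 64-bit. */
static inline uint64_t wrap_address(CPUS390XState *env, uint64_t a)
{
    if (!(env->psw.mask & PSW_MASK_64)) {
        if (!(env->psw.mask & PSW_MASK_32)) {
            a &= 0x00ffffffULL;   /* 24-bit mode */
        } else {
            a &= 0x7fffffffULL;   /* 31-bit mode */
        }
    }
    return a;
}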
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <20170531220129.27724-15-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
These functions differ from COMPARE by generating an exception for a
QNaN input. Use the non-quiet version of floatXX_compare.
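For illustration (sketch; softfloat naming as in QEMU's fpu/softfloat.h,
with 'f1'/'f2' as placeholder operands):
/* COMPARE uses the quiet predicate; COMPARE AND SIGNAL uses the
 * signalling one, which raises invalid-op for QNaN inputs too. */
int cc_signal = float64_compare(f1, f2, &env->fpu_status);
int cc_quiet  = float64_compare_quiet(f1, f2, &env->fpu_status);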
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <20170531220129.27724-10-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
And at the same time make IPTE SMP aware.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <20170531220129.27724-4-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Currently we only present the plain z900 feature bits to the guest,
but QEMU already emulates some additional features (but not all of
the next CPU generation, so we can not use the next CPU level as
default yet). Since newer Linux kernels are checking the feature bits
and refuse to work if a required feature is missing, it would be nice
to have a way to present more of the supported features when we are
running with the "qemu" CPU.
This patch now adds the supported features to the "full_feat" bitmap,
so that additional features can be enabled on the command line now,
for example with:
qemu-system-s390x -cpu qemu,stfle=true,ldisp=true,eimm=true,stckf=true
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1495704132-5675-1-git-send-email-thuth@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
While the previous patch is required for proper conformance,
the vast majority of target insns are MVC and XC for implementing
memmove and memset respectively. The next most common are CLC,
TR, and SVC.
Implementing these (and a few others for which we already have
an implementation) directly is faster than going through full
translation to a TB.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Previously, helper_ex would construct the insn and then implement
the insn via direct calls to other helpers. This was sufficient to
boot Linux but that is all.
It is easy enough to go the whole nine yards by stashing state for
EXECUTE within the cpu, and then relying on a new TB to be created
that properly and completely interprets the insn.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This split will be required for implementing EXECUTE properly.
Do this now as a separate step to aid comparison of before and
after TB listings.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Use this saved value instead of recomputing from next_pc difference.
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Also provide the cross-cpu tlb flushing required by the PoO.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The PoO specifies that when R1==0, no ORing into the insn
loaded from storage takes place. Load a zero for this case.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
(1) The OR of the low bits of R1 into INSN was not being done
consistently; it was forgotten along all but the SVC path.
(2) The setting of ILEN was wrong on the SVC path for EXRL.
(3) The data load for ICM read too much.
Fix these by consolidating the data load at the beginning, using
get_ilen to control the number of bytes loaded, and ORing in
the byte from R1. Use extract64 from the full aligned insn
to extract arguments.
Pass in ILEN rather than RET as the more natural way to give
the required data along the SVC path.
Modify ENV->CC_OP directly rather than including it in the
functional interface.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Fix saving exception_index around mmu_translate; eliminate a dead store.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This will avoid needing forward declarations in following patches.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
TEST BLOCK was likely once used to execute basic memory
tests, but nowadays it's just a (slow) way to clear a page.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1495128400-23759-1-git-send-email-thuth@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
It's possible that one device kept its irqfd/virq there even when
MSI/MSIX was disabled globally for that device. One example is
virtio-net-pci (see commit f1d0f15a6 and virtio_pci_vq_vector_mask()).
It is used as a fast path to avoid allocating/releasing irqfd/virq
frequently when the guest enables/disables MSIX.
However, this fast path brought a problem to msi_route_list: the
device's MSIRouteEntry is still dangling there even if MSIX is disabled -
then we cannot know which message to fetch, and even if we could, the
messages would be meaningless. In this case, we can simply ignore this
entry. It's safe, since when MSIX is enabled again, we'll rebuild them no
matter what.
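The fix then amounts to a guard along these lines when walking
msi_route_list (sketch; the exact predicate may also consider plain MSI):
MSIRouteEntry *entry;
QLIST_FOREACH(entry, &msi_route_list, list) {
    /* A stale entry kept as a fast path: with MSI-X disabled its
     * messages are meaningless, so skip it; it will be refreshed
     * once MSI-X is enabled again. */
    if (!msix_enabled(entry->dev)) {
        continue;
    }
    /* ... fetch and update the message for this entry ... */
}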
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1448813
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <1494309644-18743-4-git-send-email-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-2.10-20170606' into staging
ppc patch queue 2017-06-06
Accumulated patches for ppc targets and the pseries machine type.
The big thing in this batch is a start on a substantial cleanup of the
pseries hotplug mechanisms, which were pretty confusing. For now
these shouldn't cause substantial behavioural changes, but I am hoping
these lead to clearer code and eventually to fixes for the bugs we
have in hotplug handling, particularly when hotplug and migration are
combined.
The remaining patches are mostly bugfixes.
# gpg: Signature made Tue 06 Jun 2017 03:48:50 BST
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.10-20170606:
spapr: Remove some non-useful properties on DRC objects
spapr: Eliminate spapr_drc_get_type_str()
spapr: Move configure-connector state into DRC
spapr: Clean up spapr_dr_connector_by_*()
spapr: Introduce DRC subclasses
spapr/drc: don't migrate DRC of cold-plugged CPUs and LMBs
spapr: Allow boot from vhost-*-scsi backends
ppc/pnv: check the return value of fdt_setprop()
spapr_nvram: Check return value from blk_getlength()
target/ppc: Fixup set_spr error in h_register_process_table
target-ppc: Fix openpic timer read register offset
spapr: Make DRC get_index and get_type methods into plain functions
spapr: Abolish DRC set_configured method
spapr: Abolish DRC get_fdt method
spapr: Move DRC RTAS calls into spapr_drc.c
migration: Mark CPU states dirty before incoming migration/loadvm
migration: remove register_savevm()
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Xtensa cores may have registers of types/sizes not supported by the
gdbstub accessors. Ignore writes to such registers and return zero on
read, but always return the correct register size, so that gdb on the other
side is able to access all registers in the packet holding unsupported
registers in the middle. This fixes gdb interaction with cores that have
vector/custom TIE registers.
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
In semihosting mode QEMU allows the guest to read and write host file
descriptors directly, including descriptors 0..2, a.k.a. stdin, stdout
and stderr. Sometimes it's desirable to have the semihosting console
controlled by the -serial option, e.g. to connect it to a network.
Add a semihosting console to xtensa-semi.c, open it in the 'sim' machine
in the presence of a -serial option and direct stdout and stderr to it
when it's present.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
The return value of read/write simcalls is not calculated correctly in
the case of operations crossing a page boundary and in the case of short
reads/writes. Read and write simcalls should return the size of the data
actually read/written, or -1 in case of error.
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Read and write simcalls map physical memory to access I/O buffers, but
the 'read' simcall needs to map it for writing and the 'write' simcall
needs to map it for reading, i.e. the opposite of what they do now. Fix
that.
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Merge remote-tracking branch 'remotes/ehabkost/tags/x86-and-machine-pull-request' into staging
x86 and machine queue, 2017-06-05
# gpg: Signature made Mon 05 Jun 2017 19:58:01 BST
# gpg: using RSA key 0x2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6
* remotes/ehabkost/tags/x86-and-machine-pull-request:
scripts: Test script to look for -device crashes
qemu.py: Add QEMUMachine.exitcode() method
qemu.py: Don't set _popen=None on error/shutdown
spapr: cleanup spapr_fixup_cpu_numa_dt() usage
numa: move numa_node from CPUState into target specific classes
numa: make hmp 'info numa' fetch numa nodes from qmp_query_cpus() result
numa: make sure that all cpus have has_node_id set if numa is enabled
numa: move default mapping init to machine
numa: consolidate cpu_preplug fixups/checks for pc/arm/spapr
pc: Use "min-[x]level" on compat_props
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Currently, under z/VM on a 0x2827, QEMU will detect a 0x2828 if no
IBC value is provided. QEMU will simply take the last model of that HW
generation, which happens to be the BC version.
Let's improve our search for that case by selecting the latest CPU
definition that matches the CPU type. This will, for example, avoid
detecting a z13 as a z13s.
We might still detect a GA2 version on a GA1 system, but as we don't
have further information at hand, there isn't too much we can do about
it. The alternative of always presenting the oldest GA is not backward
compatible, e.g.:
You're running on 0x2827 GA2.
An old QEMU version indicated "0x2828 GA1 == 0x2827 GA2". After you update
QEMU, you suddenly detect "0x2827 GA1". Your previous libvirt guest
might suddenly refuse to run.
In the end presenting a newer GA level does not matter because:
1: All GAX models share the same base feature set. A GAX++ might
support "more features".
2: Without an IBC, the guest can't detect the GA version.
If we have no IBC (esp. unblocked_ibc == 0), the IBC we will present
to the guest in read_SCP_info() will be 0. The guest will not know
which GA version it has. The problem of missing IBC propagates.
If we don't have a feature of the GA++ version, our guest won't have it
either. So in summary, the guest also has no idea of its GA version.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170531193434.6918-3-david@redhat.com>
Acked-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Reviewed-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
[improve patch description by reusing mailing list discussion]
Let's also properly forward that bit. It should always be set. I
verified it under z/VM; it seems to always be set there. For now,
zKVM guests never get that bit set when the CPU model is active.
The PoP mentions that z800 + z900 (HW generation 7) always set this
bit to 0, so let's take care of that.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170531193434.6918-2-david@redhat.com>
Acked-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
MIDA (modified indirect data addressing) is an optional facility, and
we (currently) don't support it. Let's post an operand exception if
the guest tries to set it in the orb and a channel program check
if it is set in a ccw, as specified in the Principles of Operation.
Reviewed-by: Claudio Imbrenda <imbrenda@linux.vnet.ibm.com>
Reviewed-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
As a rule, CPU internal state should never be updated when
!cpu->kvm_vcpu_dirty (or the HAX equivalent). If that is done, then
subsequent calls to cpu_synchronize_state() - usually safe and idempotent -
will clobber state.
However, we routinely do this during a loadvm or incoming migration.
Usually this is called shortly after a reset, which will clear all the cpu
dirty flags with cpu_synchronize_all_post_reset(). Nothing is expected
to set the dirty flags again before the cpu state is loaded from the
incoming stream.
This means that it isn't safe to call cpu_synchronize_state() from a
post_load handler, which is non-obvious and potentially inconvenient.
We could cpu_synchronize_all_state() before the loadvm, but that would be
overkill since a) we expect the state to already be synchronized from the
reset and b) we expect to completely rewrite the state with a call to
cpu_synchronize_all_post_init() at the end of qemu_loadvm_state().
To clear this up, this patch introduces cpu_synchronize_pre_loadvm() and
associated helpers, which simply marks the cpu state as dirty without
actually changing anything. i.e. it says we want to discard any existing
KVM (or HAX) state and replace it with what we're going to load.
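A minimal sketch of the KVM side of the new hook (the dirty-flag field
name is the one used at the time; treat the details as illustrative):
static void do_kvm_cpu_synchronize_pre_loadvm(CPUState *cpu,
                                              run_on_cpu_data arg)
{
    /* Mark cached register state as stale; whatever the incoming stream
     * loads will be pushed to KVM by cpu_synchronize_all_post_init(). */
    cpu->kvm_vcpu_dirty = true;
}
void kvm_cpu_synchronize_pre_loadvm(CPUState *cpu)
{
    run_on_cpu(cpu, do_kvm_cpu_synchronize_pre_loadvm, RUN_ON_CPU_NULL);
}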
Cc: Juan Quintela <quintela@redhat.com>
Cc: Dave Gilbert <dgilbert@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Move vcpu's associated numa_node field out of generic CPUState
into inherited classes that actually care about cpu<->numa mapping,
i.e: ARMCPU, PowerPCCPU, X86CPU.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1496161442-96665-6-git-send-email-imammedo@redhat.com>
[ehabkost: s/CPU is belonging to/CPU belongs to/ on comments]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Instead of unconditionally exiting to the exec loop, use the
gen_jr helper to jump to the target if it is valid.
Perf impact: see next commit's log.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1493263764-18657-10-git-send-email-cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This helper will be used by subsequent changes.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1493263764-18657-9-git-send-email-cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Instead of unconditionally exiting to the exec loop, use the
lookup_and_goto_ptr helper to jump to the target if it is valid.
Perf impact: see next commit's log.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1493263764-18657-7-git-send-email-cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Avoid a "cast from pointer to integer of different size" warning
by using the proper host type.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The cp15, CRn=15, opc1=0, CRm=5, opc2=0 instruction invalidates all the
data cache on the Cortex-R5. Implement it as a NOP.
Signed-off-by: Luc MICHEL <luc.michel@git.antfield.fr>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Merge remote-tracking branch 'remotes/juanquintela/tags/migration/20170601' into staging
migration/next for 20170601
# gpg: Signature made Thu 01 Jun 2017 17:51:04 BST
# gpg: using RSA key 0xF487EF185872D723
# gpg: Good signature from "Juan Quintela <quintela@redhat.com>"
# gpg: aka "Juan Quintela <quintela@trasno.org>"
# Primary key fingerprint: 1899 FF8E DEBF 58CC EE03 4B82 F487 EF18 5872 D723
* remotes/juanquintela/tags/migration/20170601:
migration: Move include/migration/block.h into migration/
migration: Export ram.c functions in its own file
migration: Create include for migration snapshots
migration: Export rdma.c functions in its own file
migration: Export tls.c functions in its own file
migration: Export socket.c functions in its own file
migration: Export fd.c functions in its own file
migration: Export exec.c functions in its own file
migration: Split qemu-file.h
migration: Remove unneeded includes of migration/vmstate.h
migration: shut src return path unconditionally
migration: fix leak of src file on dst
migration: Remove section_id parameter from vmstate_load
migration: loadvm handlers are not used
migration: Use savevm_handlers instead of loadvm copy
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Implement HFNMIENA support for the M profile MPU. This bit controls
whether the MPU is treated as enabled when executing at execution
priorities of less than zero (in NMI, HardFault or with the FAULTMASK
bit set).
Doing this requires us to use a different MMU index for "running
at execution priority < 0", because we will have different
access permissions for that case versus the normal case.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1493122030-32191-14-git-send-email-peter.maydell@linaro.org
The M series MPU is almost the same as the already implemented R
profile MPU (v7 PMSA). So all we need to implement here is the MPU
register interface in the system register space.
This implementation has the same restriction as the R profile MPU
that it doesn't permit regions to be sized down smaller than 1K.
We also do not yet implement support for MPU_CTRL.HFNMIENA; this
bit, if zero, should disable use of the MPU when running in HardFault,
NMI or with FAULTMASK set to 1 (i.e. at an execution priority of
less than zero) -- if the MPU is enabled we don't treat these
cases any differently.
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
Message-id: 1493122030-32191-13-git-send-email-peter.maydell@linaro.org
[PMM: Keep all the bits in mpu_ctrl field, rather than
using SCTLR bits for them; drop broken HFNMIENA support;
various cleanup]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The general logic is that operations stopped by the MPU are MemManage
faults, and those which go through the MPU and are caught by the
unassigned-access handler are BusFaults. Distinguish these by looking at
the exception.fsr values, and set the CFSR bits and (if appropriate)
fill in the BFAR or MMFAR with the exception address.
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
Message-id: 1493122030-32191-12-git-send-email-peter.maydell@linaro.org
[PMM: i-side faults do not set BFAR/MMFAR, only d-side;
added some CPU_LOG_INT logging]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
All M profile CPUs are PMSA, so set the feature bit.
(We haven't actually implemented the M profile MPU register
interface yet, but setting this feature bit gives us closer
to correct behaviour for the MPU-disabled case.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1493122030-32191-11-git-send-email-peter.maydell@linaro.org
Add support for the M profile default memory map which is used
if the MPU is not present or disabled.
The main behavioural effect of implementing this correctly
is that we set the PAGE_EXEC attribute on
the right regions of memory, such that device regions
are not executable.
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
Message-id: 1493122030-32191-10-git-send-email-peter.maydell@linaro.org
[PMM: rephrased comment and commit message; don't mark
the flash memory region as not-writable; list all
the cases in the default map explicitly rather than
using a 'default' case for the non-executable regions]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Improve the "-d mmu" tracing for the PMSAv7 MPU translation
process as an aid in debugging guest MPU configurations:
* fix a missing newline for a guest-error log
* report the region number with guest-error or unimp
logs of bad region register values
* add a log message for the overall result of the lookup
* print "0x" prefix for hex values
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1493122030-32191-9-git-send-email-peter.maydell@linaro.org
[PMM: a little tidyup, report region number in all messages
rather than just one]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Now that we enforce both:
* pmsav7_dregion == 0 implies has_mpu == false
* PMSA with has_mpu == false means SCTLR.M cannot be set
we can remove a check on pmsav7_dregion from get_phys_addr_pmsav7(),
because we can only reach this code path if the MPU is enabled
(and so region_translation_disabled() returned false).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1493122030-32191-8-git-send-email-peter.maydell@linaro.org
If the CPU is a PMSA config with no MPU implemented, then the
SCTLR.M bit should be RAZ/WI, so that the guest can never
turn on the non-existent MPU.
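Roughly, the SCTLR write path gains a clause like this (sketch; exact
placement and spelling in the write handler may differ):
/* On a PMSA core without an MPU the M bit is RAZ/WI: mask it out of
 * the value being written so the guest can never set it. */
if (arm_feature(env, ARM_FEATURE_PMSA) && !cpu->has_mpu) {
    value &= ~(uint64_t)SCTLR_M;
}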
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1493122030-32191-7-git-send-email-peter.maydell@linaro.org
Fix the handling of QOM properties for PMSA CPUs with no MPU:
Allow no-MPU to be specified by either:
* has-mpu = false
* pmsav7_dregion = 0
and make setting one imply the other. Don't clear the PMSA
feature bit in this situation.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1493122030-32191-6-git-send-email-peter.maydell@linaro.org
ARM CPUs come in two flavours:
* proper MMU ("VMSA")
* only an MPU ("PMSA")
For PMSA, the MPU may be implemented, or not (in which case there
is default "always acts the same" behaviour, but it isn't guest
programmable).
QEMU is a bit confused about how we indicate this: we have an
ARM_FEATURE_MPU, but it's not clear whether this indicates
"PMSA, not VMSA" or "PMSA and MPU present" , and sometimes we
use it for one purpose and sometimes the other.
Currently trying to implement a PMSA-without-MPU core won't
work correctly because we turn off the ARM_FEATURE_MPU bit
and then a lot of things which should still exist get
turned off too.
As the first step in cleaning this up, rename the feature
bit to ARM_FEATURE_PMSA, which indicates a PMSA CPU (with
or without MPU).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1493122030-32191-5-git-send-email-peter.maydell@linaro.org
Make M profile use completely separate ARMMMUIdx values from
those that A profile CPUs use. This is a prelude to adding
support for the MPU and for v8M, which together will require
6 MMU indexes which don't map cleanly onto the A profile
uses:
non secure User
non secure Privileged
non secure Privileged, execution priority < 0
secure User
secure Privileged
secure Privileged, execution priority < 0
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1493122030-32191-4-git-send-email-peter.maydell@linaro.org
The M profile CPU's MPU has an awkward corner case which we
would like to implement with a different MMU index.
We can avoid having to bump the number of MMU modes ARM
uses, because some of our existing MMU indexes are only
used by non-M-profile CPUs, so we can borrow one.
To avoid that getting too confusing, clean up the code
to try to keep the two meanings of the index separate.
Instead of ARMMMUIdx enum values being identical to core QEMU
MMU index values, they are now the core index values with some
high bits set. Any particular CPU always uses the same high
bits (so eventually A profile cores and M profile cores will
use different bits). New functions arm_to_core_mmu_idx()
and core_to_arm_mmu_idx() convert between the two.
In general core index values are stored in 'int' types, and
ARM values are stored in ARMMMUIdx types.
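The conversion helpers are essentially bit operations; a sketch (the
flag and mask constants here are illustrative, not the exact values):
#define ARM_MMU_IDX_A            0x10  /* illustrative A-profile flag */
#define ARM_MMU_IDX_COREIDX_MASK 0x7   /* illustrative core-index mask */
static inline int arm_to_core_mmu_idx(ARMMMUIdx mmu_idx)
{
    return mmu_idx & ARM_MMU_IDX_COREIDX_MASK;   /* strip the high bits */
}
static inline ARMMMUIdx core_to_arm_mmu_idx(int mmu_idx)
{
    return mmu_idx | ARM_MMU_IDX_A;   /* A profile only, in this sketch */
}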
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1493122030-32191-3-git-send-email-peter.maydell@linaro.org
When identifying the DFSR format for an alignment fault, use
the mmu index that we are passed, rather than calling cpu_mmu_index()
to get the mmu index for the current CPU state. This doesn't actually
make any difference since the only cases where the current MMU index
differs from the index used for the load are the "unprivileged
load/store" instructions, and in that case the mmu index may
differ but the translation regime is the same (apart from the
"use from Hyp mode" case which is UNPREDICTABLE).
However it's the more logical thing to do.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1493122030-32191-2-git-send-email-peter.maydell@linaro.org
The PMUv3 driver of the Linux kernel (in arch/arm64/kernel/perf_event.c)
relies on the PMUVER field of id_aa64dfr0_el1 to decide if PMU support
is present or not. This patch clears the PMUVER field under TCG mode
when vPMU=off. Without it, PMUv3 will init inside guest VMs even
with vPMU=off. This patch also removes a redundant line inside the
if-statement.
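The TCG-mode change boils down to something like this (sketch; PMUVer is
bits [11:8] of ID_AA64DFR0_EL1):
if (!cpu->has_pmu) {
    /* Hide PMUv3 from the guest: PMUVer == 0 means "not implemented". */
    cpu->id_aa64dfr0 &= ~0xf00ULL;
}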
Signed-off-by: Wei Huang <wei@redhat.com>
Message-id: 1495123889-32301-1-git-send-email-wei@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
The ReTurn from Exception (RTE) instruction loads the status register
(SR) with the saved status register (SSR). It has a delay slot, and
behaves specially according to the SH4 manual:
The SR value accessed by the instruction in the RTE delay slot is the
value restored from SSR by the RTE instruction. The SR and MD values
defined prior to RTE execution are used to fetch the instruction in
the RTE delay slot.
The instruction in the delay slot is often a NOP, so this doesn't cause
any issue most of the time, except in some rare cases where the NOP is
split into a different TB (for example when the TCG op buffer
is full). In that case the NOP is fetched with the user permissions
and causes an instruction TLB protection violation exception.
This patch fixes that by introducing a new delay slot flag for the
RTE instruction. Given it's a privileged instruction, the RTE delay
slot instruction is always fetched in privileged mode. It is therefore
enough to check for this flag in cpu_mmu_index.
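The resulting check is small; a sketch (flag name illustrative, and
assuming MMU index 1 is the user index as in target/sh4):
static inline int cpu_mmu_index(CPUSH4State *env, bool ifetch)
{
    /* An instruction in the RTE delay slot is fetched in privileged
     * mode, regardless of the MD bit already restored from SSR. */
    if (ifetch && (env->flags & DELAY_SLOT_RTE)) {
        return 0;                                    /* privileged */
    }
    return (env->sr & (1u << SR_MD)) == 0 ? 1 : 0;   /* 1 == user  */
}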
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Delay slots are indivisible, therefore avoid scheduling an interrupt in
the delay slot. However exceptions are possible.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
This will make easier the introduction of a new flag in the next
patches.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
When a masked exception happens, the SH4 CPU generates a non-masked
reset exception, which then jumps to the reset vector at address
0xA0000000. While this is emulated correctly in QEMU, this does not
work when using a kernel and initrd, as this address then contains an
illegal instruction (and there is no guarantee the kernel and initrd
haven't been overwritten).
Therefore call qemu_system_reset_request to reload the kernel and initrd
and load the program counter to the kernel entry point.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
qemu_log_mask() is preferred over fprintf() for logging errors.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Merge remote-tracking branch 'dgibson/tags/ppc-for-2.10-20170525' into staging
ppc patch queue 2017-05-25
Assorted accumulated patches. These are nearly all bugfixes at one
level or another - some for longstanding problems, others for some
regressions caused by more recent cleanups.
This includes preliminary patches towards fixing migration for Radix
Page Table guests under POWER9 and also fixing some migration
regressions due to the re-organization of the interrupt controller
code. Not all the pieces are there yet, so those still won't quite
work, but the preliminary changes make sense on their own.
# gpg: Signature made Thu 25 May 2017 04:50:00 AM BST
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* dgibson/tags/ppc-for-2.10-20170525:
xics: add unrealize handler
hw/ppc/spapr.c: recover pending LMB unplug info in spapr_lmb_release
hw/ppc: migrating the DRC state of hotplugged devices
hw/ppc: removing drc->detach_cb and drc->detach_cb_opaque
hw/ppc/spapr.c: adding pending_dimm_unplugs to sPAPRMachineState
spapr: add pre_plug function for memory
pseries: Restore support for total vcpus not a multiple of threads-per-core for old machine types
pseries: Split CAS PVR negotiation out into a separate function
spapr: fix error reporting in xics_system_init()
spapr_cpu_core: drop reference on ICP object during CPU realization
hw/ppc/spapr_events.c: removing 'exception' from sPAPREventLogEntry
spapr: ensure core_slot isn't NULL in spapr_core_unplug()
xics_kvm: cache already enabled vCPU ids
spapr: Consolidate HPT freeing code into a routine
spapr-cpu-core: release ICP object when realization fails
spapr: sanitize error handling in spapr_ics_create()
ppc/xics: simplify prototype of xics_spapr_init()
target/ppc: reset reservation in do_rfi()
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
For transitioning back to userspace after the interrupt.
Suggested-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Time to wire up all the call sites that request a shutdown or
reset to use the enum added in the previous patch.
It would have been less churn to keep the common case with no
arguments as meaning guest-triggered, and to modify only the
host-triggered code paths via a wrapper function, but then we'd
still have to audit that I didn't miss any host-triggered spots;
changing the signature forces us to double-check that I correctly
categorized all callers.
Since command line options can change whether a guest reset request
causes an actual reset vs. a shutdown, it's easy to also add the
information to reset requests.
Signed-off-by: Eric Blake <eblake@redhat.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au> [ppc parts]
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> [SPARC part]
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com> [s390x parts]
Message-Id: <20170515214114.15442-5-eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
The riccb is kept unchanged during initial cpu reset. Move the data
structure to the other registers that are unchanged.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Implement a basic infrastructure for handling channel I/O instruction
interception for passed-through subchannels:
1. Branch the code path of instruction interception handling by
SubChannel type.
2. For a passed-through subchannel, issue the ORB to the kernel to do CCW
translation and perform an I/O operation.
3. Assign a different condition code based on the I/O result, or
trigger a program check.
Signed-off-by: Xiao Feng Ren <renxiaof@linux.vnet.ibm.com>
Signed-off-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com>
Message-Id: <20170517004813.58227-12-bjsdjshi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
We want to support real (i.e. not virtual) channel devices
even for guests that do not support MCSS-E (where guests may
see devices from any channel subsystem image at once). As all
virtio-ccw devices are in css 0xfe (and show up in the default
css 0 for guests not activating MCSS-E), we need an option to
squash both the virtio subchannels and e.g. passed-through
subchannels from their real css (0-3, or 0 for hosts not
activating MCSS-E) into the default css. This will be
exploited in a later patch.
Signed-off-by: Xiao Feng Ren <renxiaof@linux.vnet.ibm.com>
Signed-off-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com>
Message-Id: <20170517004813.58227-4-bjsdjshi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
cannot_instantiate_with_device_add_yet was introduced by commit
efec3dd631 to replace no_user. It was
supposed to be a temporary measure.
When it was introduced, we had 54
cannot_instantiate_with_device_add_yet=true lines in the code.
Today (3 years later) this number has not shrunk: we now have
57 cannot_instantiate_with_device_add_yet=true lines. I think it
is safe to say it is not a temporary measure, and we won't see
the flag go away soon.
Instead of a long field name that misleads people into believing it
is temporary, replace it with a shorter and less misleading field:
user_creatable.
Except for code comments, changes were generated using the
following Coccinelle patch:
@@
expression DC;
@@
(
-DC->cannot_instantiate_with_device_add_yet = false;
+DC->user_creatable = true;
|
-DC->cannot_instantiate_with_device_add_yet = true;
+DC->user_creatable = false;
)
@@
typedef ObjectClass;
expression dc;
identifier class, data;
@@
static void device_class_init(ObjectClass *class, void *data)
{
...
dc->hotpluggable = true;
+dc->user_creatable = true;
...
}
@@
@@
struct DeviceClass {
...
-bool cannot_instantiate_with_device_add_yet;
+bool user_creatable;
...
}
@@
expression DC;
@@
(
-!DC->cannot_instantiate_with_device_add_yet
+DC->user_creatable
|
-DC->cannot_instantiate_with_device_add_yet
+!DC->user_creatable
)
Cc: Alistair Francis <alistair.francis@xilinx.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Marcel Apfelbaum <marcel@redhat.com>
Cc: Markus Armbruster <armbru@redhat.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Thomas Huth <thuth@redhat.com>
Acked-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Marcel Apfelbaum <marcel@redhat.com>
Acked-by: Marcel Apfelbaum <marcel@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20170503203604.31462-2-ehabkost@redhat.com>
[ehabkost: kept "TODO remove once we're there" comment]
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
This allows us to remove lots of includes of migration/migration.h
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Merge remote-tracking branch 'aurel32/tags/pull-target-sh4-20170513' into staging
Queued target/sh4 patches
# gpg: Signature made Sat 13 May 2017 10:25:41 AM BST
# gpg: using RSA key 0xBA9C78061DDD8C9B
# gpg: Good signature from "Aurelien Jarno <aurelien@aurel32.net>"
# gpg: aka "Aurelien Jarno <aurelien@jarno.fr>"
# gpg: aka "Aurelien Jarno <aurel32@debian.org>"
# Primary key fingerprint: 7746 2642 A9EF 94FD 0F77 196D BA9C 7806 1DDD 8C9B
* aurel32/tags/pull-target-sh4-20170513:
target/sh4: use cpu_loop_exit_restore
target/sh4: trap unaligned accesses
target/sh4: movua.l is an SH4-A only instruction
target/sh4: implement tas.b using atomic helper
target/sh4: generate fences for SH4
target/sh4: optimize gen_write_sr using extract op
target/sh4: optimize gen_store_fpr64
target/sh4: fold ctx->bstate = BS_BRANCH into gen_conditional_jump
target/sh4: only save flags state at the end of the TB
target/sh4: fix BS_EXCP exit
target/sh4: fix BS_STOP exit
target/sh4: move DELAY_SLOT_TRUE flag into a separate global
target/sh4: do not include DELAY_SLOT_TRUE in the TB state
target/sh4: get rid of DELAY_SLOT_CLEARME
target/sh4: split ctx->flags into ctx->tbflags and ctx->envflags
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Highlights:
* New "-numa cpu" option
* NUMA distance configuration
* migration/i386 vmstatification
Merge remote-tracking branch 'ehabkost/tags/x86-and-machine-pull-request' into staging
x86 and machine queue, 2017-05-11
Highlights:
* New "-numa cpu" option
* NUMA distance configuration
* migration/i386 vmstatification
# gpg: Signature made Thu 11 May 2017 08:16:07 PM BST
# gpg: using RSA key 0x2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
# gpg: Note: This key has expired!
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6
* ehabkost/tags/x86-and-machine-pull-request: (29 commits)
migration/i386: Remove support for pre-0.12 formats
vmstatification: i386 FPReg
migration/i386: Remove old non-softfloat 64bit FP support
tests: check -numa node,cpu=props_list usecase
numa: add '-numa cpu,...' option for property based node mapping
numa: remove node_cpu bitmaps as they are no longer used
numa: use possible_cpus for not mapped CPUs check
machine: call machine init from wrapper
numa: remove no longer need numa_post_machine_init()
tests: numa: add case for QMP command query-cpus
QMP: include CpuInstanceProperties into query_cpus output
virt-arm: get numa node mapping from possible_cpus instead of numa_get_node_for_cpu()
spapr: get numa node mapping from possible_cpus instead of numa_get_node_for_cpu()
pc: get numa node mapping from possible_cpus instead of numa_get_node_for_cpu()
numa: do default mapping based on possible_cpus instead of node_cpu bitmaps
numa: mirror cpu to node mapping in MachineState::possible_cpus
numa: add check that board supports cpu_index to node mapping
virt-arm: add node-id property to CPU
pc: add node-id property to CPU
spapr: add node-id property to sPAPR core
...
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Use cpu_loop_exit_restore when using cpu_restore_state and cpu_loop_exit
together.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
SH4 requires that memory accesses are naturally aligned, except for the
SH4-A movua.l instructions which can do unaligned loads.
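As an illustration of the approach (not the verbatim patch), alignment
checking is requested by adding MO_ALIGN to the TCG memory operation, so
that the target's unaligned-access hook fires on a misaligned guest
address; the register macros follow target/sh4/translate.c conventions:

    /* Naturally aligned 32-bit load: MO_ALIGN makes the TCG memory op
     * verify alignment and invoke do_unaligned_access on a violation. */
    tcg_gen_qemu_ld_i32(REG(B11_8), REG(B7_4), ctx->memidx,
                        MO_TESL | MO_ALIGN);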
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
At the same time, change the comment describing the instruction to match
the style of the other instructions, so that the code is easier to read and search.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
We only emulate UP SH4; however, as the tas.b instruction is used in the GNU
libc, this improves linux-user emulation.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
synco is a SH4-A only instruction.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
This doesn't change the generated code on x86, but optimizes it on most
RISC architectures and makes the code simpler to read.
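A minimal sketch of the resulting helper, assuming the SR condition bits
live in the separate cpu_sr_q/cpu_sr_m/cpu_sr_t globals as elsewhere in
target/sh4/translate.c (illustrative, not the verbatim patch):

    static void gen_write_sr(TCGv src)
    {
        /* Keep everything except Q, M and T in cpu_sr ... */
        tcg_gen_andi_i32(cpu_sr, src,
                         ~((1u << SR_Q) | (1u << SR_M) | (1u << SR_T)));
        /* ... and extract each single bit into its own global. */
        tcg_gen_extract_i32(cpu_sr_q, src, SR_Q, 1);
        tcg_gen_extract_i32(cpu_sr_m, src, SR_M, 1);
        tcg_gen_extract_i32(cpu_sr_t, src, SR_T, 1);
    }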
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Using extr and avoiding intermediate temps.
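Roughly, assuming the 32-bit FP register globals used by target/sh4, the
store collapses to a single split op (a sketch, not the exact patch):

    /* Split the 64-bit value into the two 32-bit halves of the register
     * pair in one TCG op, with no intermediate temporaries. */
    static inline void gen_store_fpr64(TCGv_i64 t, int reg)
    {
        tcg_gen_extr_i64_i32(cpu_fregs[reg + 1], cpu_fregs[reg], t);
    }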
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
There is no need to save flags when entering and exiting the delay slot.
They can be saved only when reaching the end of the TB. If the TB is
interrupted before by an exception, they will be restored using
restore_state_to_opc.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
In case of exception, there is no need to call tcg_gen_exit_tb as the
exception helper won't return.
Also fix a few cases where BS_BRANCH is used instead of BS_EXCP.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
When stopping the translation because the state has changed, goto_tb
should not be used as it might link TBs with different flags.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Instead of using one bit of the env flags to store the condition of the
next delay slot, use a separate global. It simplifies reading and
writing the flags variable and also removes some confusion between
ctx->envflags and env->flags.
Note that the global is first transferred to a temp in order to be
able to discard the global before the brcond.
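A sketch of the pattern (names approximate target/sh4/translate.c and are
not the verbatim patch): latch the global into a temp, discard the global
so it need not be kept alive across the branch, then branch on the temp:

    TCGLabel *l1 = gen_new_label();
    TCGv ds = tcg_temp_new();

    tcg_gen_mov_i32(ds, cpu_delayed_cond);   /* latch the condition */
    tcg_gen_discard_i32(cpu_delayed_cond);   /* dead past this point */
    tcg_gen_brcondi_i32(TCG_COND_NE, ds, 0, l1);
    tcg_temp_free(ds);
    /* fall through: condition false */
    gen_set_label(l1);
    /* condition true */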
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
DELAY_SLOT_TRUE is used as a dynamic condition for the branch after the
delay slot instruction. It is not used in code generation, so there is
no need to include it in the TB state.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Now that ctx->flags has been split, it becomes clear that
DELAY_SLOT_CLEARME has no impact on the code generation: in both cases
ctx->envflags is cleared, either by clearing all the flags, or by
setting it to 0. This is a left-over from the pre-TCG era.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
There is a confusion (and not only in the SH4 target) between tb->flags,
env->flags and ctx->flags. To avoid it, split ctx->flags into
ctx->tbflags and ctx->envflags. ctx->tbflags stays unchanged during the
whole TB translation, while ctx->envflags evolves and is kept in sync
with env->flags using TCG instructions. ctx->envflags now only contains
the part of env->flags that is contained in the TB state, i.e. the
DELAY_SLOT* flags.
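In outline, the DisasContext then carries the two views side by side (a
sketch of the shape, with unrelated fields omitted):

    typedef struct DisasContext {
        uint32_t tbflags;   /* copy of tb->flags, constant for the whole TB */
        uint32_t envflags;  /* DELAY_SLOT* bits, kept in sync with env->flags
                               by TCG stores as translation proceeds */
        /* other fields unchanged */
    } DisasContext;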
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
The SIGNAL PROCESSOR helper returns its value through the CC register.
set_cc_static should be called just after the helper.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <20170509082800.10756-3-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
For that, move the definition from kvm.c to cpu.h.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <20170509082800.10756-2-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Eric Bischoff <ebischoff@nerim.net>
Message-Id: <20170228120134.7921-1-ebischoff@suse.com>
[rth: Combine the two via insn->data; free the address temps.]
Signed-off-by: Richard Henderson <rth@twiddle.net>
All of the interlocked access facility instructions raise a
specification exception for unaligned accesses. Do this by
using the (previously unused) unaligned_access hook.
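A hedged sketch of what such a hook can look like: restore the guest state
for the faulting instruction, then post a specification program interrupt
(the helper names follow target/s390x conventions but the body is
illustrative, not the verbatim patch):

    static void s390x_cpu_do_unaligned_access(CPUState *cs, vaddr addr,
                                              MMUAccessType access_type,
                                              int mmu_idx, uintptr_t retaddr)
    {
        S390CPU *cpu = S390_CPU(cs);
        CPUS390XState *env = &cpu->env;

        /* Point the guest PSW at the faulting instruction ... */
        cpu_restore_state(cs, retaddr);
        /* ... and raise the architecturally required exception
         * (the instruction-length argument here is illustrative). */
        program_interrupt(env, PGM_SPECIFICATION, 6);
    }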
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Linux arch/s390/kernel/head(64).S uses the LPP instruction if it is
available in the facilities list provided by the stfl/stfle instruction.
This is the case for newer z/System generations and their qemu
definition.
The description of LPP is at
http://www-01.ibm.com/support/docview.wss?uid=isg26fcd1cc32246f4c8852574ce0044734a
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Miroslav Benes <mbenes@suse.cz>
Message-Id: <20170227085353.20787-1-mbenes@suse.cz>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Remove support for versions of the CPU state prior to 11
which is the version used in qemu 0.12 - you'd be pretty
lucky if you got a migration stream to work from anything
that old anyway. This doesn't affect the machine type
definition in any way.
My main reason for doing this is the hack for sysenter_esp/eip
that uses .get/.put's in state versions less than 7 (that's
prior to somewhere before 0.10).
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20170405190024.27581-4-dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Convert the fpreg save/restore to use VMSTATE_ macros rather than
.get/.put.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20170405190024.27581-3-dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Long long ago, we used to support storing the x86 FP registers in
a 64bit format.
Then c31da136a0 in v0.14-rc0 removed
the last support for writing that in the migration format.
Even before that, it was only used if you had softfloat disabled
(i.e. !USE_X86LDOUBLE) so in practice use of it in even earlier
qemu is unlikely for most users.
Kill it off, it's complicated, and possibly broken.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20170405190024.27581-2-dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
it will allow switching from cpu_index to property based
numa mapping in follow up patches.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Message-Id: <1494415802-227633-5-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
it will allow switching from cpu_index to property based
numa mapping in follow up patches.
PS:
patch changes the default value of CPUState::numa_node from 0
to CPU_UNSET_NUMA_NODE_ID. The only place for x86 that
would be affected is the monitor's 'info numa' command, which
uses that field. However, the legacy 0 value is still preserved
by pc_cpu_pre_plug() in this patch if user/numa.c hasn't
set it explicitly, so there is no change in behavior.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1494415802-227633-4-git-send-email-imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1493816238-33120-3-git-send-email-imammedo@redhat.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Change the nested if statements into a flat format, to make
it clearer what validation / capping is being performed on
different CPUID index values.
NB this changes behaviour when "index > env->cpuid_xlevel2".
This won't have any guest-visible effect because there is
no CPUID[0xC0000001] feature supported by TCG, and KVM code
will never call cpu_x86_cpuid() with such an index value.
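Schematically, the flattened capping logic reads like this (using the
existing CPUX86State fields; an illustration of the shape, not the
verbatim patch):

    uint32_t limit;

    if (index >= 0xC0000000) {
        limit = env->cpuid_xlevel2;     /* Centaur/VIA range */
    } else if (index >= 0x80000000) {
        limit = env->cpuid_xlevel;      /* extended range */
    } else {
        limit = env->cpuid_level;       /* basic range */
    }
    if (index > limit) {
        /* Intel documents that requesting an out-of-range leaf
         * reports the highest basic leaf instead. */
        index = env->cpuid_level;
    }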
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-Id: <20170509132736.10071-2-berrange@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
When running with KVM, we update the "family" CPU alias to point
to the right host CPU type, so that it is for example possible to
use "-cpu POWER8" on a POWER8NVL host. However, the function for
printing the list of available CPU models is called earlier than
the KVM setup code, so the output of "-cpu help" is wrong in that
case. Since it would be somewhat ugly anyway to have different
help texts depending on whether "-enable-kvm" has been specified
or not, we should better always print the same text, so fix this
issue by printing "alias for preferred XXX CPU" instead.
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
POWER9 DD1 silicon has some bugs which mean it a) isn't really compliant
with the ISA v3.00 and b) requires a number of special workarounds in the
kernel.
At the moment, qemu isn't aware of DD1. For TCG we don't really want it to
be (why bother emulating buggy silicon). But with KVM, the guest does need
to be aware of DD1 so it can apply the necessary workarounds.
Meanwhile, the feature negotiation between qemu and the guest strongly
favours architected compatibility modes to "raw" CPU modes. In combination
with the above, this means the guest sees architected POWER9 mode, and
doesn't apply the DD1 workarounds. Well, unless it has yet another
workaround to partially ignore what qemu tells it.
This patch addresses this by disabling support for compatibility modes when
using KVM on a POWER9 DD1 host.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
ISA V3.00 introduced a new radix mmu model. Implement the page fault
handler for this so we can run a tcg guest in radix mode and perform
address translation correctly.
In real mode (mmu turned off) addresses are masked to remove the top
4 bits and are then subject to partition scoped translation; since we only
support pseries at this stage, it is only necessary to perform the masking
and then we're done.
In virtual mode (mmu turned on) address translation is performed as
follows:
1. Use the quadrant to determine the fully qualified address.
The fully qualified address is defined as the combination of the effective
address, the effective logical partition id (LPID) and the effective
process id (PID). Based on the quadrant (EA63:62) we set the pid and lpid
like so:
quadrant 0: lpid = LPIDR, pid = PIDR
quadrant 1: HV only (not allowed in pseries)
quadrant 2: HV only (not allowed in pseries)
quadrant 3: lpid = LPIDR, pid = 0
If we can't get the fully qualified address we raise a segment interrupt.
2. Find the guest radix tree
We ask the virtual hypervisor for the partition table which was registered
with H_REGISTER_PROC_TBL which points us to the process table in guest
memory. We then index this table by pid to get the process table entry
which points us to the appropriate radix tree to translate the address.
If the process table isn't big enough to contain an entry for the current
pid then we raise a storage interrupt.
3. Walk the radix tree
Next we walk the radix tree where each level is a table of page directory
entries indexed by some number of bits from the effective address, where
the number of bits is determined by the table size. We continue to walk
the tree (while entries are valid and the table is of minimum size) until
we reach a table of page table entries, indicated by having the leaf bit
set. The appropriate pte is then checked for sufficient access permissions,
the reference and change bits are updated and the real address is
calculated from the real page number bits of the pte and the low bits of
the effective address.
If we can't find an entry or can't access the entry because of permissions
then we raise a storage interrupt.
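A hedged sketch of step 1, the quadrant selection (SPR names follow
target/ppc conventions; the helper shape is illustrative rather than the
exact patch):

    /* Derive the fully qualified address (EA + lpid + pid) from the
     * quadrant in EA63:62, i.e. the two most significant bits. */
    static bool get_fully_qualified_addr(CPUPPCState *env, vaddr eaddr,
                                         uint64_t *lpid, uint64_t *pid)
    {
        switch (extract64(eaddr, 62, 2)) {
        case 0:                           /* quadrant 0 */
            *lpid = env->spr[SPR_LPIDR];
            *pid = env->spr[SPR_BOOKS_PID];
            return true;
        case 3:                           /* quadrant 3 */
            *lpid = env->spr[SPR_LPIDR];
            *pid = 0;
            return true;
        default:                          /* quadrants 1 and 2: HV only */
            return false;                 /* caller raises a segment interrupt */
        }
    }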
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
[dwg: Add missing parentheses to macro]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The tlbie[l] instructions are used to invalidate TLB entries used to cache
address translations.
In ISAv3.00 (POWER9) more fields were added to the tlbie[l] instructions
which were previously invalid. We don't care about any of these new fields
since we just invalidate the whole world anyway but we need to not
cause an illegal instruction exception when the instructions are called.
We also don't want to allow an older processor to have these fields set
since that would be invalid.
Add a new GEN_HANDLER for the ISAv3 instructions with the correct invalid
mask. These will only be generated to a POWER9 processor for now based on
the instruction flag. Also remove the PPC_MEM_TLBIE instruction flag from
the POWER9 processor definition to ensure the old tlbie isn't generated.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The Guest Translation Shootdown Enable (GTSE) bit in the Logical Partition
Control Register (LPCR) can be set to enable a guest to use the tlbie
instruction directly to invalidate translations.
When the GTSE bit is set then the tlbie instruction is supervisor
privileged, otherwise it is hypervisor privileged.
Add a guest translation shootdown enable (gtse) field to the disassembly
context and use this to check the correct privilege level at code
generation time.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
In the case when an atomic operation is not supported, exit_atomic is
called; we then stop the world and execute the atomic operation. This results
in a following call chain:
tcg_gen_atomic_cmpxchg_tl()
-> gen_helper_exit_atomic()
-> HELPER(exit_atomic)
-> cpu_loop_exit_atomic() -> EXCP_ATOMIC
-> qemu_tcg_cpu_thread_fn() => case EXCP_ATOMIC
-> cpu_exec_step_atomic()
-> cpu_step_atomic()
-> cc->cpu_exec_enter() = ppc_cpu_exec_enter()
Sets env->reserve_addr = -1;
But by the time control returns, the reservation has been erased and the code
fails; this continues forever and the lock is never taken.
Instead set this in powerpc_excp()
Now that ppc_cpu_exec_enter() doesn't have anything meaningful to do,
let us get rid of the function.
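Roughly, the fix amounts to clearing the reservation whenever an exception
is delivered (a sketch against the existing powerpc_excp() flow, not the
full patch):

    static void powerpc_excp(PowerPCCPU *cpu, int excp_model, int excp)
    {
        CPUPPCState *env = &cpu->env;

        /* ... existing exception dispatch ... */

        /* Taking any exception/interrupt loses the reservation, so the
         * restarted larx/stcx. sequence under cpu_exec_step_atomic() can
         * make forward progress instead of looping forever. */
        env->reserve_addr = (target_ulong)-1;
    }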
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This enables the multi-threaded system emulation by default for PPC64
guests using the x86_64 TCG back-end.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Emulating LL/SC with cmpxchg is not correct, since it can suffer from
the ABA problem. However, portable parallel code is written assuming
only cmpxchg which means that in practice this is a viable alternative.
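For instance, a PowerPC store-conditional can then be lowered to a host
cmpxchg against the value read by the earlier load-and-reserve (a sketch
using the generic TCG atomic op and reserve globals; names are
approximate, not the verbatim patch):

    static void gen_conditional_store(DisasContext *ctx, TCGv EA,
                                      int reg, TCGMemOp memop)
    {
        TCGLabel *fail = gen_new_label();
        TCGLabel *done = gen_new_label();
        TCGv t0 = tcg_temp_new();

        /* The reservation must still cover this address. */
        tcg_gen_brcond_tl(TCG_COND_NE, EA, cpu_reserve, fail);

        /* Atomically store the new value iff memory still holds the value
         * read by lwarx/ldarx; an ABA change in between goes unnoticed,
         * which is the compromise described above. */
        tcg_gen_atomic_cmpxchg_tl(t0, cpu_reserve, cpu_reserve_val,
                                  cpu_gpr[reg], ctx->mem_idx, memop);
        /* ... set CR0.EQ from (t0 == cpu_reserve_val) ... */
        tcg_temp_free(t0);
        tcg_gen_br(done);

        gen_set_label(fail);
        /* ... clear CR0.EQ: the store did not happen ... */
        gen_set_label(done);
        tcg_gen_movi_tl(cpu_reserve, -1);
    }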
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We now have macros in place to make it less verbose to add a scalar
to QDict and QList, so use them.
Patch created mechanically via:
spatch --sp-file scripts/coccinelle/qobject.cocci \
--macro-file scripts/cocci-macro-file.h --dir . --in-place
then touched up manually to fix a couple of '?:' back to original
spacing, as well as avoiding a long line in monitor.c.
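For reference, this is the kind of rewrite the semantic patch performs
(using the qdict_put_int()/qdict_put_str() helpers this series relies on):

    /* before */
    qdict_put(dict, "id", qint_from_int(id));
    qdict_put(dict, "device", qstring_from_str(device));

    /* after */
    qdict_put_int(dict, "id", id);
    qdict_put_str(dict, "device", device);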
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20170427215821.19397-7-eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
A large set of small patches. I have not yet included vhost-user-scsi,
but it'll come in the next pull request.
* use GDB XML register description for x86
* use _Static_assert in QEMU_BUILD_BUG_ON
* add "R:" to MAINTAINERS and get_maintainers
* checkpatch improvements
* dump threading fixes
* first part of vhost-user-scsi support
* QemuMutex tracing
* vmw_pvscsi and megasas fixes
* sgabios module update
* use Rev3 (ACPI 2.0) FADT
* deprecate -hdachs
* improve -accel documentation
* hax fix
* qemu-char GSource bugfix
Merge remote-tracking branch 'bonzini/tags/for-upstream' into staging
A large set of small patches. I have not yet included vhost-user-scsi,
but it'll come in the next pull request.
* use GDB XML register description for x86
* use _Static_assert in QEMU_BUILD_BUG_ON
* add "R:" to MAINTAINERS and get_maintainers
* checkpatch improvements
* dump threading fixes
* first part of vhost-user-scsi support
* QemuMutex tracing
* vmw_pvscsi and megasas fixes
* sgabios module update
* use Rev3 (ACPI 2.0) FADT
* deprecate -hdachs
* improve -accel documentation
* hax fix
* qemu-char GSource bugfix
# gpg: Signature made Fri 05 May 2017 06:10:40 AM EDT
# gpg: using RSA key 0xBFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* bonzini/tags/for-upstream: (21 commits)
vhost-scsi: create a vhost-scsi-common abstraction
libvhost-user: replace vasprintf() to fix build
get_maintainer: add subsystem to reviewer output
get_maintainer: --r (list reviewer) is on by default
get_maintainer: it's '--pattern-depth', not '-pattern-depth'
get_maintainer: Teach get_maintainer.pl about the new "R:" tag
MAINTAINERS: Add "R:" tag for self-appointed reviewers
Fix the -accel parameter and the documentation for 'hax'
dump: Acquire BQL around vm_start() in dump thread
hax: Fix memory mapping de-duplication logic
checkpatch: Disallow glib asserts in main code
trace: add qemu mutex lock and unlock trace events
vmw_pvscsi: check message ring page count at initialisation
sgabios: update for "fix wrong video attrs for int 10h,ah==13h"
scsi: avoid an off-by-one error in megasas_mmio_write
vl: deprecate the "-hdachs" option
use _Static_assert in QEMU_BUILD_BUG_ON
target/i386: Add GDB XML register description support
char: Fix removing wrong GSource that be found by fd_in_tag
hw/i386: Build-time assertion on pc/q35 reset register being identical.
...
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Merge remote-tracking branch 'shorne/tags/pull-or-20170504' into staging
Openrisc Features and Fixes for qemu 2.10
# gpg: Signature made Thu 04 May 2017 01:41:45 AM BST
# gpg: using RSA key 0xC3B31C2D5E6627E4
# gpg: Good signature from "Stafford Horne <shorne@gmail.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: D9C4 7354 AEF8 6C10 3A25 EFF1 C3B3 1C2D 5E66 27E4
* shorne/tags/pull-or-20170504:
target/openrisc: Support non-busy idle state using PMR SPR
target/openrisc: Remove duplicate features property
target/openrisc: Implement full vmstate serialization
migration: Add VMSTATE_STRUCT_2DARRAY()
target/openrisc: implement shadow registers
migration: Add VMSTATE_UINTTL_2DARRAY()
target/openrisc: add numcores and coreid support
target/openrisc: Fixes for memory debugging
target/openrisc: Implement EPH bit
target/openrisc: Implement EVBAR register
MAINTAINERS: Add myself as openrisc maintainer
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
hax_update_mapping() avoids unnecessary and potentially expensive
calls to HAX_VM_IOCTL_SET_RAM by computing the net result (i.e.
effective mapping changes) of each MemoryRegion transaction, with
the help of a linked list of HAXMapping objects.
However, when processing a new mapping that overlaps with an
existing mapping in the list, it fails to handle the case where the
start address of the new mapping is above that of the existing
mapping in the guest physical address space. This happens when QEMU
is launched with "-machine q35 -enable-hax", which involves the
following MemoryRegion transaction for digging the VGA hole:
region_del: 0x00000000->0x08000000 VA 05fa0000 ('pc.ram')
region_add: 0x00000000->0x000a0000 VA 05fa0000 ('pc.ram')
region_add: 0x000a0000->0x000c0000 VA 00000000 ('vga-lowmem')
region_add: 0x000c0000->0x08000000 VA 06060000 ('pc.ram')
where the third MemoryRegion is MMIO and is ignored. The current
de-duplication logic handles the last MemoryRegion incorrectly and
produces the following result:
hax_mapping_dump_list updates:
+ 0x000c0000->0x08000000 VA 0x06060000
- 0x07fe0000->0x08000000 VA 0x0df80000
which is why VGA emulation does not work for Q35.
With this patch, one can see VGA output as Q35 boots up. Note that
Q35 support also requires a change to the HAXM kernel module, which is
not available in the current HAXM release (6.1.2).
+ Add a warning if the input MemoryRegion is a ROM device, which is
not supported by the HAXM kernel module at this time.
Signed-off-by: Yu Ning <yu.ning@linux.intel.com>
Message-Id: <20170428072723.7036-1-yu.ning@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch implements XML target description support for X86 and X86-64
architectures in the GDB stub, in the same way as ARM and PowerPC:
- gdb-xml/32bit-core.xml & gdb-xml/64bit-core.xml: Adding the XML target
description files, these files are picked from GDB source code.
- configure: Define gdb_xml_files for X86 targets.
- target/i386/cpu.c: Define gdb_core_xml_file and gdb_arch_name to add
XML awareness for this architecture, modify the gdb_num_core_regs to
fit the registers number defined in each XML file.
Signed-off-by: Abdallah Bouassida <abdallah.bouassida@lauterbach.com>
Message-Id: <2b3c8119-1602-28c7-eab4-296593877103@lauterbach.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The OpenRISC architecture has the Power Management Register (PMR)
special purpose register to manage cpu power states. The interesting
modes are:
* Doze Mode (DME) - Stop cpu except timer & pic - wake on interrupt
* Sleep Mode (SME) - Stop cpu and all units - wake on interrupt
* Suspend Mode (SUME) - Stop cpu and all units - wake on reset
The linux kernel will set DME when idle.
This patch implements the PMR SPR and halts the qemu cpu when there is a
change to DME or SME. This means that openrisc qemu no longer pegs
a host cpu at 100%.
In order for this to work we need to kick the CPU when timers are
expired. Update the cpu timer to kick the cpu upon each timer event.
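A hedged sketch of the SPR-write side (field and constant names are
illustrative and follow target/openrisc conventions; not the verbatim
patch):

    /* Inside the mtspr helper: writing DME or SME to PMR halts the vCPU
     * until the next interrupt instead of spinning in the idle loop. */
    case TO_SPR(8, 0):          /* PMR */
        env->pmr = rb;
        if (env->pmr & (PMR_DME | PMR_SME)) {
            cs->halted = 1;
            cs->exception_index = EXCP_HLT;
            cpu_loop_exit(cs);
        }
        break;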
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Stafford Horne <shorne@gmail.com>
The features property has stored the exact same thing as the cpucfgr
spr. Remove the feature enum and property as it is not needed.
In order to preserve the behavior of keeping features across reset, this
patch moves cpucfgr into the non-reset region of the state struct. Since
cpucfgr is read-only, this means we only need to set cpucfgr once
during class init.
Signed-off-by: Stafford Horne <shorne@gmail.com>
Previously serialization did not persist the tlb, timer, pic and other
key state items. This meant snapshotting and restoring a running os
would crash. After adding these I am able to take snapshots of a
running linux os and restore at a later time.
I am currently not trying to maintain compatibility with older versions
as I do not believe this really worked before or anyone used it.
Signed-off-by: Stafford Horne <shorne@gmail.com>
Shadow registers are part of the openrisc spec along with sr[cid], as
part of the fast context switching feature. When exceptions occur,
instead of having to save registers to the stack, the CID will, if the
feature is enabled, increment and a new set of registers will be available.
This patch only implements shadow registers which can be used as extra
scratch registers via the mfspr and mtspr if required. This is
implemented in a way where it would be easy to add on the fast context
switching, currently cid is hardcoded to 0.
This is needed for openrisc linux smp kernels to boot correctly.
Signed-off-by: Stafford Horne <shorne@gmail.com>
These are used to identify the processor in an SMP system. They have
been defined in verilog cores but are not yet part of the spec; they
will be soon.
The proposal for this is available:
https://openrisc.io/proposals/core-identifier-and-number-of-cores
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Stafford Horne <shorne@gmail.com>